00:00:00.002 Started by upstream project "autotest-per-patch" build number 122846 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.231 > git --version # 'git version 2.39.2' 00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.232 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.232 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.701 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.714 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.726 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:04.726 > git config core.sparsecheckout # timeout=10 00:00:04.738 > git read-tree -mu HEAD # timeout=10 00:00:04.755 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:04.780 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:04.780 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:04.961 [Pipeline] Start of Pipeline 00:00:04.988 [Pipeline] library 00:00:04.995 Loading library shm_lib@master 00:00:04.995 Library shm_lib@master is cached. Copying from home. 00:00:05.020 [Pipeline] node 00:00:20.031 Still waiting to schedule task 00:00:20.031 Waiting for next available executor on ‘vagrant-vm-host’ 00:06:54.250 Running on VM-host-SM4 in /var/jenkins/workspace/freebsd-vg-autotest 00:06:54.251 [Pipeline] { 00:06:54.263 [Pipeline] catchError 00:06:54.264 [Pipeline] { 00:06:54.278 [Pipeline] wrap 00:06:54.289 [Pipeline] { 00:06:54.296 [Pipeline] stage 00:06:54.298 [Pipeline] { (Prologue) 00:06:54.316 [Pipeline] echo 00:06:54.317 Node: VM-host-SM4 00:06:54.323 [Pipeline] cleanWs 00:06:54.334 [WS-CLEANUP] Deleting project workspace... 00:06:54.334 [WS-CLEANUP] Deferred wipeout is used... 
00:06:54.345 [WS-CLEANUP] done 00:06:54.518 [Pipeline] setCustomBuildProperty 00:06:54.596 [Pipeline] nodesByLabel 00:06:54.598 Found a total of 1 nodes with the 'sorcerer' label 00:06:54.610 [Pipeline] httpRequest 00:06:54.614 HttpMethod: GET 00:06:54.615 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:06:54.616 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:06:54.617 Response Code: HTTP/1.1 200 OK 00:06:54.618 Success: Status code 200 is in the accepted range: 200,404 00:06:54.618 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:06:54.757 [Pipeline] sh 00:06:55.036 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:06:55.055 [Pipeline] httpRequest 00:06:55.059 HttpMethod: GET 00:06:55.060 URL: http://10.211.164.101/packages/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:06:55.060 Sending request to url: http://10.211.164.101/packages/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:06:55.061 Response Code: HTTP/1.1 200 OK 00:06:55.062 Success: Status code 200 is in the accepted range: 200,404 00:06:55.062 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:06:57.189 [Pipeline] sh 00:06:57.466 + tar --no-same-owner -xf spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz 00:07:00.762 [Pipeline] sh 00:07:01.072 + git -C spdk log --oneline -n5 00:07:01.072 2dc74a001 raid: free base bdev earlier during removal 00:07:01.072 6518a98df raid: remove base_bdev_lock 00:07:01.072 96aff3c95 raid: fix some issues in raid_bdev_write_config_json() 00:07:01.072 f9cccaa84 raid: examine other bdevs when starting from superblock 00:07:01.072 688de1b9f raid: factor out a function to get a raid bdev by uuid 00:07:01.101 [Pipeline] writeFile 00:07:01.116 [Pipeline] sh 00:07:01.393 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:01.404 [Pipeline] sh 00:07:01.680 + cat autorun-spdk.conf 00:07:01.680 SPDK_TEST_UNITTEST=1 00:07:01.680 SPDK_RUN_VALGRIND=0 00:07:01.680 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:01.680 SPDK_TEST_NVME=1 00:07:01.680 SPDK_TEST_BLOCKDEV=1 00:07:01.680 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:01.687 RUN_NIGHTLY=0 00:07:01.689 [Pipeline] } 00:07:01.705 [Pipeline] // stage 00:07:01.720 [Pipeline] stage 00:07:01.722 [Pipeline] { (Run VM) 00:07:01.735 [Pipeline] sh 00:07:02.015 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:02.015 + echo 'Start stage prepare_nvme.sh' 00:07:02.015 Start stage prepare_nvme.sh 00:07:02.015 + [[ -n 9 ]] 00:07:02.015 + disk_prefix=ex9 00:07:02.015 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]] 00:07:02.015 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]] 00:07:02.015 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf 00:07:02.015 ++ SPDK_TEST_UNITTEST=1 00:07:02.015 ++ SPDK_RUN_VALGRIND=0 00:07:02.015 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:02.015 ++ SPDK_TEST_NVME=1 00:07:02.015 ++ SPDK_TEST_BLOCKDEV=1 00:07:02.015 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:02.015 ++ RUN_NIGHTLY=0 00:07:02.015 + cd /var/jenkins/workspace/freebsd-vg-autotest 00:07:02.015 + nvme_files=() 00:07:02.015 + declare -A nvme_files 00:07:02.015 + backend_dir=/var/lib/libvirt/images/backends 00:07:02.015 + nvme_files['nvme.img']=5G 00:07:02.015 + nvme_files['nvme-cmb.img']=5G 00:07:02.015 + 
nvme_files['nvme-multi0.img']=4G 00:07:02.015 + nvme_files['nvme-multi1.img']=4G 00:07:02.015 + nvme_files['nvme-multi2.img']=4G 00:07:02.015 + nvme_files['nvme-openstack.img']=8G 00:07:02.015 + nvme_files['nvme-zns.img']=5G 00:07:02.015 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:02.015 + (( SPDK_TEST_FTL == 1 )) 00:07:02.015 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:02.015 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:07:02.015 + for nvme in "${!nvme_files[@]}" 00:07:02.015 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:07:02.015 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:02.015 + for nvme in "${!nvme_files[@]}" 00:07:02.015 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:07:02.273 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:02.273 + for nvme in "${!nvme_files[@]}" 00:07:02.273 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:07:02.273 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:02.273 + for nvme in "${!nvme_files[@]}" 00:07:02.273 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:07:02.531 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:02.531 + for nvme in "${!nvme_files[@]}" 00:07:02.531 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:07:02.789 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:02.789 + for nvme in "${!nvme_files[@]}" 00:07:02.789 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:07:02.789 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:02.789 + for nvme in "${!nvme_files[@]}" 00:07:02.789 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:07:04.203 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:04.203 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:07:04.203 + echo 'End stage prepare_nvme.sh' 00:07:04.203 End stage prepare_nvme.sh 00:07:04.216 [Pipeline] sh 00:07:04.497 + DISTRO=freebsd13 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:04.497 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -H -a -v -f freebsd13 00:07:04.497 00:07:04.497 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant 00:07:04.497 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk 00:07:04.497 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest 00:07:04.497 HELP=0 00:07:04.498 DRY_RUN=0 00:07:04.498 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img, 00:07:04.498 NVME_DISKS_TYPE=nvme, 00:07:04.498 NVME_AUTO_CREATE=0 00:07:04.498 NVME_DISKS_NAMESPACES=, 00:07:04.498 NVME_CMB=, 00:07:04.498 NVME_PMR=, 00:07:04.498 NVME_ZNS=, 00:07:04.498 NVME_MS=, 00:07:04.498 
NVME_FDP=, 00:07:04.498 SPDK_VAGRANT_DISTRO=freebsd13 00:07:04.498 SPDK_VAGRANT_VMCPU=10 00:07:04.498 SPDK_VAGRANT_VMRAM=12288 00:07:04.498 SPDK_VAGRANT_PROVIDER=libvirt 00:07:04.498 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:07:04.498 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:04.498 SPDK_OPENSTACK_NETWORK=0 00:07:04.498 VAGRANT_PACKAGE_BOX=0 00:07:04.498 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:07:04.498 FORCE_DISTRO=true 00:07:04.498 VAGRANT_BOX_VERSION= 00:07:04.498 EXTRA_VAGRANTFILES= 00:07:04.498 NIC_MODEL=e1000 00:07:04.498 00:07:04.498 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt' 00:07:04.498 /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest 00:07:07.782 Bringing machine 'default' up with 'libvirt' provider... 00:07:08.716 ==> default: Creating image (snapshot of base box volume). 00:07:08.716 ==> default: Creating domain with the following settings... 00:07:08.716 ==> default: -- Name: freebsd13-13.2-RELEASE-1712646987-2220_default_1715738876_2fe0f68cafef99beb270 00:07:08.716 ==> default: -- Domain type: kvm 00:07:08.716 ==> default: -- Cpus: 10 00:07:08.716 ==> default: -- Feature: acpi 00:07:08.716 ==> default: -- Feature: apic 00:07:08.716 ==> default: -- Feature: pae 00:07:08.716 ==> default: -- Memory: 12288M 00:07:08.716 ==> default: -- Memory Backing: hugepages: 00:07:08.716 ==> default: -- Management MAC: 00:07:08.716 ==> default: -- Loader: 00:07:08.716 ==> default: -- Nvram: 00:07:08.716 ==> default: -- Base box: spdk/freebsd13 00:07:08.716 ==> default: -- Storage pool: default 00:07:08.717 ==> default: -- Image: /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1712646987-2220_default_1715738876_2fe0f68cafef99beb270.img (32G) 00:07:08.717 ==> default: -- Volume Cache: default 00:07:08.717 ==> default: -- Kernel: 00:07:08.717 ==> default: -- Initrd: 00:07:08.717 ==> default: -- Graphics Type: vnc 00:07:08.717 ==> default: -- Graphics Port: -1 00:07:08.717 ==> default: -- Graphics IP: 127.0.0.1 00:07:08.717 ==> default: -- Graphics Password: Not defined 00:07:08.717 ==> default: -- Video Type: cirrus 00:07:08.717 ==> default: -- Video VRAM: 9216 00:07:08.717 ==> default: -- Sound Type: 00:07:08.717 ==> default: -- Keymap: en-us 00:07:08.717 ==> default: -- TPM Path: 00:07:08.717 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:08.717 ==> default: -- Command line args: 00:07:08.717 ==> default: -> value=-device, 00:07:08.717 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:08.717 ==> default: -> value=-drive, 00:07:08.717 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0, 00:07:08.717 ==> default: -> value=-device, 00:07:08.717 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:08.975 ==> default: Creating shared folders metadata... 00:07:08.975 ==> default: Starting domain. 00:07:11.532 ==> default: Waiting for domain to get an IP address... 00:07:33.583 ==> default: Waiting for SSH to become available... 00:07:48.473 ==> default: Configuring and enabling network interfaces... 00:07:51.004 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:03.203 ==> default: Mounting SSHFS shared folder... 
00:08:03.203 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output 00:08:03.203 ==> default: Checking Mount.. 00:08:03.766 ==> default: Folder Successfully Mounted! 00:08:03.766 ==> default: Running provisioner: file... 00:08:04.022 default: ~/.gitconfig => .gitconfig 00:08:04.587 00:08:04.587 SUCCESS! 00:08:04.587 00:08:04.587 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt and type "vagrant ssh" to use. 00:08:04.587 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:04.587 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt" to destroy all trace of vm. 00:08:04.587 00:08:04.596 [Pipeline] } 00:08:04.614 [Pipeline] // stage 00:08:04.623 [Pipeline] dir 00:08:04.624 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt 00:08:04.625 [Pipeline] { 00:08:04.640 [Pipeline] catchError 00:08:04.641 [Pipeline] { 00:08:04.655 [Pipeline] sh 00:08:04.935 + vagrant ssh-config --host vagrant 00:08:04.936 + sed -ne /^Host/,$p 00:08:04.936 + tee ssh_conf 00:08:09.119 Host vagrant 00:08:09.119 HostName 192.168.121.249 00:08:09.119 User vagrant 00:08:09.119 Port 22 00:08:09.119 UserKnownHostsFile /dev/null 00:08:09.119 StrictHostKeyChecking no 00:08:09.119 PasswordAuthentication no 00:08:09.119 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1712646987-2220/libvirt/freebsd13 00:08:09.119 IdentitiesOnly yes 00:08:09.119 LogLevel FATAL 00:08:09.119 ForwardAgent yes 00:08:09.119 ForwardX11 yes 00:08:09.119 00:08:09.132 [Pipeline] withEnv 00:08:09.135 [Pipeline] { 00:08:09.150 [Pipeline] sh 00:08:09.430 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:09.430 source /etc/os-release 00:08:09.430 [[ -e /image.version ]] && img=$(< /image.version) 00:08:09.430 # Minimal, systemd-like check. 00:08:09.430 if [[ -e /.dockerenv ]]; then 00:08:09.430 # Clear garbage from the node's name: 00:08:09.430 # agt-er_autotest_547-896 -> autotest_547-896 00:08:09.430 # $HOSTNAME is the actual container id 00:08:09.430 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:09.430 if mountpoint -q /etc/hostname; then 00:08:09.430 # We can assume this is a mount from a host where container is running, 00:08:09.430 # so fetch its hostname to easily identify the target swarm worker. 
00:08:09.430 container="$(< /etc/hostname) ($agent)" 00:08:09.430 else 00:08:09.430 # Fallback 00:08:09.430 container=$agent 00:08:09.430 fi 00:08:09.430 fi 00:08:09.430 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:09.430 00:08:09.441 [Pipeline] } 00:08:09.461 [Pipeline] // withEnv 00:08:09.472 [Pipeline] setCustomBuildProperty 00:08:09.486 [Pipeline] stage 00:08:09.488 [Pipeline] { (Tests) 00:08:09.507 [Pipeline] sh 00:08:09.787 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:09.807 [Pipeline] timeout 00:08:09.807 Timeout set to expire in 1 hr 0 min 00:08:09.809 [Pipeline] { 00:08:09.824 [Pipeline] sh 00:08:10.105 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:10.671 HEAD is now at 2dc74a001 raid: free base bdev earlier during removal 00:08:10.687 [Pipeline] sh 00:08:10.967 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:10.980 [Pipeline] sh 00:08:11.262 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:11.310 [Pipeline] sh 00:08:11.621 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang ./autoruner.sh spdk_repo 00:08:11.621 ++ readlink -f spdk_repo 00:08:11.621 + DIR_ROOT=/usr/home/vagrant/spdk_repo 00:08:11.621 + [[ -n /usr/home/vagrant/spdk_repo ]] 00:08:11.621 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk 00:08:11.621 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output 00:08:11.621 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]] 00:08:11.621 + [[ ! -d /usr/home/vagrant/spdk_repo/output ]] 00:08:11.621 + [[ -d /usr/home/vagrant/spdk_repo/output ]] 00:08:11.621 + cd /usr/home/vagrant/spdk_repo 00:08:11.621 + source /etc/os-release 00:08:11.621 ++ NAME=FreeBSD 00:08:11.621 ++ VERSION=13.2-RELEASE 00:08:11.621 ++ VERSION_ID=13.2 00:08:11.621 ++ ID=freebsd 00:08:11.621 ++ ANSI_COLOR='0;31' 00:08:11.621 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE' 00:08:11.621 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2 00:08:11.621 ++ HOME_URL=https://FreeBSD.org/ 00:08:11.621 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:08:11.621 + uname -a 00:08:11.621 FreeBSD freebsd-cloud-1712646987-2220.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64 00:08:11.621 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:11.879 Contigmem (not present) 00:08:11.879 Buffer Size: not set 00:08:11.879 Num Buffers: not set 00:08:11.879 00:08:11.879 00:08:11.879 Type BDF Vendor Device Driver 00:08:11.879 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:08:11.879 + rm -f /tmp/spdk-ld-path 00:08:11.879 + source autorun-spdk.conf 00:08:11.879 ++ SPDK_TEST_UNITTEST=1 00:08:11.879 ++ SPDK_RUN_VALGRIND=0 00:08:11.879 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:11.879 ++ SPDK_TEST_NVME=1 00:08:11.879 ++ SPDK_TEST_BLOCKDEV=1 00:08:11.879 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:11.879 ++ RUN_NIGHTLY=0 00:08:11.879 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:11.879 + [[ -n '' ]] 00:08:11.879 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk 00:08:11.879 + for M in /var/spdk/build-*-manifest.txt 00:08:11.879 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:11.879 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:08:11.879 + for M in /var/spdk/build-*-manifest.txt 00:08:11.879 + [[ -f /var/spdk/build-repo-manifest.txt 
]] 00:08:11.879 + cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:08:11.879 ++ uname 00:08:11.879 + [[ FreeBSD == \L\i\n\u\x ]] 00:08:11.879 + dmesg_pid=1268 00:08:11.879 + [[ FreeBSD == FreeBSD ]] 00:08:11.879 + export LC_ALL=C LC_CTYPE=C 00:08:11.879 + LC_ALL=C 00:08:11.879 + LC_CTYPE=C 00:08:11.879 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.879 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.879 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:11.879 + tail -F /var/log/messages 00:08:11.879 + [[ -x /usr/src/fio-static/fio ]] 00:08:11.879 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:11.879 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:11.879 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:11.879 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:08:11.879 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:08:11.879 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:08:11.879 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:11.879 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf 00:08:11.879 Test configuration: 00:08:11.879 SPDK_TEST_UNITTEST=1 00:08:11.879 SPDK_RUN_VALGRIND=0 00:08:11.879 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:11.879 SPDK_TEST_NVME=1 00:08:11.879 SPDK_TEST_BLOCKDEV=1 00:08:11.879 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:12.137 RUN_NIGHTLY=0 02:08:59 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.137 02:08:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:12.137 02:08:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.137 02:08:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.137 02:08:59 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:08:12.137 02:08:59 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:08:12.137 02:08:59 -- paths/export.sh@4 -- $ export PATH 00:08:12.137 02:08:59 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:08:12.137 02:08:59 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:08:12.137 02:08:59 -- common/autobuild_common.sh@437 -- $ date +%s 00:08:12.137 02:08:59 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715738939.XXXXXX 00:08:12.137 02:08:59 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715738939.XXXXXX.SsyTpEGR 00:08:12.137 02:08:59 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:08:12.137 02:08:59 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:08:12.137 02:08:59 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:08:12.137 02:08:59 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:12.137 02:08:59 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 
00:08:12.137 02:08:59 -- common/autobuild_common.sh@453 -- $ get_config_params 00:08:12.137 02:08:59 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:08:12.137 02:08:59 -- common/autotest_common.sh@10 -- $ set +x 00:08:12.137 02:09:00 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:08:12.137 02:09:00 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:08:12.137 02:09:00 -- pm/common@17 -- $ local monitor 00:08:12.137 02:09:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.137 02:09:00 -- pm/common@25 -- $ sleep 1 00:08:12.137 02:09:00 -- pm/common@21 -- $ date +%s 00:08:12.137 02:09:00 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715738940 00:08:12.137 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715738940_collect-vmstat.pm.log 00:08:13.513 02:09:01 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:08:13.513 02:09:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:13.513 02:09:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:13.513 02:09:01 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:08:13.513 02:09:01 -- spdk/autobuild.sh@16 -- $ date -u 00:08:13.513 Wed May 15 02:09:01 UTC 2024 00:08:13.513 02:09:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:13.513 v24.05-pre-653-g2dc74a001 00:08:13.513 02:09:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:08:13.513 02:09:01 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:08:13.513 02:09:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:08:13.513 02:09:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:08:13.513 02:09:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:08:13.513 02:09:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:13.513 02:09:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:13.513 02:09:01 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:08:13.513 02:09:01 -- spdk/autobuild.sh@58 -- $ unittest_build 00:08:13.513 02:09:01 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:08:13.513 02:09:01 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:08:13.513 02:09:01 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:08:13.513 02:09:01 -- common/autotest_common.sh@10 -- $ set +x 00:08:13.513 ************************************ 00:08:13.513 START TEST unittest_build 00:08:13.513 ************************************ 00:08:13.513 02:09:01 unittest_build -- common/autotest_common.sh@1121 -- $ _unittest_build 00:08:13.513 02:09:01 unittest_build -- common/autobuild_common.sh@404 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:08:14.453 Notice: Vhost, rte_vhost library, virtio, and fuse 00:08:14.453 are only supported on Linux. Turning off default feature. 00:08:14.453 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:14.453 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:15.018 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:08:15.276 Using 'verbs' RDMA provider 00:08:27.739 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:08:37.707 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 
00:08:37.707 Creating mk/config.mk...done. 00:08:37.707 Creating mk/cc.flags.mk...done. 00:08:37.707 Type 'gmake' to build. 00:08:37.707 02:09:25 unittest_build -- common/autobuild_common.sh@405 -- $ gmake -j10 00:08:37.707 gmake[1]: Nothing to be done for 'all'. 00:08:41.892 ps: stdin: not a terminal 00:08:46.079 The Meson build system 00:08:46.079 Version: 1.3.1 00:08:46.079 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:08:46.079 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:08:46.079 Build type: native build 00:08:46.079 Program cat found: YES (/bin/cat) 00:08:46.079 Project name: DPDK 00:08:46.079 Project version: 23.11.0 00:08:46.079 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:08:46.079 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:08:46.079 Host machine cpu family: x86_64 00:08:46.079 Host machine cpu: x86_64 00:08:46.079 Message: ## Building in Developer Mode ## 00:08:46.079 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:08:46.079 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:08:46.079 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:46.079 Program python3 found: YES (/usr/local/bin/python3.9) 00:08:46.079 Program cat found: YES (/bin/cat) 00:08:46.079 Compiler for C supports arguments -march=native: YES 00:08:46.079 Checking for size of "void *" : 8 00:08:46.079 Checking for size of "void *" : 8 (cached) 00:08:46.079 Library m found: YES 00:08:46.079 Library numa found: NO 00:08:46.079 Library fdt found: NO 00:08:46.079 Library execinfo found: YES 00:08:46.079 Has header "execinfo.h" : YES 00:08:46.079 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:08:46.079 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:46.079 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:46.079 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:46.079 Run-time dependency openssl found: YES 3.0.13 00:08:46.079 Run-time dependency libpcap found: NO (tried pkgconfig) 00:08:46.079 Library pcap found: YES 00:08:46.079 Has header "pcap.h" with dependency -lpcap: YES 00:08:46.079 Compiler for C supports arguments -Wcast-qual: YES 00:08:46.079 Compiler for C supports arguments -Wdeprecated: YES 00:08:46.079 Compiler for C supports arguments -Wformat: YES 00:08:46.079 Compiler for C supports arguments -Wformat-nonliteral: YES 00:08:46.079 Compiler for C supports arguments -Wformat-security: YES 00:08:46.079 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:46.079 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:46.079 Compiler for C supports arguments -Wnested-externs: YES 00:08:46.079 Compiler for C supports arguments -Wold-style-definition: YES 00:08:46.079 Compiler for C supports arguments -Wpointer-arith: YES 00:08:46.079 Compiler for C supports arguments -Wsign-compare: YES 00:08:46.079 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:46.079 Compiler for C supports arguments -Wundef: YES 00:08:46.079 Compiler for C supports arguments -Wwrite-strings: YES 00:08:46.079 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:46.079 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:08:46.079 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:08:46.079 Compiler for C supports arguments -mavx512f: YES 00:08:46.079 Checking if "AVX512 checking" compiles: YES 00:08:46.079 Fetching value of define "__SSE4_2__" : 1 00:08:46.079 Fetching value of define "__AES__" : 1 00:08:46.079 Fetching value of define "__AVX__" : 1 00:08:46.079 Fetching value of define "__AVX2__" : 1 00:08:46.079 Fetching value of define "__AVX512BW__" : 1 00:08:46.079 Fetching value of define "__AVX512CD__" : 1 00:08:46.079 Fetching value of define "__AVX512DQ__" : 1 00:08:46.079 Fetching value of define "__AVX512F__" : 1 00:08:46.079 Fetching value of define "__AVX512VL__" : 1 00:08:46.079 Fetching value of define "__PCLMUL__" : 1 00:08:46.079 Fetching value of define "__RDRND__" : 1 00:08:46.079 Fetching value of define "__RDSEED__" : 1 00:08:46.079 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:46.079 Fetching value of define "__znver1__" : (undefined) 00:08:46.079 Fetching value of define "__znver2__" : (undefined) 00:08:46.079 Fetching value of define "__znver3__" : (undefined) 00:08:46.079 Fetching value of define "__znver4__" : (undefined) 00:08:46.079 Compiler for C supports arguments -Wno-format-truncation: NO 00:08:46.079 Message: lib/log: Defining dependency "log" 00:08:46.079 Message: lib/kvargs: Defining dependency "kvargs" 00:08:46.079 Message: lib/telemetry: Defining dependency "telemetry" 00:08:46.080 Checking if "Detect argument count for CPU_OR" compiles: YES 00:08:46.080 Checking for function "getentropy" : YES 00:08:46.080 Message: lib/eal: Defining dependency "eal" 00:08:46.080 Message: lib/ring: Defining dependency "ring" 00:08:46.080 Message: lib/rcu: Defining dependency "rcu" 00:08:46.080 Message: lib/mempool: Defining dependency "mempool" 00:08:46.080 Message: lib/mbuf: Defining dependency "mbuf" 00:08:46.080 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:46.080 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:46.080 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:46.080 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:46.080 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:46.080 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:08:46.080 Compiler for C supports arguments -mpclmul: YES 00:08:46.080 Compiler for C supports arguments -maes: YES 00:08:46.080 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:46.080 Compiler for C supports arguments -mavx512bw: YES 00:08:46.080 Compiler for C supports arguments -mavx512dq: YES 00:08:46.080 Compiler for C supports arguments -mavx512vl: YES 00:08:46.080 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:46.080 Compiler for C supports arguments -mavx2: YES 00:08:46.080 Compiler for C supports arguments -mavx: YES 00:08:46.080 Message: lib/net: Defining dependency "net" 00:08:46.080 Message: lib/meter: Defining dependency "meter" 00:08:46.080 Message: lib/ethdev: Defining dependency "ethdev" 00:08:46.080 Message: lib/pci: Defining dependency "pci" 00:08:46.080 Message: lib/cmdline: Defining dependency "cmdline" 00:08:46.080 Message: lib/hash: Defining dependency "hash" 00:08:46.080 Message: lib/timer: Defining dependency "timer" 00:08:46.080 Message: lib/compressdev: Defining dependency "compressdev" 00:08:46.080 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:46.080 Message: lib/dmadev: Defining dependency "dmadev" 00:08:46.080 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:46.080 Message: lib/reorder: Defining dependency "reorder" 00:08:46.080 Message: 
lib/security: Defining dependency "security" 00:08:46.080 Has header "linux/userfaultfd.h" : NO 00:08:46.080 Has header "linux/vduse.h" : NO 00:08:46.080 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:08:46.080 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:46.080 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:46.080 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:46.080 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:46.080 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:46.080 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:46.080 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:08:46.080 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:46.080 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:46.080 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:46.080 Program doxygen found: YES (/usr/local/bin/doxygen) 00:08:46.080 Configuring doxy-api-html.conf using configuration 00:08:46.080 Configuring doxy-api-man.conf using configuration 00:08:46.080 Program mandb found: NO 00:08:46.080 Program sphinx-build found: NO 00:08:46.080 Configuring rte_build_config.h using configuration 00:08:46.080 Message: 00:08:46.080 ================= 00:08:46.080 Applications Enabled 00:08:46.080 ================= 00:08:46.080 00:08:46.080 apps: 00:08:46.080 00:08:46.080 00:08:46.080 Message: 00:08:46.080 ================= 00:08:46.080 Libraries Enabled 00:08:46.080 ================= 00:08:46.080 00:08:46.080 libs: 00:08:46.080 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:46.080 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:46.080 cryptodev, dmadev, reorder, security, 00:08:46.080 00:08:46.080 Message: 00:08:46.080 =============== 00:08:46.080 Drivers Enabled 00:08:46.080 =============== 00:08:46.080 00:08:46.080 common: 00:08:46.080 00:08:46.080 bus: 00:08:46.080 pci, vdev, 00:08:46.080 mempool: 00:08:46.080 ring, 00:08:46.080 dma: 00:08:46.080 00:08:46.080 net: 00:08:46.080 00:08:46.080 crypto: 00:08:46.080 00:08:46.080 compress: 00:08:46.080 00:08:46.080 00:08:46.080 Message: 00:08:46.080 ================= 00:08:46.080 Content Skipped 00:08:46.080 ================= 00:08:46.080 00:08:46.080 apps: 00:08:46.080 dumpcap: explicitly disabled via build config 00:08:46.080 graph: explicitly disabled via build config 00:08:46.080 pdump: explicitly disabled via build config 00:08:46.080 proc-info: explicitly disabled via build config 00:08:46.080 test-acl: explicitly disabled via build config 00:08:46.080 test-bbdev: explicitly disabled via build config 00:08:46.080 test-cmdline: explicitly disabled via build config 00:08:46.080 test-compress-perf: explicitly disabled via build config 00:08:46.080 test-crypto-perf: explicitly disabled via build config 00:08:46.080 test-dma-perf: explicitly disabled via build config 00:08:46.080 test-eventdev: explicitly disabled via build config 00:08:46.080 test-fib: explicitly disabled via build config 00:08:46.080 test-flow-perf: explicitly disabled via build config 00:08:46.080 test-gpudev: explicitly disabled via build config 00:08:46.080 test-mldev: explicitly disabled via build config 00:08:46.080 test-pipeline: explicitly disabled via build config 00:08:46.080 test-pmd: explicitly disabled via build config 00:08:46.080 test-regex: explicitly disabled via build 
config 00:08:46.080 test-sad: explicitly disabled via build config 00:08:46.080 test-security-perf: explicitly disabled via build config 00:08:46.080 00:08:46.080 libs: 00:08:46.080 metrics: explicitly disabled via build config 00:08:46.080 acl: explicitly disabled via build config 00:08:46.080 bbdev: explicitly disabled via build config 00:08:46.080 bitratestats: explicitly disabled via build config 00:08:46.080 bpf: explicitly disabled via build config 00:08:46.080 cfgfile: explicitly disabled via build config 00:08:46.080 distributor: explicitly disabled via build config 00:08:46.080 efd: explicitly disabled via build config 00:08:46.080 eventdev: explicitly disabled via build config 00:08:46.080 dispatcher: explicitly disabled via build config 00:08:46.080 gpudev: explicitly disabled via build config 00:08:46.080 gro: explicitly disabled via build config 00:08:46.080 gso: explicitly disabled via build config 00:08:46.080 ip_frag: explicitly disabled via build config 00:08:46.080 jobstats: explicitly disabled via build config 00:08:46.080 latencystats: explicitly disabled via build config 00:08:46.080 lpm: explicitly disabled via build config 00:08:46.080 member: explicitly disabled via build config 00:08:46.080 pcapng: explicitly disabled via build config 00:08:46.080 power: only supported on Linux 00:08:46.080 rawdev: explicitly disabled via build config 00:08:46.080 regexdev: explicitly disabled via build config 00:08:46.080 mldev: explicitly disabled via build config 00:08:46.080 rib: explicitly disabled via build config 00:08:46.080 sched: explicitly disabled via build config 00:08:46.080 stack: explicitly disabled via build config 00:08:46.080 vhost: only supported on Linux 00:08:46.080 ipsec: explicitly disabled via build config 00:08:46.080 pdcp: explicitly disabled via build config 00:08:46.080 fib: explicitly disabled via build config 00:08:46.080 port: explicitly disabled via build config 00:08:46.080 pdump: explicitly disabled via build config 00:08:46.080 table: explicitly disabled via build config 00:08:46.080 pipeline: explicitly disabled via build config 00:08:46.080 graph: explicitly disabled via build config 00:08:46.080 node: explicitly disabled via build config 00:08:46.080 00:08:46.080 drivers: 00:08:46.080 common/cpt: not in enabled drivers build config 00:08:46.080 common/dpaax: not in enabled drivers build config 00:08:46.080 common/iavf: not in enabled drivers build config 00:08:46.080 common/idpf: not in enabled drivers build config 00:08:46.080 common/mvep: not in enabled drivers build config 00:08:46.080 common/octeontx: not in enabled drivers build config 00:08:46.080 bus/auxiliary: not in enabled drivers build config 00:08:46.080 bus/cdx: not in enabled drivers build config 00:08:46.080 bus/dpaa: not in enabled drivers build config 00:08:46.080 bus/fslmc: not in enabled drivers build config 00:08:46.080 bus/ifpga: not in enabled drivers build config 00:08:46.080 bus/platform: not in enabled drivers build config 00:08:46.080 bus/vmbus: not in enabled drivers build config 00:08:46.080 common/cnxk: not in enabled drivers build config 00:08:46.080 common/mlx5: not in enabled drivers build config 00:08:46.080 common/nfp: not in enabled drivers build config 00:08:46.080 common/qat: not in enabled drivers build config 00:08:46.080 common/sfc_efx: not in enabled drivers build config 00:08:46.080 mempool/bucket: not in enabled drivers build config 00:08:46.080 mempool/cnxk: not in enabled drivers build config 00:08:46.080 mempool/dpaa: not in enabled drivers build 
config 00:08:46.080 mempool/dpaa2: not in enabled drivers build config 00:08:46.080 mempool/octeontx: not in enabled drivers build config 00:08:46.080 mempool/stack: not in enabled drivers build config 00:08:46.080 dma/cnxk: not in enabled drivers build config 00:08:46.080 dma/dpaa: not in enabled drivers build config 00:08:46.080 dma/dpaa2: not in enabled drivers build config 00:08:46.080 dma/hisilicon: not in enabled drivers build config 00:08:46.080 dma/idxd: not in enabled drivers build config 00:08:46.080 dma/ioat: not in enabled drivers build config 00:08:46.080 dma/skeleton: not in enabled drivers build config 00:08:46.080 net/af_packet: not in enabled drivers build config 00:08:46.080 net/af_xdp: not in enabled drivers build config 00:08:46.080 net/ark: not in enabled drivers build config 00:08:46.080 net/atlantic: not in enabled drivers build config 00:08:46.080 net/avp: not in enabled drivers build config 00:08:46.080 net/axgbe: not in enabled drivers build config 00:08:46.080 net/bnx2x: not in enabled drivers build config 00:08:46.080 net/bnxt: not in enabled drivers build config 00:08:46.080 net/bonding: not in enabled drivers build config 00:08:46.080 net/cnxk: not in enabled drivers build config 00:08:46.080 net/cpfl: not in enabled drivers build config 00:08:46.080 net/cxgbe: not in enabled drivers build config 00:08:46.080 net/dpaa: not in enabled drivers build config 00:08:46.080 net/dpaa2: not in enabled drivers build config 00:08:46.080 net/e1000: not in enabled drivers build config 00:08:46.080 net/ena: not in enabled drivers build config 00:08:46.081 net/enetc: not in enabled drivers build config 00:08:46.081 net/enetfec: not in enabled drivers build config 00:08:46.081 net/enic: not in enabled drivers build config 00:08:46.081 net/failsafe: not in enabled drivers build config 00:08:46.081 net/fm10k: not in enabled drivers build config 00:08:46.081 net/gve: not in enabled drivers build config 00:08:46.081 net/hinic: not in enabled drivers build config 00:08:46.081 net/hns3: not in enabled drivers build config 00:08:46.081 net/i40e: not in enabled drivers build config 00:08:46.081 net/iavf: not in enabled drivers build config 00:08:46.081 net/ice: not in enabled drivers build config 00:08:46.081 net/idpf: not in enabled drivers build config 00:08:46.081 net/igc: not in enabled drivers build config 00:08:46.081 net/ionic: not in enabled drivers build config 00:08:46.081 net/ipn3ke: not in enabled drivers build config 00:08:46.081 net/ixgbe: not in enabled drivers build config 00:08:46.081 net/mana: not in enabled drivers build config 00:08:46.081 net/memif: not in enabled drivers build config 00:08:46.081 net/mlx4: not in enabled drivers build config 00:08:46.081 net/mlx5: not in enabled drivers build config 00:08:46.081 net/mvneta: not in enabled drivers build config 00:08:46.081 net/mvpp2: not in enabled drivers build config 00:08:46.081 net/netvsc: not in enabled drivers build config 00:08:46.081 net/nfb: not in enabled drivers build config 00:08:46.081 net/nfp: not in enabled drivers build config 00:08:46.081 net/ngbe: not in enabled drivers build config 00:08:46.081 net/null: not in enabled drivers build config 00:08:46.081 net/octeontx: not in enabled drivers build config 00:08:46.081 net/octeon_ep: not in enabled drivers build config 00:08:46.081 net/pcap: not in enabled drivers build config 00:08:46.081 net/pfe: not in enabled drivers build config 00:08:46.081 net/qede: not in enabled drivers build config 00:08:46.081 net/ring: not in enabled drivers build config 
00:08:46.081 net/sfc: not in enabled drivers build config 00:08:46.081 net/softnic: not in enabled drivers build config 00:08:46.081 net/tap: not in enabled drivers build config 00:08:46.081 net/thunderx: not in enabled drivers build config 00:08:46.081 net/txgbe: not in enabled drivers build config 00:08:46.081 net/vdev_netvsc: not in enabled drivers build config 00:08:46.081 net/vhost: not in enabled drivers build config 00:08:46.081 net/virtio: not in enabled drivers build config 00:08:46.081 net/vmxnet3: not in enabled drivers build config 00:08:46.081 raw/*: missing internal dependency, "rawdev" 00:08:46.081 crypto/armv8: not in enabled drivers build config 00:08:46.081 crypto/bcmfs: not in enabled drivers build config 00:08:46.081 crypto/caam_jr: not in enabled drivers build config 00:08:46.081 crypto/ccp: not in enabled drivers build config 00:08:46.081 crypto/cnxk: not in enabled drivers build config 00:08:46.081 crypto/dpaa_sec: not in enabled drivers build config 00:08:46.081 crypto/dpaa2_sec: not in enabled drivers build config 00:08:46.081 crypto/ipsec_mb: not in enabled drivers build config 00:08:46.081 crypto/mlx5: not in enabled drivers build config 00:08:46.081 crypto/mvsam: not in enabled drivers build config 00:08:46.081 crypto/nitrox: not in enabled drivers build config 00:08:46.081 crypto/null: not in enabled drivers build config 00:08:46.081 crypto/octeontx: not in enabled drivers build config 00:08:46.081 crypto/openssl: not in enabled drivers build config 00:08:46.081 crypto/scheduler: not in enabled drivers build config 00:08:46.081 crypto/uadk: not in enabled drivers build config 00:08:46.081 crypto/virtio: not in enabled drivers build config 00:08:46.081 compress/isal: not in enabled drivers build config 00:08:46.081 compress/mlx5: not in enabled drivers build config 00:08:46.081 compress/octeontx: not in enabled drivers build config 00:08:46.081 compress/zlib: not in enabled drivers build config 00:08:46.081 regex/*: missing internal dependency, "regexdev" 00:08:46.081 ml/*: missing internal dependency, "mldev" 00:08:46.081 vdpa/*: missing internal dependency, "vhost" 00:08:46.081 event/*: missing internal dependency, "eventdev" 00:08:46.081 baseband/*: missing internal dependency, "bbdev" 00:08:46.081 gpu/*: missing internal dependency, "gpudev" 00:08:46.081 00:08:46.081 00:08:46.081 Build targets in project: 81 00:08:46.081 00:08:46.081 DPDK 23.11.0 00:08:46.081 00:08:46.081 User defined options 00:08:46.081 buildtype : debug 00:08:46.081 default_library : static 00:08:46.081 libdir : lib 00:08:46.081 prefix : / 00:08:46.081 c_args : -fPIC -Werror 00:08:46.081 c_link_args : 00:08:46.081 cpu_instruction_set: native 00:08:46.081 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:08:46.081 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:08:46.081 enable_docs : false 00:08:46.081 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:08:46.081 enable_kmods : true 00:08:46.081 tests : false 00:08:46.081 00:08:46.081 Found ninja-1.11.1 at /usr/local/bin/ninja 00:08:46.647 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:08:46.647 
[1/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:46.647 [2/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:46.647 [3/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:46.647 [4/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:46.647 [5/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:46.647 [6/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:08:46.647 [7/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:46.647 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:46.907 [9/231] Linking static target lib/librte_log.a 00:08:46.907 [10/231] Linking static target lib/librte_kvargs.a 00:08:46.907 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:46.907 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:46.907 [13/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:46.907 [14/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:46.907 [15/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:46.907 [16/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:46.907 [17/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:47.168 [18/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:47.168 [19/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:47.168 [20/231] Linking static target lib/librte_telemetry.a 00:08:47.168 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:47.168 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:47.168 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:47.168 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:47.426 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:47.426 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:47.426 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:47.426 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:47.426 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:47.426 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:47.426 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:47.426 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:47.426 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:47.426 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:47.426 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:47.426 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:47.686 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:47.686 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:47.686 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:47.686 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:08:47.687 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:47.687 [42/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:47.687 [43/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:47.687 [44/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.687 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:47.687 [46/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:47.687 [47/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:47.955 [48/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:47.955 [49/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:47.955 [50/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:47.955 [51/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:47.955 [52/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:08:47.955 [53/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:47.955 [54/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:08:47.955 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:47.955 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:47.955 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:47.955 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:47.955 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:48.212 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:48.213 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:08:48.213 [62/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:48.213 [63/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:08:48.213 [64/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:48.213 [65/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:08:48.213 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:08:48.213 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:08:48.213 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:08:48.213 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:08:48.471 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:08:48.471 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:08:48.471 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:48.471 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:48.471 [74/231] Linking static target lib/librte_eal.a 00:08:48.471 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:48.471 [76/231] Linking static target lib/librte_ring.a 00:08:48.747 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:48.747 [78/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:48.747 [79/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:48.747 [80/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:48.747 [81/231] Linking static target lib/librte_rcu.a 00:08:48.747 
[82/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:48.747 [83/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:48.747 [84/231] Linking static target lib/librte_mempool.a 00:08:48.747 [85/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:48.747 [86/231] Linking target lib/librte_log.so.24.0 00:08:48.747 [87/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:48.747 [88/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:48.747 [89/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:48.747 [90/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:48.747 [91/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:08:49.012 [92/231] Linking target lib/librte_kvargs.so.24.0 00:08:49.012 [93/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:49.012 [94/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:49.012 [95/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.012 [96/231] Linking static target lib/librte_mbuf.a 00:08:49.012 [97/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:49.012 [98/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:08:49.012 [99/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:49.012 [100/231] Linking target lib/librte_telemetry.so.24.0 00:08:49.012 [101/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:49.012 [102/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:49.012 [103/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:49.012 [104/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:49.271 [105/231] Linking static target lib/librte_net.a 00:08:49.271 [106/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:49.271 [107/231] Linking static target lib/librte_meter.a 00:08:49.271 [108/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:08:49.271 [109/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.530 [110/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:49.530 [111/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.530 [112/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:49.530 [113/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:49.530 [114/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.530 [115/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:49.789 [116/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:49.789 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:49.789 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:49.789 [119/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:49.789 [120/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:49.789 [121/231] Linking static target lib/librte_pci.a 00:08:50.053 [122/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
00:08:50.053 [123/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:50.053 [124/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:50.053 [125/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:50.053 [126/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:50.053 [127/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:50.053 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:50.053 [129/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:50.053 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:50.053 [131/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:50.312 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:50.313 [133/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:50.313 [134/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:50.313 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:50.313 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:50.313 [137/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:50.313 [138/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:50.313 [139/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:50.313 [140/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:50.313 [141/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:50.313 [142/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:50.313 [143/231] Linking static target lib/librte_ethdev.a 00:08:50.313 [144/231] Linking static target lib/librte_cmdline.a 00:08:50.572 [145/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:50.572 [146/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:50.572 [147/231] Linking static target lib/librte_timer.a 00:08:50.572 [148/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:50.831 [149/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:50.831 [150/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:50.831 [151/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:50.831 [152/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:51.090 [153/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:51.090 [154/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:51.090 [155/231] Linking static target lib/librte_compressdev.a 00:08:51.090 [156/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.090 [157/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:51.090 [158/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:51.090 [159/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:51.090 [160/231] Linking static target lib/librte_dmadev.a 00:08:51.090 [161/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:51.090 [162/231] Linking static target lib/librte_hash.a 
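Each [n/231] line above is one ninja build edge from the DPDK meson tree; the log prints only the short description, not the full compiler invocation. If the exact command behind any edge is needed, ninja can report it directly — a sketch, pointed at the build-tmp directory that appears later in this log, with the cuckoo-hash object chosen purely as an example:

    ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -t commands lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o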
00:08:51.348 [163/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:51.348 [164/231] Linking static target lib/librte_security.a 00:08:51.348 [165/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:51.348 [166/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:51.348 [167/231] Linking static target lib/librte_reorder.a 00:08:51.348 [168/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:51.348 [169/231] Linking static target lib/librte_cryptodev.a 00:08:51.606 [170/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.606 [171/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:51.606 [172/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:51.606 [173/231] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.606 [174/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.606 [175/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.606 [176/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:08:51.606 [177/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:51.606 [178/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:51.607 [179/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:51.607 [180/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.607 [181/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.865 [182/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:51.865 [183/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:51.865 [184/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:52.123 [185/231] Linking static target drivers/librte_bus_vdev.a 00:08:52.123 [186/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:52.123 [187/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:52.123 [188/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:52.124 [189/231] Linking static target drivers/librte_bus_pci.a 00:08:52.124 [190/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:52.124 [191/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:52.124 [192/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.382 [193/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.382 [194/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:52.382 [195/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:52.640 [196/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:52.641 [197/231] Linking static target drivers/librte_mempool_ring.a 00:08:52.641 [198/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:53.208 [199/231] Generating kernel/freebsd/contigmem 
with a custom command 00:08:53.208 machine -> /usr/src/sys/amd64/include 00:08:53.208 x86 -> /usr/src/sys/x86/include 00:08:53.208 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:08:53.208 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:08:53.208 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:08:53.208 touch opt_global.h 00:08:53.209 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:08:53.209 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:08:53.209 :> export_syms 00:08:53.209 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:08:53.209 objcopy --strip-debug contigmem.ko 00:08:53.468 [200/231] Generating kernel/freebsd/nic_uio with a custom command 00:08:53.468 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:08:53.468 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:08:53.468 :> export_syms 00:08:53.468 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:08:53.468 objcopy --strip-debug nic_uio.ko 00:08:56.042 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:00.275 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:00.275 [203/231] Linking target lib/librte_eal.so.24.0 00:09:00.275 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:09:00.275 [205/231] Linking target lib/librte_timer.so.24.0 00:09:00.275 [206/231] Linking target lib/librte_dmadev.so.24.0 00:09:00.275 [207/231] Linking target lib/librte_pci.so.24.0 00:09:00.275 [208/231] Linking target lib/librte_ring.so.24.0 00:09:00.275 [209/231] Linking target drivers/librte_bus_vdev.so.24.0 00:09:00.275 [210/231] Linking target lib/librte_meter.so.24.0 00:09:00.275 [211/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:09:00.275 [212/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:09:00.275 [213/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:09:00.275 [214/231] Linking target lib/librte_rcu.so.24.0 00:09:00.275 [215/231] Linking target lib/librte_mempool.so.24.0 00:09:00.275 [216/231] Linking target drivers/librte_bus_pci.so.24.0 00:09:00.275 [217/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:09:00.275 [218/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:09:00.534 [219/231] Linking target drivers/librte_mempool_ring.so.24.0 00:09:00.534 [220/231] Linking target lib/librte_mbuf.so.24.0 00:09:00.534 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:09:00.534 [222/231] Linking target lib/librte_net.so.24.0 00:09:00.534 [223/231] Linking target lib/librte_reorder.so.24.0 00:09:00.534 [224/231] Linking target lib/librte_compressdev.so.24.0 00:09:00.534 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:09:00.794 [226/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:09:00.794 [227/231] Linking target lib/librte_hash.so.24.0 00:09:00.794 [228/231] Linking target lib/librte_cmdline.so.24.0 00:09:00.794 [229/231] Generating symbol 
file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:09:00.794 [230/231] Linking target lib/librte_ethdev.so.24.0 00:09:00.794 [231/231] Linking target lib/librte_security.so.24.0 00:09:00.794 INFO: autodetecting backend as ninja 00:09:00.794 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:09:01.753 CC lib/ut/ut.o 00:09:01.753 CC lib/ut_mock/mock.o 00:09:01.753 CC lib/log/log.o 00:09:01.753 CC lib/log/log_flags.o 00:09:01.753 CC lib/log/log_deprecated.o 00:09:01.753 LIB libspdk_ut_mock.a 00:09:01.753 LIB libspdk_log.a 00:09:01.753 LIB libspdk_ut.a 00:09:02.011 CXX lib/trace_parser/trace.o 00:09:02.011 CC lib/dma/dma.o 00:09:02.011 CC lib/ioat/ioat.o 00:09:02.011 CC lib/util/base64.o 00:09:02.011 CC lib/util/cpuset.o 00:09:02.011 CC lib/util/crc16.o 00:09:02.011 CC lib/util/bit_array.o 00:09:02.011 CC lib/util/crc32c.o 00:09:02.011 CC lib/util/crc32.o 00:09:02.011 CC lib/util/crc32_ieee.o 00:09:02.011 CC lib/util/crc64.o 00:09:02.011 CC lib/util/dif.o 00:09:02.011 LIB libspdk_dma.a 00:09:02.011 CC lib/util/fd.o 00:09:02.011 CC lib/util/file.o 00:09:02.011 CC lib/util/hexlify.o 00:09:02.011 CC lib/util/iov.o 00:09:02.011 CC lib/util/math.o 00:09:02.011 CC lib/util/pipe.o 00:09:02.011 CC lib/util/strerror_tls.o 00:09:02.011 LIB libspdk_ioat.a 00:09:02.268 CC lib/util/string.o 00:09:02.268 CC lib/util/uuid.o 00:09:02.268 CC lib/util/fd_group.o 00:09:02.268 CC lib/util/xor.o 00:09:02.268 CC lib/util/zipf.o 00:09:02.268 LIB libspdk_util.a 00:09:02.268 CC lib/conf/conf.o 00:09:02.268 CC lib/json/json_parse.o 00:09:02.268 CC lib/json/json_util.o 00:09:02.527 CC lib/json/json_write.o 00:09:02.527 CC lib/env_dpdk/env.o 00:09:02.527 CC lib/env_dpdk/memory.o 00:09:02.527 CC lib/idxd/idxd.o 00:09:02.527 CC lib/vmd/vmd.o 00:09:02.527 CC lib/rdma/common.o 00:09:02.527 CC lib/idxd/idxd_user.o 00:09:02.527 CC lib/rdma/rdma_verbs.o 00:09:02.527 LIB libspdk_conf.a 00:09:02.527 CC lib/vmd/led.o 00:09:02.527 CC lib/env_dpdk/pci.o 00:09:02.527 LIB libspdk_json.a 00:09:02.527 CC lib/env_dpdk/init.o 00:09:02.527 CC lib/env_dpdk/threads.o 00:09:02.527 LIB libspdk_idxd.a 00:09:02.527 LIB libspdk_vmd.a 00:09:02.527 LIB libspdk_rdma.a 00:09:02.527 CC lib/env_dpdk/pci_ioat.o 00:09:02.527 CC lib/env_dpdk/pci_virtio.o 00:09:02.527 CC lib/env_dpdk/pci_vmd.o 00:09:02.785 CC lib/jsonrpc/jsonrpc_server.o 00:09:02.785 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:02.785 CC lib/jsonrpc/jsonrpc_client.o 00:09:02.785 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:02.785 CC lib/env_dpdk/pci_idxd.o 00:09:02.785 CC lib/env_dpdk/pci_event.o 00:09:02.785 CC lib/env_dpdk/sigbus_handler.o 00:09:02.785 CC lib/env_dpdk/pci_dpdk.o 00:09:02.785 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:02.785 LIB libspdk_jsonrpc.a 00:09:02.785 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:03.044 CC lib/rpc/rpc.o 00:09:03.044 LIB libspdk_rpc.a 00:09:03.044 LIB libspdk_trace_parser.a 00:09:03.044 LIB libspdk_env_dpdk.a 00:09:03.044 CC lib/keyring/keyring.o 00:09:03.044 CC lib/keyring/keyring_rpc.o 00:09:03.044 CC lib/notify/notify_rpc.o 00:09:03.044 CC lib/notify/notify.o 00:09:03.044 CC lib/trace/trace.o 00:09:03.044 CC lib/trace/trace_flags.o 00:09:03.044 CC lib/trace/trace_rpc.o 00:09:03.303 LIB libspdk_notify.a 00:09:03.303 LIB libspdk_keyring.a 00:09:03.303 LIB libspdk_trace.a 00:09:03.303 CC lib/thread/thread.o 00:09:03.303 CC lib/thread/iobuf.o 00:09:03.303 CC lib/sock/sock.o 00:09:03.303 CC lib/sock/sock_rpc.o 00:09:03.562 LIB libspdk_sock.a 00:09:03.562 LIB libspdk_thread.a 
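The INFO lines above are meson/ninja bookkeeping for the DPDK subtree wrapping up; the CC and LIB lines that follow are SPDK's own quiet build output, where each CC compiles one object and each LIB archives a finished libspdk_*.a static library. Roughly equivalent, as an illustrative sketch rather than SPDK's literal recipe (real compiles carry the include and warning flags chosen by ./configure):

    cc -c lib/log/log.c -o log.o      # one CC line per object file
    ar crs libspdk_log.a log.o ...    # one LIB line per resulting static archive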
00:09:03.562 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:03.562 CC lib/nvme/nvme_ctrlr.o 00:09:03.562 CC lib/nvme/nvme_fabric.o 00:09:03.562 CC lib/nvme/nvme_ns.o 00:09:03.562 CC lib/nvme/nvme_ns_cmd.o 00:09:03.562 CC lib/nvme/nvme_pcie_common.o 00:09:03.562 CC lib/nvme/nvme_pcie.o 00:09:03.562 CC lib/nvme/nvme_qpair.o 00:09:03.562 CC lib/nvme/nvme.o 00:09:03.562 CC lib/nvme/nvme_quirks.o 00:09:03.820 CC lib/nvme/nvme_transport.o 00:09:04.079 CC lib/nvme/nvme_discovery.o 00:09:04.079 CC lib/accel/accel.o 00:09:04.079 CC lib/accel/accel_rpc.o 00:09:04.079 CC lib/accel/accel_sw.o 00:09:04.079 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:04.079 CC lib/blob/blobstore.o 00:09:04.079 CC lib/init/json_config.o 00:09:04.079 CC lib/blob/request.o 00:09:04.079 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:04.079 CC lib/blob/zeroes.o 00:09:04.079 CC lib/init/subsystem.o 00:09:04.079 CC lib/nvme/nvme_tcp.o 00:09:04.079 LIB libspdk_accel.a 00:09:04.079 CC lib/blob/blob_bs_dev.o 00:09:04.079 CC lib/nvme/nvme_opal.o 00:09:04.079 CC lib/init/subsystem_rpc.o 00:09:04.337 CC lib/nvme/nvme_io_msg.o 00:09:04.337 CC lib/init/rpc.o 00:09:04.337 CC lib/bdev/bdev.o 00:09:04.337 CC lib/bdev/bdev_rpc.o 00:09:04.337 LIB libspdk_init.a 00:09:04.337 CC lib/bdev/bdev_zone.o 00:09:04.337 CC lib/bdev/part.o 00:09:04.337 CC lib/bdev/scsi_nvme.o 00:09:04.337 CC lib/nvme/nvme_poll_group.o 00:09:04.337 CC lib/nvme/nvme_zns.o 00:09:04.594 CC lib/nvme/nvme_stubs.o 00:09:04.594 CC lib/event/app.o 00:09:04.594 CC lib/event/reactor.o 00:09:04.594 LIB libspdk_blob.a 00:09:04.594 CC lib/event/log_rpc.o 00:09:04.594 CC lib/nvme/nvme_auth.o 00:09:04.594 CC lib/blobfs/blobfs.o 00:09:04.594 CC lib/event/app_rpc.o 00:09:04.594 CC lib/blobfs/tree.o 00:09:04.594 CC lib/event/scheduler_static.o 00:09:04.594 CC lib/lvol/lvol.o 00:09:04.850 CC lib/nvme/nvme_rdma.o 00:09:04.850 LIB libspdk_event.a 00:09:04.850 LIB libspdk_blobfs.a 00:09:04.850 LIB libspdk_lvol.a 00:09:05.107 LIB libspdk_bdev.a 00:09:05.107 CC lib/scsi/lun.o 00:09:05.107 CC lib/scsi/port.o 00:09:05.107 CC lib/scsi/scsi.o 00:09:05.107 CC lib/scsi/dev.o 00:09:05.107 CC lib/scsi/scsi_bdev.o 00:09:05.107 CC lib/scsi/scsi_pr.o 00:09:05.107 CC lib/scsi/task.o 00:09:05.107 CC lib/scsi/scsi_rpc.o 00:09:05.364 LIB libspdk_nvme.a 00:09:05.364 LIB libspdk_scsi.a 00:09:05.364 CC lib/nvmf/ctrlr_discovery.o 00:09:05.364 CC lib/nvmf/ctrlr.o 00:09:05.364 CC lib/nvmf/ctrlr_bdev.o 00:09:05.364 CC lib/nvmf/subsystem.o 00:09:05.364 CC lib/nvmf/nvmf.o 00:09:05.364 CC lib/nvmf/transport.o 00:09:05.364 CC lib/nvmf/tcp.o 00:09:05.364 CC lib/nvmf/nvmf_rpc.o 00:09:05.364 CC lib/nvmf/stubs.o 00:09:05.364 CC lib/iscsi/conn.o 00:09:05.364 CC lib/iscsi/init_grp.o 00:09:05.364 CC lib/iscsi/iscsi.o 00:09:05.620 CC lib/nvmf/mdns_server.o 00:09:05.620 CC lib/nvmf/rdma.o 00:09:05.620 CC lib/iscsi/md5.o 00:09:05.620 CC lib/iscsi/param.o 00:09:05.620 CC lib/iscsi/portal_grp.o 00:09:05.620 CC lib/iscsi/tgt_node.o 00:09:05.620 CC lib/iscsi/iscsi_subsystem.o 00:09:05.620 CC lib/iscsi/iscsi_rpc.o 00:09:05.620 CC lib/iscsi/task.o 00:09:05.928 CC lib/nvmf/auth.o 00:09:05.928 LIB libspdk_iscsi.a 00:09:05.928 LIB libspdk_nvmf.a 00:09:06.185 CC module/env_dpdk/env_dpdk_rpc.o 00:09:06.185 CC module/blob/bdev/blob_bdev.o 00:09:06.185 CC module/accel/error/accel_error.o 00:09:06.185 CC module/accel/error/accel_error_rpc.o 00:09:06.185 CC module/accel/dsa/accel_dsa.o 00:09:06.185 CC module/keyring/file/keyring.o 00:09:06.185 CC module/sock/posix/posix.o 00:09:06.185 CC module/accel/iaa/accel_iaa.o 00:09:06.185 CC module/accel/ioat/accel_ioat.o 
00:09:06.185 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:06.185 LIB libspdk_env_dpdk_rpc.a 00:09:06.185 CC module/keyring/file/keyring_rpc.o 00:09:06.185 CC module/accel/dsa/accel_dsa_rpc.o 00:09:06.185 LIB libspdk_accel_error.a 00:09:06.185 CC module/accel/iaa/accel_iaa_rpc.o 00:09:06.185 CC module/accel/ioat/accel_ioat_rpc.o 00:09:06.185 LIB libspdk_blob_bdev.a 00:09:06.185 LIB libspdk_accel_dsa.a 00:09:06.442 LIB libspdk_accel_iaa.a 00:09:06.442 LIB libspdk_scheduler_dynamic.a 00:09:06.442 LIB libspdk_accel_ioat.a 00:09:06.442 LIB libspdk_keyring_file.a 00:09:06.442 CC module/bdev/error/vbdev_error.o 00:09:06.442 CC module/bdev/gpt/gpt.o 00:09:06.442 CC module/blobfs/bdev/blobfs_bdev.o 00:09:06.442 CC module/bdev/delay/vbdev_delay.o 00:09:06.442 CC module/bdev/lvol/vbdev_lvol.o 00:09:06.442 CC module/bdev/malloc/bdev_malloc.o 00:09:06.442 LIB libspdk_sock_posix.a 00:09:06.443 CC module/bdev/null/bdev_null.o 00:09:06.443 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:06.443 CC module/bdev/passthru/vbdev_passthru.o 00:09:06.443 CC module/bdev/nvme/bdev_nvme.o 00:09:06.443 CC module/bdev/gpt/vbdev_gpt.o 00:09:06.443 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:06.443 CC module/bdev/null/bdev_null_rpc.o 00:09:06.443 CC module/bdev/error/vbdev_error_rpc.o 00:09:06.443 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:06.443 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:06.443 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:06.443 LIB libspdk_bdev_lvol.a 00:09:06.443 LIB libspdk_bdev_gpt.a 00:09:06.443 LIB libspdk_blobfs_bdev.a 00:09:06.700 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:06.700 CC module/bdev/nvme/nvme_rpc.o 00:09:06.700 LIB libspdk_bdev_null.a 00:09:06.700 CC module/bdev/raid/bdev_raid.o 00:09:06.700 LIB libspdk_bdev_passthru.a 00:09:06.700 LIB libspdk_bdev_malloc.a 00:09:06.700 LIB libspdk_bdev_delay.a 00:09:06.700 CC module/bdev/raid/bdev_raid_rpc.o 00:09:06.700 CC module/bdev/nvme/bdev_mdns_client.o 00:09:06.700 CC module/bdev/raid/bdev_raid_sb.o 00:09:06.700 CC module/bdev/raid/raid0.o 00:09:06.700 LIB libspdk_bdev_error.a 00:09:06.700 CC module/bdev/split/vbdev_split.o 00:09:06.700 CC module/bdev/raid/raid1.o 00:09:06.700 CC module/bdev/split/vbdev_split_rpc.o 00:09:06.700 CC module/bdev/raid/concat.o 00:09:06.700 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:06.700 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:06.700 CC module/bdev/aio/bdev_aio.o 00:09:06.700 CC module/bdev/aio/bdev_aio_rpc.o 00:09:06.700 LIB libspdk_bdev_split.a 00:09:06.958 LIB libspdk_bdev_raid.a 00:09:06.958 LIB libspdk_bdev_zone_block.a 00:09:06.958 LIB libspdk_bdev_nvme.a 00:09:06.958 LIB libspdk_bdev_aio.a 00:09:07.215 CC module/event/subsystems/iobuf/iobuf.o 00:09:07.215 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:07.215 CC module/event/subsystems/vmd/vmd.o 00:09:07.215 CC module/event/subsystems/keyring/keyring.o 00:09:07.215 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:07.215 CC module/event/subsystems/sock/sock.o 00:09:07.215 CC module/event/subsystems/scheduler/scheduler.o 00:09:07.215 LIB libspdk_event_keyring.a 00:09:07.215 LIB libspdk_event_scheduler.a 00:09:07.215 LIB libspdk_event_vmd.a 00:09:07.215 LIB libspdk_event_iobuf.a 00:09:07.215 LIB libspdk_event_sock.a 00:09:07.215 CC module/event/subsystems/accel/accel.o 00:09:07.472 LIB libspdk_event_accel.a 00:09:07.472 CC module/event/subsystems/bdev/bdev.o 00:09:07.730 LIB libspdk_event_bdev.a 00:09:07.730 CC module/event/subsystems/scsi/scsi.o 00:09:07.730 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:07.730 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:09:07.730 LIB libspdk_event_scsi.a 00:09:07.730 LIB libspdk_event_nvmf.a 00:09:07.988 CC module/event/subsystems/iscsi/iscsi.o 00:09:07.988 LIB libspdk_event_iscsi.a 00:09:07.988 TEST_HEADER include/spdk/accel.h 00:09:07.988 TEST_HEADER include/spdk/accel_module.h 00:09:07.988 CXX app/trace/trace.o 00:09:07.988 TEST_HEADER include/spdk/assert.h 00:09:07.988 TEST_HEADER include/spdk/barrier.h 00:09:07.988 TEST_HEADER include/spdk/base64.h 00:09:07.988 TEST_HEADER include/spdk/bdev.h 00:09:07.988 TEST_HEADER include/spdk/bdev_module.h 00:09:07.988 TEST_HEADER include/spdk/bdev_zone.h 00:09:07.988 CC app/trace_record/trace_record.o 00:09:07.988 TEST_HEADER include/spdk/bit_array.h 00:09:07.988 TEST_HEADER include/spdk/bit_pool.h 00:09:08.246 TEST_HEADER include/spdk/blob.h 00:09:08.246 TEST_HEADER include/spdk/blob_bdev.h 00:09:08.246 TEST_HEADER include/spdk/blobfs.h 00:09:08.246 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:08.246 TEST_HEADER include/spdk/conf.h 00:09:08.246 TEST_HEADER include/spdk/config.h 00:09:08.246 TEST_HEADER include/spdk/cpuset.h 00:09:08.246 TEST_HEADER include/spdk/crc16.h 00:09:08.246 TEST_HEADER include/spdk/crc32.h 00:09:08.246 TEST_HEADER include/spdk/crc64.h 00:09:08.246 TEST_HEADER include/spdk/dif.h 00:09:08.246 TEST_HEADER include/spdk/dma.h 00:09:08.246 CC app/iscsi_tgt/iscsi_tgt.o 00:09:08.246 TEST_HEADER include/spdk/endian.h 00:09:08.246 TEST_HEADER include/spdk/env.h 00:09:08.246 TEST_HEADER include/spdk/env_dpdk.h 00:09:08.246 TEST_HEADER include/spdk/event.h 00:09:08.246 TEST_HEADER include/spdk/fd.h 00:09:08.246 TEST_HEADER include/spdk/fd_group.h 00:09:08.246 TEST_HEADER include/spdk/file.h 00:09:08.246 TEST_HEADER include/spdk/ftl.h 00:09:08.246 TEST_HEADER include/spdk/gpt_spec.h 00:09:08.246 TEST_HEADER include/spdk/hexlify.h 00:09:08.246 TEST_HEADER include/spdk/histogram_data.h 00:09:08.246 TEST_HEADER include/spdk/idxd.h 00:09:08.246 TEST_HEADER include/spdk/idxd_spec.h 00:09:08.246 TEST_HEADER include/spdk/init.h 00:09:08.246 TEST_HEADER include/spdk/ioat.h 00:09:08.246 TEST_HEADER include/spdk/ioat_spec.h 00:09:08.246 TEST_HEADER include/spdk/iscsi_spec.h 00:09:08.246 TEST_HEADER include/spdk/json.h 00:09:08.246 TEST_HEADER include/spdk/jsonrpc.h 00:09:08.246 TEST_HEADER include/spdk/keyring.h 00:09:08.246 CC test/app/bdev_svc/bdev_svc.o 00:09:08.246 TEST_HEADER include/spdk/keyring_module.h 00:09:08.246 TEST_HEADER include/spdk/likely.h 00:09:08.246 TEST_HEADER include/spdk/log.h 00:09:08.246 TEST_HEADER include/spdk/lvol.h 00:09:08.246 CC test/bdev/bdevio/bdevio.o 00:09:08.246 TEST_HEADER include/spdk/memory.h 00:09:08.246 TEST_HEADER include/spdk/mmio.h 00:09:08.246 TEST_HEADER include/spdk/nbd.h 00:09:08.246 TEST_HEADER include/spdk/notify.h 00:09:08.246 TEST_HEADER include/spdk/nvme.h 00:09:08.246 TEST_HEADER include/spdk/nvme_intel.h 00:09:08.246 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:08.246 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:08.246 TEST_HEADER include/spdk/nvme_spec.h 00:09:08.246 CC examples/accel/perf/accel_perf.o 00:09:08.246 TEST_HEADER include/spdk/nvme_zns.h 00:09:08.246 TEST_HEADER include/spdk/nvmf.h 00:09:08.246 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:08.246 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:08.246 TEST_HEADER include/spdk/nvmf_spec.h 00:09:08.246 TEST_HEADER include/spdk/nvmf_transport.h 00:09:08.246 TEST_HEADER include/spdk/opal.h 00:09:08.246 CC app/nvmf_tgt/nvmf_main.o 00:09:08.246 TEST_HEADER include/spdk/opal_spec.h 00:09:08.246 TEST_HEADER 
include/spdk/pci_ids.h 00:09:08.246 TEST_HEADER include/spdk/pipe.h 00:09:08.246 TEST_HEADER include/spdk/queue.h 00:09:08.246 TEST_HEADER include/spdk/reduce.h 00:09:08.246 CC test/accel/dif/dif.o 00:09:08.246 TEST_HEADER include/spdk/rpc.h 00:09:08.246 TEST_HEADER include/spdk/scheduler.h 00:09:08.246 TEST_HEADER include/spdk/scsi.h 00:09:08.246 TEST_HEADER include/spdk/scsi_spec.h 00:09:08.246 TEST_HEADER include/spdk/sock.h 00:09:08.246 TEST_HEADER include/spdk/stdinc.h 00:09:08.246 TEST_HEADER include/spdk/string.h 00:09:08.246 TEST_HEADER include/spdk/thread.h 00:09:08.246 TEST_HEADER include/spdk/trace.h 00:09:08.246 TEST_HEADER include/spdk/trace_parser.h 00:09:08.246 TEST_HEADER include/spdk/tree.h 00:09:08.246 TEST_HEADER include/spdk/ublk.h 00:09:08.246 TEST_HEADER include/spdk/util.h 00:09:08.246 CC test/blobfs/mkfs/mkfs.o 00:09:08.246 TEST_HEADER include/spdk/uuid.h 00:09:08.246 TEST_HEADER include/spdk/version.h 00:09:08.246 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:08.246 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:08.246 TEST_HEADER include/spdk/vhost.h 00:09:08.246 TEST_HEADER include/spdk/vmd.h 00:09:08.246 TEST_HEADER include/spdk/xor.h 00:09:08.246 TEST_HEADER include/spdk/zipf.h 00:09:08.246 CXX test/cpp_headers/accel.o 00:09:08.246 LINK spdk_trace_record 00:09:08.246 LINK bdev_svc 00:09:08.246 LINK iscsi_tgt 00:09:08.246 LINK accel_perf 00:09:08.246 LINK bdevio 00:09:08.246 LINK nvmf_tgt 00:09:08.246 LINK dif 00:09:08.246 LINK mkfs 00:09:08.246 CXX test/cpp_headers/accel_module.o 00:09:08.503 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:08.503 CC test/app/histogram_perf/histogram_perf.o 00:09:08.503 CC test/app/jsoncat/jsoncat.o 00:09:08.503 LINK histogram_perf 00:09:08.503 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:08.503 CXX test/cpp_headers/assert.o 00:09:08.503 CC test/app/stub/stub.o 00:09:08.503 LINK jsoncat 00:09:08.503 CC examples/bdev/hello_world/hello_bdev.o 00:09:08.503 LINK nvme_fuzz 00:09:08.503 CXX test/cpp_headers/barrier.o 00:09:08.503 CC app/spdk_tgt/spdk_tgt.o 00:09:08.503 CC examples/bdev/bdevperf/bdevperf.o 00:09:08.503 LINK spdk_trace 00:09:08.761 LINK stub 00:09:08.761 LINK hello_bdev 00:09:08.761 CC app/spdk_lspci/spdk_lspci.o 00:09:08.762 CXX test/cpp_headers/base64.o 00:09:08.762 LINK spdk_tgt 00:09:08.762 CC test/dma/test_dma/test_dma.o 00:09:08.762 CC test/env/vtophys/vtophys.o 00:09:08.762 LINK spdk_lspci 00:09:08.762 CC examples/blob/hello_world/hello_blob.o 00:09:08.762 CC test/env/mem_callbacks/mem_callbacks.o 00:09:08.762 CXX test/cpp_headers/bdev.o 00:09:08.762 CC app/spdk_nvme_perf/perf.o 00:09:08.762 LINK vtophys 00:09:09.021 LINK bdevperf 00:09:09.021 CC examples/blob/cli/blobcli.o 00:09:09.021 LINK test_dma 00:09:09.021 CXX test/cpp_headers/bdev_module.o 00:09:09.021 CC examples/ioat/perf/perf.o 00:09:09.021 LINK hello_blob 00:09:09.021 LINK iscsi_fuzz 00:09:09.021 LINK ioat_perf 00:09:09.021 CC examples/nvme/hello_world/hello_world.o 00:09:09.279 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:09.279 CC examples/nvme/reconnect/reconnect.o 00:09:09.279 CC test/event/event_perf/event_perf.o 00:09:09.279 CXX test/cpp_headers/bdev_zone.o 00:09:09.279 LINK spdk_nvme_perf 00:09:09.279 CC examples/sock/hello_world/hello_sock.o 00:09:09.279 LINK blobcli 00:09:09.279 CC examples/ioat/verify/verify.o 00:09:09.279 LINK event_perf 00:09:09.279 LINK hello_world 00:09:09.279 LINK hello_sock 00:09:09.279 LINK mem_callbacks 00:09:09.279 CC app/spdk_nvme_identify/identify.o 00:09:09.279 LINK nvme_manage 00:09:09.279 CXX 
test/cpp_headers/bit_array.o 00:09:09.279 LINK verify 00:09:09.279 CC examples/nvme/arbitration/arbitration.o 00:09:09.279 gmake[2]: Nothing to be done for 'all'. 00:09:09.279 LINK reconnect 00:09:09.279 CC test/event/reactor/reactor.o 00:09:09.279 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:09.537 CC test/event/reactor_perf/reactor_perf.o 00:09:09.537 CC test/env/memory/memory_ut.o 00:09:09.537 LINK reactor 00:09:09.537 CC examples/nvme/hotplug/hotplug.o 00:09:09.537 CC test/env/pci/pci_ut.o 00:09:09.537 CXX test/cpp_headers/bit_pool.o 00:09:09.537 CC app/spdk_nvme_discover/discovery_aer.o 00:09:09.537 LINK reactor_perf 00:09:09.537 LINK env_dpdk_post_init 00:09:09.537 LINK arbitration 00:09:09.537 LINK spdk_nvme_identify 00:09:09.537 CC app/spdk_top/spdk_top.o 00:09:09.537 LINK hotplug 00:09:09.537 LINK pci_ut 00:09:09.537 LINK spdk_nvme_discover 00:09:09.537 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:09.537 CXX test/cpp_headers/blob.o 00:09:09.794 CC test/nvme/reset/reset.o 00:09:09.794 CC test/nvme/aer/aer.o 00:09:09.794 LINK cmb_copy 00:09:09.794 CC examples/vmd/lsvmd/lsvmd.o 00:09:09.794 CC examples/util/zipf/zipf.o 00:09:09.794 CC examples/nvmf/nvmf/nvmf.o 00:09:09.794 CXX test/cpp_headers/blob_bdev.o 00:09:09.794 LINK lsvmd 00:09:09.794 CC examples/thread/thread/thread_ex.o 00:09:09.794 LINK zipf 00:09:09.794 LINK aer 00:09:09.794 CC examples/nvme/abort/abort.o 00:09:09.794 LINK reset 00:09:09.794 LINK spdk_top 00:09:09.795 LINK memory_ut 00:09:09.795 CXX test/cpp_headers/blobfs.o 00:09:09.795 CC examples/vmd/led/led.o 00:09:09.795 LINK thread 00:09:09.795 LINK nvmf 00:09:09.795 CXX test/cpp_headers/blobfs_bdev.o 00:09:10.052 CC test/rpc_client/rpc_client_test.o 00:09:10.052 LINK abort 00:09:10.052 CC test/nvme/sgl/sgl.o 00:09:10.052 LINK led 00:09:10.052 CC app/fio/nvme/fio_plugin.o 00:09:10.052 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:10.052 CXX test/cpp_headers/conf.o 00:09:10.052 LINK rpc_client_test 00:09:10.052 LINK sgl 00:09:10.052 CXX test/cpp_headers/config.o 00:09:10.052 LINK pmr_persistence 00:09:10.052 CC test/nvme/e2edp/nvme_dp.o 00:09:10.052 CC app/fio/bdev/fio_plugin.o 00:09:10.052 CC test/nvme/overhead/overhead.o 00:09:10.052 CC test/thread/poller_perf/poller_perf.o 00:09:10.052 CXX test/cpp_headers/cpuset.o 00:09:10.052 CC test/thread/lock/spdk_lock.o 00:09:10.052 CC examples/idxd/perf/perf.o 00:09:10.310 fio_plugin.c:1559: LINK nvme_dp 00:09:10.310 29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:09:10.310 struct spdk_nvme_fdp_ruhs ruhs; 00:09:10.310 ^ 00:09:10.310 CC test/nvme/err_injection/err_injection.o 00:09:10.310 LINK poller_perf 00:09:10.310 LINK overhead 00:09:10.310 1 warning generated. 
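The compiler warning above (its "fio_plugin.c:1559:" and "29:" line/column markers got split around an unrelated LINK line by the interleaved output) fires because a struct whose last member is a flexible array member — which makes the type variable sized — is embedded somewhere other than the end of an enclosing struct; standard C forbids that, and clang accepts it only as a GNU extension. A minimal reproduction of the same diagnostic, using made-up struct names rather than SPDK's spdk_nvme_fdp_ruhs:

    cat > fdp_ruhs_repro.c <<'EOF'
    struct ruhs { int count; int desc[]; };      /* flexible array member: variable sized type */
    struct cmd  { struct ruhs ruhs; int tail; }; /* not at the end of the struct -> warning    */
    EOF
    clang -Wgnu-variable-sized-type-not-at-end -c fdp_ruhs_repro.c -o /dev/null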
00:09:10.310 LINK spdk_nvme 00:09:10.310 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:09:10.310 LINK spdk_bdev 00:09:10.310 LINK idxd_perf 00:09:10.310 CXX test/cpp_headers/crc16.o 00:09:10.310 LINK err_injection 00:09:10.310 CC test/nvme/startup/startup.o 00:09:10.310 CXX test/cpp_headers/crc32.o 00:09:10.310 LINK histogram_ut 00:09:10.310 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:09:10.310 CC test/unit/lib/accel/accel.c/accel_ut.o 00:09:10.310 CXX test/cpp_headers/crc64.o 00:09:10.310 CC test/unit/lib/bdev/part.c/part_ut.o 00:09:10.310 LINK startup 00:09:10.310 LINK spdk_lock 00:09:10.569 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:09:10.569 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:09:10.569 CC test/nvme/reserve/reserve.o 00:09:10.569 CC test/unit/lib/blob/blob.c/blob_ut.o 00:09:10.569 CXX test/cpp_headers/dif.o 00:09:10.569 LINK scsi_nvme_ut 00:09:10.569 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:09:10.569 LINK reserve 00:09:10.569 CC test/unit/lib/dma/dma.c/dma_ut.o 00:09:10.569 LINK blob_bdev_ut 00:09:10.569 CC test/unit/lib/event/app.c/app_ut.o 00:09:10.569 LINK tree_ut 00:09:10.569 CXX test/cpp_headers/dma.o 00:09:10.881 CC test/nvme/simple_copy/simple_copy.o 00:09:10.881 LINK dma_ut 00:09:10.881 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:09:10.881 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:09:10.881 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:09:10.881 LINK simple_copy 00:09:10.881 CXX test/cpp_headers/endian.o 00:09:10.881 LINK app_ut 00:09:10.881 LINK accel_ut 00:09:10.881 LINK ioat_ut 00:09:10.881 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:09:10.881 CC test/nvme/connect_stress/connect_stress.o 00:09:10.881 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:09:10.881 CXX test/cpp_headers/env.o 00:09:10.881 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:09:11.157 LINK connect_stress 00:09:11.157 LINK part_ut 00:09:11.157 LINK reactor_ut 00:09:11.157 LINK blobfs_async_ut 00:09:11.157 LINK blobfs_bdev_ut 00:09:11.157 CXX test/cpp_headers/env_dpdk.o 00:09:11.157 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:09:11.157 CC test/nvme/boot_partition/boot_partition.o 00:09:11.157 LINK gpt_ut 00:09:11.157 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:09:11.157 CC test/nvme/compliance/nvme_compliance.o 00:09:11.157 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:09:11.157 LINK boot_partition 00:09:11.157 LINK blobfs_sync_ut 00:09:11.157 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:09:11.157 CXX test/cpp_headers/event.o 00:09:11.157 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:09:11.157 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:09:11.157 LINK nvme_compliance 00:09:11.416 CXX test/cpp_headers/fd.o 00:09:11.416 LINK bdev_ut 00:09:11.416 CC test/nvme/fused_ordering/fused_ordering.o 00:09:11.416 LINK conn_ut 00:09:11.416 LINK init_grp_ut 00:09:11.416 LINK json_util_ut 00:09:11.416 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:09:11.416 LINK vbdev_lvol_ut 00:09:11.416 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:11.416 LINK fused_ordering 00:09:11.416 CC test/nvme/fdp/fdp.o 00:09:11.416 CXX test/cpp_headers/fd_group.o 00:09:11.416 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:09:11.674 LINK doorbell_aers 00:09:11.674 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:09:11.674 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:09:11.674 LINK fdp 00:09:11.674 CXX test/cpp_headers/file.o 00:09:11.674 LINK json_parse_ut 00:09:11.674 CC 
test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:09:11.674 LINK jsonrpc_server_ut 00:09:11.674 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:09:11.674 CC test/unit/lib/log/log.c/log_ut.o 00:09:11.674 CC test/unit/lib/iscsi/param.c/param_ut.o 00:09:11.674 CXX test/cpp_headers/ftl.o 00:09:11.932 LINK bdev_ut 00:09:11.932 LINK bdev_raid_sb_ut 00:09:11.932 LINK log_ut 00:09:11.932 CXX test/cpp_headers/gpt_spec.o 00:09:11.932 LINK json_write_ut 00:09:11.932 LINK concat_ut 00:09:11.932 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:09:11.932 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:09:11.932 LINK param_ut 00:09:11.932 LINK blob_ut 00:09:11.932 LINK iscsi_ut 00:09:11.932 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:09:11.932 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:09:11.932 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:09:11.932 CXX test/cpp_headers/hexlify.o 00:09:12.191 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:09:12.191 LINK bdev_zone_ut 00:09:12.191 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:09:12.191 CC test/unit/lib/notify/notify.c/notify_ut.o 00:09:12.191 LINK raid1_ut 00:09:12.191 LINK portal_grp_ut 00:09:12.191 LINK tgt_node_ut 00:09:12.191 CXX test/cpp_headers/histogram_data.o 00:09:12.191 LINK bdev_raid_ut 00:09:12.191 LINK vbdev_zone_block_ut 00:09:12.449 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:09:12.449 CC test/unit/lib/sock/sock.c/sock_ut.o 00:09:12.449 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:09:12.449 CC test/unit/lib/thread/thread.c/thread_ut.o 00:09:12.449 CXX test/cpp_headers/idxd.o 00:09:12.449 LINK notify_ut 00:09:12.449 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:09:12.449 CC test/unit/lib/util/base64.c/base64_ut.o 00:09:12.449 LINK base64_ut 00:09:12.449 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:09:12.706 CXX test/cpp_headers/idxd_spec.o 00:09:12.706 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:09:12.706 LINK dev_ut 00:09:12.706 LINK lvol_ut 00:09:12.706 CC test/unit/lib/sock/posix.c/posix_ut.o 00:09:12.706 CXX test/cpp_headers/init.o 00:09:12.706 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:09:12.706 LINK lun_ut 00:09:12.706 LINK sock_ut 00:09:12.706 LINK thread_ut 00:09:12.706 LINK bit_array_ut 00:09:12.966 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:09:12.966 CXX test/cpp_headers/ioat.o 00:09:12.966 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:09:12.966 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:09:12.966 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:09:12.966 LINK nvme_ut 00:09:12.966 LINK cpuset_ut 00:09:12.966 LINK scsi_ut 00:09:12.966 CXX test/cpp_headers/ioat_spec.o 00:09:12.966 LINK posix_ut 00:09:12.966 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:09:12.966 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:09:12.966 LINK iobuf_ut 00:09:12.966 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:09:12.966 CXX test/cpp_headers/iscsi_spec.o 00:09:12.966 LINK crc16_ut 00:09:13.243 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:09:13.243 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:09:13.243 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:09:13.243 CXX test/cpp_headers/json.o 00:09:13.243 LINK tcp_ut 00:09:13.243 LINK crc32_ieee_ut 00:09:13.243 LINK pci_event_ut 00:09:13.243 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:09:13.243 LINK ctrlr_ut 00:09:13.243 LINK bdev_nvme_ut 00:09:13.243 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:09:13.243 CXX test/cpp_headers/jsonrpc.o 00:09:13.243 LINK scsi_bdev_ut 00:09:13.501 LINK subsystem_ut 
00:09:13.501 LINK crc32c_ut 00:09:13.501 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:09:13.501 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:09:13.501 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:09:13.501 CXX test/cpp_headers/keyring.o 00:09:13.501 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:09:13.501 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:09:13.501 LINK crc64_ut 00:09:13.501 LINK subsystem_ut 00:09:13.501 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:09:13.501 CC test/unit/lib/util/dif.c/dif_ut.o 00:09:13.501 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:09:13.501 CXX test/cpp_headers/keyring_module.o 00:09:13.501 LINK scsi_pr_ut 00:09:13.501 LINK nvme_ctrlr_cmd_ut 00:09:13.501 LINK keyring_ut 00:09:13.501 LINK rpc_ut 00:09:13.759 CXX test/cpp_headers/likely.o 00:09:13.759 CC test/unit/lib/util/iov.c/iov_ut.o 00:09:13.759 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:09:13.759 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:09:13.759 LINK rpc_ut 00:09:13.759 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:09:13.759 LINK iov_ut 00:09:13.759 CXX test/cpp_headers/log.o 00:09:13.759 LINK nvme_ctrlr_ocssd_cmd_ut 00:09:13.759 CC test/unit/lib/util/math.c/math_ut.o 00:09:13.759 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:09:13.759 LINK nvme_ctrlr_ut 00:09:14.017 LINK ctrlr_bdev_ut 00:09:14.017 CXX test/cpp_headers/lvol.o 00:09:14.017 LINK ctrlr_discovery_ut 00:09:14.017 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:09:14.017 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:09:14.017 LINK math_ut 00:09:14.017 LINK dif_ut 00:09:14.017 CC test/unit/lib/util/string.c/string_ut.o 00:09:14.017 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:09:14.017 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:09:14.017 LINK nvmf_ut 00:09:14.017 CXX test/cpp_headers/memory.o 00:09:14.017 LINK idxd_user_ut 00:09:14.017 LINK pipe_ut 00:09:14.017 CC test/unit/lib/rdma/common.c/common_ut.o 00:09:14.017 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:09:14.017 LINK nvme_ns_ut 00:09:14.017 LINK string_ut 00:09:14.275 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:09:14.275 CXX test/cpp_headers/mmio.o 00:09:14.275 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:09:14.275 LINK auth_ut 00:09:14.275 CC test/unit/lib/util/xor.c/xor_ut.o 00:09:14.275 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:09:14.275 CXX test/cpp_headers/nbd.o 00:09:14.275 LINK common_ut 00:09:14.275 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:09:14.275 CXX test/cpp_headers/notify.o 00:09:14.275 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:09:14.275 LINK xor_ut 00:09:14.275 LINK idxd_ut 00:09:14.275 CXX test/cpp_headers/nvme_intel.o 00:09:14.275 CXX test/cpp_headers/nvme.o 00:09:14.533 CXX test/cpp_headers/nvme_ocssd.o 00:09:14.533 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:09:14.533 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:09:14.533 LINK transport_ut 00:09:14.533 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:14.792 LINK rdma_ut 00:09:14.792 LINK nvme_poll_group_ut 00:09:14.792 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:09:14.792 LINK nvme_ns_cmd_ut 00:09:14.792 CXX test/cpp_headers/nvme_spec.o 00:09:14.792 CXX test/cpp_headers/nvme_zns.o 00:09:14.792 LINK nvme_ns_ocssd_cmd_ut 00:09:14.792 CXX test/cpp_headers/nvmf.o 00:09:14.792 CXX test/cpp_headers/nvmf_cmd.o 00:09:14.792 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:09:14.792 LINK nvme_qpair_ut 00:09:14.792 LINK nvme_pcie_ut 00:09:14.792 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:09:14.792 CXX test/cpp_headers/nvmf_spec.o 00:09:14.792 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:09:14.792 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:09:15.051 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:09:15.051 LINK nvme_quirks_ut 00:09:15.051 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:09:15.051 CXX test/cpp_headers/nvmf_transport.o 00:09:15.051 CXX test/cpp_headers/opal.o 00:09:15.051 CXX test/cpp_headers/opal_spec.o 00:09:15.051 CXX test/cpp_headers/pci_ids.o 00:09:15.051 CXX test/cpp_headers/pipe.o 00:09:15.310 LINK nvme_io_msg_ut 00:09:15.310 CXX test/cpp_headers/queue.o 00:09:15.310 CXX test/cpp_headers/reduce.o 00:09:15.310 CXX test/cpp_headers/rpc.o 00:09:15.310 LINK nvme_opal_ut 00:09:15.310 CXX test/cpp_headers/scheduler.o 00:09:15.310 LINK nvme_transport_ut 00:09:15.310 CXX test/cpp_headers/scsi.o 00:09:15.310 LINK nvme_fabric_ut 00:09:15.310 CXX test/cpp_headers/scsi_spec.o 00:09:15.310 CXX test/cpp_headers/sock.o 00:09:15.310 CXX test/cpp_headers/stdinc.o 00:09:15.310 CXX test/cpp_headers/string.o 00:09:15.310 LINK nvme_tcp_ut 00:09:15.310 LINK nvme_pcie_common_ut 00:09:15.569 CXX test/cpp_headers/thread.o 00:09:15.569 CXX test/cpp_headers/trace.o 00:09:15.569 CXX test/cpp_headers/trace_parser.o 00:09:15.569 CXX test/cpp_headers/tree.o 00:09:15.569 CXX test/cpp_headers/ublk.o 00:09:15.569 CXX test/cpp_headers/util.o 00:09:15.569 CXX test/cpp_headers/uuid.o 00:09:15.569 CXX test/cpp_headers/version.o 00:09:15.569 CXX test/cpp_headers/vfio_user_pci.o 00:09:15.569 CXX test/cpp_headers/vfio_user_spec.o 00:09:15.569 CXX test/cpp_headers/vhost.o 00:09:15.569 CXX test/cpp_headers/vmd.o 00:09:15.569 CXX test/cpp_headers/xor.o 00:09:15.569 CXX test/cpp_headers/zipf.o 00:09:15.827 LINK nvme_rdma_ut 00:09:15.827 00:09:15.827 real 1m2.647s 00:09:15.827 user 3m55.197s 00:09:15.827 sys 0m49.968s 00:09:15.827 02:10:03 unittest_build -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:09:15.827 02:10:03 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:09:15.827 ************************************ 00:09:15.827 END TEST unittest_build 00:09:15.827 ************************************ 00:09:15.827 02:10:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:16.086 02:10:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:16.086 02:10:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:16.086 02:10:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:16.086 02:10:03 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:16.086 02:10:03 -- pm/common@44 -- $ pid=1311 00:09:16.086 02:10:03 -- pm/common@50 -- $ kill -TERM 1311 00:09:16.086 02:10:03 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.086 02:10:03 -- nvmf/common.sh@7 -- # uname -s 00:09:16.086 02:10:03 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:09:16.086 02:10:03 -- nvmf/common.sh@7 -- # return 0 00:09:16.086 02:10:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:16.086 02:10:03 -- spdk/autotest.sh@32 -- # uname -s 00:09:16.086 02:10:03 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:09:16.086 02:10:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:16.086 02:10:03 -- pm/common@17 -- # local monitor 00:09:16.086 02:10:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:16.086 02:10:03 -- pm/common@25 -- # sleep 1 00:09:16.086 02:10:03 -- 
pm/common@21 -- # date +%s 00:09:16.086 02:10:03 -- pm/common@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715739003 00:09:16.086 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715739003_collect-vmstat.pm.log 00:09:17.462 02:10:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:17.462 02:10:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:17.462 02:10:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:17.462 02:10:05 -- common/autotest_common.sh@10 -- # set +x 00:09:17.462 02:10:05 -- spdk/autotest.sh@59 -- # create_test_list 00:09:17.462 02:10:05 -- common/autotest_common.sh@744 -- # xtrace_disable 00:09:17.462 02:10:05 -- common/autotest_common.sh@10 -- # set +x 00:09:17.462 02:10:05 -- spdk/autotest.sh@61 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:09:17.462 02:10:05 -- spdk/autotest.sh@61 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:09:17.462 02:10:05 -- spdk/autotest.sh@61 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:09:17.462 02:10:05 -- spdk/autotest.sh@62 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:09:17.462 02:10:05 -- spdk/autotest.sh@63 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:09:17.462 02:10:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:17.462 02:10:05 -- common/autotest_common.sh@1451 -- # uname 00:09:17.462 02:10:05 -- common/autotest_common.sh@1451 -- # '[' FreeBSD = FreeBSD ']' 00:09:17.462 02:10:05 -- common/autotest_common.sh@1452 -- # kldunload contigmem.ko 00:09:17.462 kldunload: can't find file contigmem.ko 00:09:17.462 02:10:05 -- common/autotest_common.sh@1452 -- # true 00:09:17.462 02:10:05 -- common/autotest_common.sh@1453 -- # '[' -n '' ']' 00:09:17.462 02:10:05 -- common/autotest_common.sh@1459 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:09:17.462 02:10:05 -- common/autotest_common.sh@1460 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:09:17.462 02:10:05 -- common/autotest_common.sh@1461 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:09:17.462 02:10:05 -- common/autotest_common.sh@1462 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:09:17.462 02:10:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:17.462 02:10:05 -- common/autotest_common.sh@1471 -- # uname 00:09:17.462 02:10:05 -- common/autotest_common.sh@1471 -- # [[ FreeBSD = FreeBSD ]] 00:09:17.462 02:10:05 -- common/autotest_common.sh@1471 -- # sysctl -n kern.ipc.maxsockbuf 00:09:17.462 02:10:05 -- common/autotest_common.sh@1471 -- # (( 2097152 < 4194304 )) 00:09:17.462 02:10:05 -- common/autotest_common.sh@1472 -- # sysctl kern.ipc.maxsockbuf=4194304 00:09:17.462 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:09:17.462 02:10:05 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:09:17.462 02:10:05 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:09:17.462 02:10:05 -- spdk/autotest.sh@72 -- # hash lcov 00:09:17.462 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:09:17.462 02:10:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:09:17.462 02:10:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:17.462 02:10:05 -- common/autotest_common.sh@10 -- # set +x 00:09:17.462 02:10:05 -- spdk/autotest.sh@91 -- # rm -f 
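The trace above is the FreeBSD-specific prep: the freshly built DPDK kernel modules are staged where the loader can find them, and kern.ipc.maxsockbuf is raised to the 4 MiB floor this script requires. Done by hand it is roughly (a sketch following the paths in the log; the actual kldload of contigmem happens later via scripts/setup.sh):

    cp -f dpdk/build/kmod/contigmem.ko /boot/modules/   # also copied to /boot/kernel/ above
    cp -f dpdk/build/kmod/nic_uio.ko  /boot/modules/
    sysctl kern.ipc.maxsockbuf=4194304                  # 2 MiB default -> 4 MiB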
00:09:17.462 02:10:05 -- spdk/autotest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:17.462 kldunload: can't find file contigmem.ko 00:09:17.462 kldunload: can't find file nic_uio.ko 00:09:17.462 02:10:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:09:17.462 02:10:05 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:09:17.462 02:10:05 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:09:17.462 02:10:05 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:09:17.462 02:10:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:09:17.462 02:10:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:09:17.462 02:10:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:09:17.462 02:10:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:09:17.462 02:10:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:09:17.462 02:10:05 -- scripts/common.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:09:17.462 nvme0ns1 is not a block device 00:09:17.462 02:10:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:09:17.462 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:09:17.462 02:10:05 -- scripts/common.sh@391 -- # pt= 00:09:17.462 02:10:05 -- scripts/common.sh@392 -- # return 1 00:09:17.462 02:10:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:09:17.462 1+0 records in 00:09:17.462 1+0 records out 00:09:17.462 1048576 bytes transferred in 0.006012 secs (174400060 bytes/sec) 00:09:17.462 02:10:05 -- spdk/autotest.sh@118 -- # sync 00:09:18.028 02:10:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:18.028 02:10:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:18.028 02:10:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:18.595 02:10:06 -- spdk/autotest.sh@124 -- # uname -s 00:09:18.595 02:10:06 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:09:18.595 02:10:06 -- spdk/autotest.sh@128 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:18.854 Contigmem (not present) 00:09:18.854 Buffer Size: not set 00:09:18.854 Num Buffers: not set 00:09:18.854 00:09:18.854 00:09:18.854 Type BDF Vendor Device Driver 00:09:18.854 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:09:18.854 02:10:06 -- spdk/autotest.sh@130 -- # uname -s 00:09:18.854 02:10:06 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:09:18.854 02:10:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:09:18.854 02:10:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.854 02:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:18.854 02:10:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:09:18.854 02:10:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:18.855 02:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:18.855 02:10:06 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.855 kldunload: can't find file nic_uio.ko 00:09:18.855 hw.nic_uio.bdfs="0:16:0" 00:09:18.855 hw.contigmem.num_buffers="8" 00:09:18.855 hw.contigmem.buffer_size="268435456" 00:09:19.451 02:10:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:09:19.451 02:10:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:19.451 02:10:07 -- common/autotest_common.sh@10 -- # set +x 00:09:19.451 02:10:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:09:19.451 02:10:07 -- 
common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:09:19.451 02:10:07 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:09:19.451 02:10:07 -- common/autotest_common.sh@1573 -- # bdfs=() 00:09:19.451 02:10:07 -- common/autotest_common.sh@1573 -- # local bdfs 00:09:19.451 02:10:07 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:09:19.451 02:10:07 -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:19.451 02:10:07 -- common/autotest_common.sh@1509 -- # local bdfs 00:09:19.451 02:10:07 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:19.451 02:10:07 -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:19.451 02:10:07 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:19.451 02:10:07 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:09:19.451 02:10:07 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:09:19.451 02:10:07 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:09:19.451 02:10:07 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:19.451 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:09:19.451 02:10:07 -- common/autotest_common.sh@1576 -- # device= 00:09:19.451 02:10:07 -- common/autotest_common.sh@1576 -- # true 00:09:19.451 02:10:07 -- common/autotest_common.sh@1577 -- # [[ '' == \0\x\0\a\5\4 ]] 00:09:19.451 02:10:07 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:09:19.451 02:10:07 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:09:19.451 02:10:07 -- common/autotest_common.sh@1589 -- # return 0 00:09:19.451 02:10:07 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:09:19.451 02:10:07 -- spdk/autotest.sh@151 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:19.451 02:10:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:19.451 02:10:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.451 02:10:07 -- common/autotest_common.sh@10 -- # set +x 00:09:19.710 ************************************ 00:09:19.710 START TEST unittest 00:09:19.710 ************************************ 00:09:19.710 02:10:07 unittest -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:19.710 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:19.710 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:09:19.710 + testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:09:19.710 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:09:19.710 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 
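The get_nvme_bdfs trace a few lines up shows how the harness enumerates NVMe controllers without OS-specific tooling: gen_nvme.sh emits a JSON config entry per attached controller and jq pulls out the PCI addresses (the later cat under /sys fails simply because FreeBSD has no sysfs, so the device-ID check falls through harmlessly). Run by hand on this VM it reduces to (output taken from the log — a single emulated controller):

    scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
    0000:00:10.0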
00:09:19.710 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:09:19.710 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:19.710 ++ rpc_py=rpc_cmd 00:09:19.710 ++ set -e 00:09:19.710 ++ shopt -s nullglob 00:09:19.710 ++ shopt -s extglob 00:09:19.711 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:09:19.711 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:19.711 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:19.711 +++ CONFIG_WPDK_DIR= 00:09:19.711 +++ CONFIG_ASAN=n 00:09:19.711 +++ CONFIG_VBDEV_COMPRESS=n 00:09:19.711 +++ CONFIG_HAVE_EXECINFO_H=y 00:09:19.711 +++ CONFIG_USDT=n 00:09:19.711 +++ CONFIG_CUSTOMOCF=n 00:09:19.711 +++ CONFIG_PREFIX=/usr/local 00:09:19.711 +++ CONFIG_RBD=n 00:09:19.711 +++ CONFIG_LIBDIR= 00:09:19.711 +++ CONFIG_IDXD=y 00:09:19.711 +++ CONFIG_NVME_CUSE=n 00:09:19.711 +++ CONFIG_SMA=n 00:09:19.711 +++ CONFIG_VTUNE=n 00:09:19.711 +++ CONFIG_TSAN=n 00:09:19.711 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:19.711 +++ CONFIG_VFIO_USER_DIR= 00:09:19.711 +++ CONFIG_PGO_CAPTURE=n 00:09:19.711 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:09:19.711 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:19.711 +++ CONFIG_LTO=n 00:09:19.711 +++ CONFIG_ISCSI_INITIATOR=n 00:09:19.711 +++ CONFIG_CET=n 00:09:19.711 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:19.711 +++ CONFIG_OCF_PATH= 00:09:19.711 +++ CONFIG_RDMA_SET_TOS=y 00:09:19.711 +++ CONFIG_HAVE_ARC4RANDOM=y 00:09:19.711 +++ CONFIG_HAVE_LIBARCHIVE=n 00:09:19.711 +++ CONFIG_UBLK=n 00:09:19.711 +++ CONFIG_ISAL_CRYPTO=y 00:09:19.711 +++ CONFIG_OPENSSL_PATH= 00:09:19.711 +++ CONFIG_OCF=n 00:09:19.711 +++ CONFIG_FUSE=n 00:09:19.711 +++ CONFIG_VTUNE_DIR= 00:09:19.711 +++ CONFIG_FUZZER_LIB= 00:09:19.711 +++ CONFIG_FUZZER=n 00:09:19.711 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:19.711 +++ CONFIG_CRYPTO=n 00:09:19.711 +++ CONFIG_PGO_USE=n 00:09:19.711 +++ CONFIG_VHOST=n 00:09:19.711 +++ CONFIG_DAOS=n 00:09:19.711 +++ CONFIG_DPDK_INC_DIR= 00:09:19.711 +++ CONFIG_DAOS_DIR= 00:09:19.711 +++ CONFIG_UNIT_TESTS=y 00:09:19.711 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:09:19.711 +++ CONFIG_VIRTIO=n 00:09:19.711 +++ CONFIG_DPDK_UADK=n 00:09:19.711 +++ CONFIG_COVERAGE=n 00:09:19.711 +++ CONFIG_RDMA=y 00:09:19.711 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:19.711 +++ CONFIG_URING_PATH= 00:09:19.711 +++ CONFIG_XNVME=n 00:09:19.711 +++ CONFIG_VFIO_USER=n 00:09:19.711 +++ CONFIG_ARCH=native 00:09:19.711 +++ CONFIG_HAVE_EVP_MAC=y 00:09:19.711 +++ CONFIG_URING_ZNS=n 00:09:19.711 +++ CONFIG_WERROR=y 00:09:19.711 +++ CONFIG_HAVE_LIBBSD=n 00:09:19.711 +++ CONFIG_UBSAN=n 00:09:19.711 +++ CONFIG_IPSEC_MB_DIR= 00:09:19.711 +++ CONFIG_GOLANG=n 00:09:19.711 +++ CONFIG_ISAL=y 00:09:19.711 +++ CONFIG_IDXD_KERNEL=n 00:09:19.711 +++ CONFIG_DPDK_LIB_DIR= 00:09:19.711 +++ CONFIG_RDMA_PROV=verbs 00:09:19.711 +++ CONFIG_APPS=y 00:09:19.711 +++ CONFIG_SHARED=n 00:09:19.711 +++ CONFIG_HAVE_KEYUTILS=n 00:09:19.711 +++ CONFIG_FC_PATH= 00:09:19.711 +++ CONFIG_DPDK_PKG_CONFIG=n 00:09:19.711 +++ CONFIG_FC=n 00:09:19.711 +++ CONFIG_AVAHI=n 00:09:19.711 +++ CONFIG_FIO_PLUGIN=y 00:09:19.711 +++ CONFIG_RAID5F=n 00:09:19.711 +++ CONFIG_EXAMPLES=y 00:09:19.711 +++ CONFIG_TESTS=y 00:09:19.711 +++ CONFIG_CRYPTO_MLX5=n 00:09:19.711 +++ CONFIG_MAX_LCORES= 00:09:19.711 +++ CONFIG_IPSEC_MB=n 00:09:19.711 +++ CONFIG_PGO_DIR= 00:09:19.711 +++ CONFIG_DEBUG=y 00:09:19.711 +++ CONFIG_DPDK_COMPRESSDEV=n 00:09:19.711 +++ CONFIG_CROSS_PREFIX= 00:09:19.711 +++ 
CONFIG_URING=n 00:09:19.711 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:19.711 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:19.711 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common 00:09:19.711 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:09:19.711 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:09:19.711 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:09:19.711 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:09:19.711 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:09:19.711 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:19.711 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:19.711 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:19.711 +++ VHOST_APP=("$_app_dir/vhost") 00:09:19.711 +++ DD_APP=("$_app_dir/spdk_dd") 00:09:19.711 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:09:19.711 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:19.711 +++ [[ #ifndef SPDK_CONFIG_H 00:09:19.711 #define SPDK_CONFIG_H 00:09:19.711 #define SPDK_CONFIG_APPS 1 00:09:19.711 #define SPDK_CONFIG_ARCH native 00:09:19.711 #undef SPDK_CONFIG_ASAN 00:09:19.711 #undef SPDK_CONFIG_AVAHI 00:09:19.711 #undef SPDK_CONFIG_CET 00:09:19.711 #undef SPDK_CONFIG_COVERAGE 00:09:19.711 #define SPDK_CONFIG_CROSS_PREFIX 00:09:19.711 #undef SPDK_CONFIG_CRYPTO 00:09:19.711 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:19.711 #undef SPDK_CONFIG_CUSTOMOCF 00:09:19.711 #undef SPDK_CONFIG_DAOS 00:09:19.711 #define SPDK_CONFIG_DAOS_DIR 00:09:19.711 #define SPDK_CONFIG_DEBUG 1 00:09:19.711 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:19.711 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:19.711 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:19.711 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:19.711 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:19.711 #undef SPDK_CONFIG_DPDK_UADK 00:09:19.711 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:19.711 #define SPDK_CONFIG_EXAMPLES 1 00:09:19.711 #undef SPDK_CONFIG_FC 00:09:19.711 #define SPDK_CONFIG_FC_PATH 00:09:19.711 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:19.711 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:19.711 #undef SPDK_CONFIG_FUSE 00:09:19.711 #undef SPDK_CONFIG_FUZZER 00:09:19.711 #define SPDK_CONFIG_FUZZER_LIB 00:09:19.711 #undef SPDK_CONFIG_GOLANG 00:09:19.711 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:19.711 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:19.711 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:19.711 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:09:19.711 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:19.711 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:19.711 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:09:19.711 #define SPDK_CONFIG_IDXD 1 00:09:19.711 #undef SPDK_CONFIG_IDXD_KERNEL 00:09:19.711 #undef SPDK_CONFIG_IPSEC_MB 00:09:19.711 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:19.711 #define SPDK_CONFIG_ISAL 1 00:09:19.711 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:19.711 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:09:19.711 #define SPDK_CONFIG_LIBDIR 00:09:19.711 #undef SPDK_CONFIG_LTO 00:09:19.711 #define SPDK_CONFIG_MAX_LCORES 00:09:19.711 #undef SPDK_CONFIG_NVME_CUSE 00:09:19.711 #undef SPDK_CONFIG_OCF 00:09:19.711 #define SPDK_CONFIG_OCF_PATH 00:09:19.711 #define SPDK_CONFIG_OPENSSL_PATH 00:09:19.711 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:19.711 #define SPDK_CONFIG_PGO_DIR 00:09:19.711 #undef SPDK_CONFIG_PGO_USE 00:09:19.711 #define SPDK_CONFIG_PREFIX /usr/local 00:09:19.711 #undef SPDK_CONFIG_RAID5F 
00:09:19.711 #undef SPDK_CONFIG_RBD 00:09:19.711 #define SPDK_CONFIG_RDMA 1 00:09:19.711 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:19.711 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:19.711 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:09:19.711 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:19.711 #undef SPDK_CONFIG_SHARED 00:09:19.711 #undef SPDK_CONFIG_SMA 00:09:19.711 #define SPDK_CONFIG_TESTS 1 00:09:19.711 #undef SPDK_CONFIG_TSAN 00:09:19.711 #undef SPDK_CONFIG_UBLK 00:09:19.711 #undef SPDK_CONFIG_UBSAN 00:09:19.711 #define SPDK_CONFIG_UNIT_TESTS 1 00:09:19.711 #undef SPDK_CONFIG_URING 00:09:19.711 #define SPDK_CONFIG_URING_PATH 00:09:19.711 #undef SPDK_CONFIG_URING_ZNS 00:09:19.711 #undef SPDK_CONFIG_USDT 00:09:19.711 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:19.711 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:19.711 #undef SPDK_CONFIG_VFIO_USER 00:09:19.711 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:19.711 #undef SPDK_CONFIG_VHOST 00:09:19.711 #undef SPDK_CONFIG_VIRTIO 00:09:19.711 #undef SPDK_CONFIG_VTUNE 00:09:19.711 #define SPDK_CONFIG_VTUNE_DIR 00:09:19.711 #define SPDK_CONFIG_WERROR 1 00:09:19.711 #define SPDK_CONFIG_WPDK_DIR 00:09:19.711 #undef SPDK_CONFIG_XNVME 00:09:19.711 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:19.711 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:19.711 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.711 +++ [[ -e /bin/wpdk_common.sh ]] 00:09:19.711 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.711 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.711 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:09:19.712 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:09:19.712 ++++ export PATH 00:09:19.712 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:09:19.712 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:19.712 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:19.712 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:19.712 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:19.712 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:19.712 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:09:19.712 +++ TEST_TAG=N/A 00:09:19.712 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:19.712 +++ PM_OUTPUTDIR=/usr/home/vagrant/spdk_repo/spdk/../output/power 00:09:19.712 ++++ uname -s 00:09:19.712 +++ PM_OS=FreeBSD 00:09:19.712 +++ MONITOR_RESOURCES_SUDO=() 00:09:19.712 +++ declare -A MONITOR_RESOURCES_SUDO 00:09:19.712 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:19.712 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:19.712 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:19.712 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:19.712 +++ SUDO[0]= 00:09:19.712 +++ SUDO[1]='sudo -E' 00:09:19.712 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:19.712 +++ [[ FreeBSD == FreeBSD ]] 00:09:19.712 +++ MONITOR_RESOURCES=(collect-vmstat) 00:09:19.712 +++ [[ ! 
-d /usr/home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:19.712 ++ : 0 00:09:19.712 ++ export RUN_NIGHTLY 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_RUN_VALGRIND 00:09:19.712 ++ : 1 00:09:19.712 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:09:19.712 ++ : 1 00:09:19.712 ++ export SPDK_TEST_UNITTEST 00:09:19.712 ++ : 00:09:19.712 ++ export SPDK_TEST_AUTOBUILD 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_RELEASE_BUILD 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_ISAL 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_ISCSI 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_ISCSI_INITIATOR 00:09:19.712 ++ : 1 00:09:19.712 ++ export SPDK_TEST_NVME 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_NVME_PMR 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_NVME_BP 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_NVME_CLI 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_NVME_CUSE 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_NVME_FDP 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_NVMF 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_VFIOUSER 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_VFIOUSER_QEMU 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_FUZZER 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_FUZZER_SHORT 00:09:19.712 ++ : rdma 00:09:19.712 ++ export SPDK_TEST_NVMF_TRANSPORT 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_RBD 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_VHOST 00:09:19.712 ++ : 1 00:09:19.712 ++ export SPDK_TEST_BLOCKDEV 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_IOAT 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_BLOBFS 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_VHOST_INIT 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_LVOL 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_VBDEV_COMPRESS 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_RUN_ASAN 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_RUN_UBSAN 00:09:19.712 ++ : 00:09:19.712 ++ export SPDK_RUN_EXTERNAL_DPDK 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_RUN_NON_ROOT 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_CRYPTO 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_FTL 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_OCF 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_VMD 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_OPAL 00:09:19.712 ++ : 00:09:19.712 ++ export SPDK_TEST_NATIVE_DPDK 00:09:19.712 ++ : true 00:09:19.712 ++ export SPDK_AUTOTEST_X 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_RAID5 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_URING 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_USDT 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_USE_IGB_UIO 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_SCHEDULER 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_SCANBUILD 00:09:19.712 ++ : 00:09:19.712 ++ export SPDK_TEST_NVMF_NICS 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_SMA 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_DAOS 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_XNVME 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_ACCEL_DSA 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_ACCEL_IAA 00:09:19.712 ++ : 00:09:19.712 ++ export SPDK_TEST_FUZZER_TARGET 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_TEST_NVMF_MDNS 00:09:19.712 ++ : 0 00:09:19.712 ++ export SPDK_JSONRPC_GO_CLIENT 00:09:19.712 ++ export 
SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:09:19.712 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:09:19.712 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:19.712 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:19.712 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:19.712 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:19.712 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:19.712 ++ LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:19.712 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:19.712 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:09:19.712 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:09:19.712 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:09:19.712 ++ export PYTHONDONTWRITEBYTECODE=1 00:09:19.712 ++ PYTHONDONTWRITEBYTECODE=1 00:09:19.712 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:19.712 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:19.712 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:19.712 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:19.712 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:09:19.712 ++ rm -rf /var/tmp/asan_suppression_file 00:09:19.712 ++ cat 00:09:19.712 ++ echo leak:libfuse3.so 00:09:19.712 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:19.712 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:19.712 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:19.712 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:19.712 ++ '[' -z /var/spdk/dependencies ']' 00:09:19.712 ++ export DEPENDENCY_DIR 00:09:19.712 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:09:19.712 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:09:19.712 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:09:19.712 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:09:19.712 ++ export QEMU_BIN= 00:09:19.712 ++ QEMU_BIN= 00:09:19.712 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:09:19.712 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:09:19.712 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:19.712 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:19.712 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:19.712 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:19.712 ++ '[' 0 -eq 0 ']' 00:09:19.712 ++ 
export valgrind= 00:09:19.712 ++ valgrind= 00:09:19.712 +++ uname -s 00:09:19.712 ++ '[' FreeBSD = Linux ']' 00:09:19.712 +++ uname -s 00:09:19.712 ++ '[' FreeBSD = FreeBSD ']' 00:09:19.712 ++ MAKE=gmake 00:09:19.712 +++ sysctl -a 00:09:19.712 +++ grep -E -i hw.ncpu 00:09:19.712 +++ awk '{print $2}' 00:09:19.712 ++ MAKEFLAGS=-j10 00:09:19.712 ++ HUGEMEM=2048 00:09:19.712 ++ export HUGEMEM=2048 00:09:19.712 ++ HUGEMEM=2048 00:09:19.712 ++ NO_HUGE=() 00:09:19.712 ++ TEST_MODE= 00:09:19.712 ++ [[ -z '' ]] 00:09:19.712 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:09:19.712 ++ exec 00:09:19.712 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:09:19.712 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:09:19.712 ++ set_test_storage 2147483648 00:09:19.712 ++ [[ -v testdir ]] 00:09:19.712 ++ local requested_size=2147483648 00:09:19.712 ++ local mount target_dir 00:09:19.712 ++ local -A mounts fss sizes avails uses 00:09:19.712 ++ local source fs size avail mount use 00:09:19.712 ++ local storage_fallback storage_candidates 00:09:19.712 +++ mktemp -udt spdk.XXXXXX 00:09:19.712 ++ storage_fallback=/tmp/spdk.XXXXXX.cMG2r5VL 00:09:19.712 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:19.712 ++ [[ -n '' ]] 00:09:19.712 ++ [[ -n '' ]] 00:09:19.712 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.cMG2r5VL/tests/unit /tmp/spdk.XXXXXX.cMG2r5VL 00:09:19.712 ++ requested_size=2214592512 00:09:19.712 ++ read -r source fs size use avail _ mount 00:09:19.713 +++ df -T 00:09:19.713 +++ grep -v Filesystem 00:09:19.713 ++ mounts["$mount"]=/dev/gptid/bd0c1ea5-f644-11ee-93e1-001e672be6d6 00:09:19.713 ++ fss["$mount"]=ufs 00:09:19.713 ++ avails["$mount"]=17238077440 00:09:19.713 ++ sizes["$mount"]=31182712832 00:09:19.713 ++ uses["$mount"]=11450019840 00:09:19.713 ++ read -r source fs size use avail _ mount 00:09:19.713 ++ mounts["$mount"]=devfs 00:09:19.713 ++ fss["$mount"]=devfs 00:09:19.713 ++ avails["$mount"]=0 00:09:19.713 ++ sizes["$mount"]=1024 00:09:19.713 ++ uses["$mount"]=1024 00:09:19.713 ++ read -r source fs size use avail _ mount 00:09:19.713 ++ mounts["$mount"]=tmpfs 00:09:19.713 ++ fss["$mount"]=tmpfs 00:09:19.713 ++ avails["$mount"]=2147442688 00:09:19.713 ++ sizes["$mount"]=2147483648 00:09:19.713 ++ uses["$mount"]=40960 00:09:19.713 ++ read -r source fs size use avail _ mount 00:09:19.713 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output 00:09:19.713 ++ fss["$mount"]=fusefs.sshfs 00:09:19.713 ++ avails["$mount"]=89783533568 00:09:19.713 ++ sizes["$mount"]=105088212992 00:09:19.713 ++ uses["$mount"]=9919246336 00:09:19.713 ++ read -r source fs size use avail _ mount 00:09:19.713 ++ printf '* Looking for test storage...\n' 00:09:19.713 * Looking for test storage... 
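For readability, a compact sketch (not from the log) of the test-storage selection that the trace performs around this point: df output is read into per-mount arrays, then the first candidate directory whose filesystem can hold the requested size, without pushing usage past 95%, is exported as SPDK_TEST_STORAGE. Variable names mirror the trace; the real autotest_common.sh helper may differ in details such as its warning behaviour on the 95% check.

set_test_storage_sketch() {
    local requested_size=$1; shift
    local -a storage_candidates=("$@")
    local -A mounts fss sizes avails uses
    local source fs size use avail _ mount target_dir target_space new_size

    # One associative-array entry per mount point, like the read loop in the trace.
    while read -r source fs size use avail _ mount; do
        mounts[$mount]=$source
        fss[$mount]=$fs
        sizes[$mount]=$size
        uses[$mount]=$use
        avails[$mount]=$avail
    done < <(df -T | grep -v Filesystem)

    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        # Skip candidates whose filesystem cannot hold the requested size at all.
        (( target_space == 0 || target_space < requested_size )) && continue
        # Skip (the real script may only warn) when the run would push usage above 95%.
        new_size=$(( uses[$mount] + requested_size ))
        (( new_size * 100 / sizes[$mount] > 95 )) && continue
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        return 0
    done
    return 1
}

In this run the ufs root filesystem passes both checks (11450019840 used + 2214592512 requested is well under 95% of 31182712832), so the trace that follows settles on the unit-test directory itself.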
00:09:19.713 ++ local target_space new_size 00:09:19.713 ++ for target_dir in "${storage_candidates[@]}" 00:09:19.713 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:09:19.713 +++ awk '$1 !~ /Filesystem/{print $6}' 00:09:19.713 ++ mount=/ 00:09:19.713 ++ target_space=17238077440 00:09:19.713 ++ (( target_space == 0 || target_space < requested_size )) 00:09:19.713 ++ (( target_space >= requested_size )) 00:09:19.713 ++ [[ ufs == tmpfs ]] 00:09:19.713 ++ [[ ufs == ramfs ]] 00:09:19.713 ++ [[ / == / ]] 00:09:19.713 ++ new_size=13664612352 00:09:19.713 ++ (( new_size * 100 / sizes[/] > 95 )) 00:09:19.713 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:09:19.713 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:09:19.713 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:09:19.713 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:09:19.713 ++ return 0 00:09:19.713 ++ set -o errtrace 00:09:19.713 ++ shopt -s extdebug 00:09:19.713 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:09:19.713 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@1683 -- # true 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@29 -- # exec 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@18 -- # set -x 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@158 -- # '[' -z x ']' 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@178 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=clang 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@179 -- # hash lcov 00:09:19.713 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 179: hash: lcov: not found 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@182 -- # cov_avail=no 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@184 -- # '[' no = yes ']' 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@206 -- # uname -m 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@206 -- # '[' amd64 = aarch64 ']' 00:09:19.713 02:10:07 unittest -- unit/unittest.sh@210 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.713 02:10:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:19.713 ************************************ 00:09:19.713 START TEST unittest_pci_event 00:09:19.713 ************************************ 00:09:19.713 02:10:07 unittest.unittest_pci_event -- common/autotest_common.sh@1121 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:09:19.713 00:09:19.713 00:09:19.713 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.713 http://cunit.sourceforge.net/ 00:09:19.713 00:09:19.713 00:09:19.713 Suite: pci_event 00:09:19.713 Test: test_pci_parse_event ...passed 00:09:19.713 00:09:19.713 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.713 suites 1 1 n/a 0 0 00:09:19.713 tests 1 1 1 0 0 00:09:19.713 asserts 1 1 1 0 n/a 00:09:19.713 00:09:19.713 Elapsed time = 0.000 seconds 00:09:19.713 00:09:19.713 real 0m0.028s 00:09:19.713 user 0m0.006s 00:09:19.713 sys 0m0.007s 00:09:19.713 02:10:07 unittest.unittest_pci_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.713 02:10:07 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:09:19.713 ************************************ 00:09:19.713 END TEST unittest_pci_event 00:09:19.713 ************************************ 00:09:19.973 02:10:07 unittest -- unit/unittest.sh@211 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:09:19.973 02:10:07 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:19.973 02:10:07 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.973 02:10:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:19.973 ************************************ 00:09:19.973 START TEST unittest_include 00:09:19.973 ************************************ 00:09:19.973 02:10:07 unittest.unittest_include -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:09:19.973 00:09:19.973 00:09:19.973 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.973 http://cunit.sourceforge.net/ 00:09:19.973 00:09:19.973 00:09:19.973 Suite: histogram 00:09:19.973 Test: histogram_test ...passed 00:09:19.973 Test: histogram_merge ...passed 00:09:19.973 00:09:19.973 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.973 suites 1 1 n/a 0 0 00:09:19.973 tests 2 2 2 0 0 00:09:19.973 asserts 50 50 50 0 n/a 00:09:19.973 00:09:19.973 Elapsed time = 0.000 seconds 00:09:19.973 00:09:19.973 real 0m0.008s 00:09:19.973 user 0m0.008s 00:09:19.973 sys 0m0.000s 00:09:19.973 02:10:07 unittest.unittest_include -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.973 02:10:07 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:09:19.973 ************************************ 00:09:19.973 END TEST unittest_include 00:09:19.973 ************************************ 00:09:19.973 02:10:07 unittest -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:09:19.973 02:10:07 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:19.973 02:10:07 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.973 02:10:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:19.973 ************************************ 00:09:19.973 START TEST unittest_bdev 00:09:19.973 ************************************ 00:09:19.973 02:10:07 unittest.unittest_bdev -- common/autotest_common.sh@1121 -- # unittest_bdev 00:09:19.973 02:10:07 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:09:19.973 00:09:19.973 00:09:19.973 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.973 http://cunit.sourceforge.net/ 00:09:19.973 00:09:19.973 00:09:19.973 Suite: 
bdev 00:09:19.973 Test: bytes_to_blocks_test ...passed 00:09:19.973 Test: num_blocks_test ...passed 00:09:19.973 Test: io_valid_test ...passed 00:09:19.973 Test: open_write_test ...[2024-05-15 02:10:07.803232] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:09:19.973 [2024-05-15 02:10:07.803530] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:09:19.973 [2024-05-15 02:10:07.803547] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:09:19.973 passed 00:09:19.973 Test: claim_test ...passed 00:09:19.973 Test: alias_add_del_test ...[2024-05-15 02:10:07.807144] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:09:19.973 [2024-05-15 02:10:07.807198] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4605:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:09:19.973 [2024-05-15 02:10:07.807213] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:09:19.973 passed 00:09:19.973 Test: get_device_stat_test ...passed 00:09:19.973 Test: bdev_io_types_test ...passed 00:09:19.973 Test: bdev_io_wait_test ...passed 00:09:19.973 Test: bdev_io_spans_split_test ...passed 00:09:19.973 Test: bdev_io_boundary_split_test ...passed 00:09:19.973 Test: bdev_io_max_size_and_segment_split_test ...[2024-05-15 02:10:07.815390] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:09:19.973 passed 00:09:19.973 Test: bdev_io_mix_split_test ...passed 00:09:19.973 Test: bdev_io_split_with_io_wait ...passed 00:09:19.973 Test: bdev_io_write_unit_split_test ...[2024-05-15 02:10:07.820726] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:09:19.973 [2024-05-15 02:10:07.820778] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:09:19.973 [2024-05-15 02:10:07.820801] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:09:19.973 [2024-05-15 02:10:07.820818] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:09:19.973 passed 00:09:19.973 Test: bdev_io_alignment_with_boundary ...passed 00:09:19.973 Test: bdev_io_alignment ...passed 00:09:19.973 Test: bdev_histograms ...passed 00:09:19.973 Test: bdev_write_zeroes ...passed 00:09:19.973 Test: bdev_compare_and_write ...passed 00:09:19.973 Test: bdev_compare ...passed 00:09:19.973 Test: bdev_compare_emulated ...passed 00:09:19.973 Test: bdev_zcopy_write ...passed 00:09:19.973 Test: bdev_zcopy_read ...passed 00:09:19.973 Test: bdev_open_while_hotremove ...passed 00:09:19.973 Test: bdev_close_while_hotremove ...passed 00:09:19.974 Test: bdev_open_ext_test ...passed 00:09:19.974 Test: bdev_open_ext_unregister ...passed 00:09:19.974 Test: bdev_set_io_timeout ...[2024-05-15 02:10:07.841054] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:09:19.974 [2024-05-15 02:10:07.841126] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:09:19.974 passed 00:09:19.974 Test: bdev_set_qd_sampling ...passed 00:09:19.974 Test: lba_range_overlap ...passed 00:09:19.974 Test: lock_lba_range_check_ranges ...passed 00:09:19.974 Test: lock_lba_range_with_io_outstanding ...passed 00:09:19.974 Test: lock_lba_range_overlapped ...passed 00:09:19.974 Test: bdev_quiesce ...[2024-05-15 02:10:07.850915] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10059:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:09:19.974 passed 00:09:19.974 Test: bdev_io_abort ...passed 00:09:19.974 Test: bdev_unmap ...passed 00:09:19.974 Test: bdev_write_zeroes_split_test ...passed 00:09:19.974 Test: bdev_set_options_test ...passed 00:09:19.974 Test: bdev_get_memory_domains ...passed 00:09:19.974 Test: bdev_io_ext ...[2024-05-15 02:10:07.856867] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:09:19.974 passed 00:09:19.974 Test: bdev_io_ext_no_opts ...passed 00:09:19.974 Test: bdev_io_ext_invalid_opts ...passed 00:09:19.974 Test: bdev_io_ext_split ...passed 00:09:19.974 Test: bdev_io_ext_bounce_buffer ...passed 00:09:19.974 Test: bdev_register_uuid_alias ...[2024-05-15 02:10:07.866929] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 406ab16f-1260-11ef-99fd-bfc7c66e2865 already exists 00:09:19.974 [2024-05-15 02:10:07.866986] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:406ab16f-1260-11ef-99fd-bfc7c66e2865 alias for bdev bdev0 00:09:19.974 passed 00:09:19.974 Test: bdev_unregister_by_name ...[2024-05-15 02:10:07.867448] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7926:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:09:19.974 [2024-05-15 02:10:07.867468] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:09:19.974 passed 00:09:19.974 Test: for_each_bdev_test ...passed 00:09:19.974 Test: bdev_seek_test ...passed 00:09:19.974 Test: bdev_copy ...passed 00:09:19.974 Test: bdev_copy_split_test ...passed 00:09:19.974 Test: examine_locks ...passed 00:09:19.974 Test: claim_v2_rwo ...[2024-05-15 02:10:07.873004] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873045] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873059] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873080] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873093] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873109] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8656:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:09:19.974 passed 00:09:19.974 Test: claim_v2_rom ...[2024-05-15 02:10:07.873156] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873171] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873183] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873200] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873214] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:09:19.974 [2024-05-15 02:10:07.873227] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:19.974 passed 00:09:19.974 Test: claim_v2_rwm ...passed 00:09:19.974 Test: claim_v2_existing_writer ...passed 00:09:19.974 Test: claim_v2_existing_v1 ...[2024-05-15 02:10:07.873255] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8729:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:09:19.974 [2024-05-15 02:10:07.873269] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873281] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873293] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: 
*ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873305] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873318] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8748:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873340] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8729:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:09:19.974 [2024-05-15 02:10:07.873372] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:19.974 [2024-05-15 02:10:07.873384] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:09:19.974 [2024-05-15 02:10:07.873412] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873424] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873436] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:09:19.974 passed 00:09:19.974 Test: claim_v1_existing_v2 ...passed 00:09:19.974 Test: examine_claimed ...passed 00:09:19.974 00:09:19.974 [2024-05-15 02:10:07.873463] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873478] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873491] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:09:19.974 [2024-05-15 02:10:07.873572] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:09:19.974 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.974 suites 1 1 n/a 0 0 00:09:19.974 tests 59 59 59 0 0 00:09:19.974 asserts 4599 4599 4599 0 n/a 00:09:19.974 00:09:19.974 Elapsed time = 0.070 seconds 00:09:19.974 02:10:07 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:09:19.974 00:09:19.974 00:09:19.974 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.974 http://cunit.sourceforge.net/ 00:09:19.974 00:09:19.974 00:09:19.974 Suite: nvme 00:09:19.974 Test: test_create_ctrlr ...passed 00:09:19.974 Test: test_reset_ctrlr ...[2024-05-15 02:10:07.883530] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:09:19.974 passed 00:09:19.974 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:09:19.974 Test: test_failover_ctrlr ...passed 00:09:19.974 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:09:19.974 Test: test_pending_reset ...[2024-05-15 02:10:07.883976] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.974 [2024-05-15 02:10:07.884004] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.974 [2024-05-15 02:10:07.884027] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.974 passed 00:09:19.974 Test: test_attach_ctrlr ...[2024-05-15 02:10:07.884161] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.974 [2024-05-15 02:10:07.884193] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.974 passed 00:09:19.974 Test: test_aer_cb ...[2024-05-15 02:10:07.884295] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:09:19.974 passed 00:09:19.974 Test: test_submit_nvme_cmd ...passed 00:09:19.974 Test: test_add_remove_trid ...passed 00:09:19.974 Test: test_abort ...passed 00:09:19.974 Test: test_get_io_qpair ...passed 00:09:19.974 Test: test_bdev_unregister ...passed 00:09:19.974 Test: test_compare_ns ...passed 00:09:19.974 Test: test_init_ana_log_page ...[2024-05-15 02:10:07.884563] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7436:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:09:19.974 passed 00:09:19.974 Test: test_get_memory_domains ...passed 00:09:19.974 Test: test_reconnect_qpair ...passed 00:09:19.974 Test: test_create_bdev_ctrlr ...[2024-05-15 02:10:07.884825] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.974 passed 00:09:19.974 Test: test_add_multi_ns_to_bdev ...[2024-05-15 02:10:07.884873] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5362:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:09:19.974 [2024-05-15 02:10:07.885001] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4553:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:09:19.974 passed 00:09:19.974 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:09:19.974 Test: test_admin_path ...passed 00:09:19.974 Test: test_reset_bdev_ctrlr ...passed 00:09:19.974 Test: test_find_io_path ...passed 00:09:19.974 Test: test_retry_io_if_ana_state_is_updating ...passed 00:09:19.974 Test: test_retry_io_for_io_path_error ...passed 00:09:19.974 Test: test_retry_io_count ...passed 00:09:19.974 Test: test_concurrent_read_ana_log_page ...passed 00:09:19.974 Test: test_retry_io_for_ana_error ...passed 00:09:19.974 Test: test_check_io_error_resiliency_params ...[2024-05-15 02:10:07.885836] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6056:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:09:19.975 [2024-05-15 02:10:07.885865] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6060:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:09:19.975 [2024-05-15 02:10:07.885888] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6069:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:09:19.975 [2024-05-15 02:10:07.885910] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6072:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:09:19.975 [2024-05-15 02:10:07.885933] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:09:19.975 [2024-05-15 02:10:07.885962] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:09:19.975 [2024-05-15 02:10:07.885985] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6064:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:09:19.975 [2024-05-15 02:10:07.886007] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6079:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:09:19.975 [2024-05-15 02:10:07.886029] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:09:19.975 passed 00:09:19.975 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:09:19.975 Test: test_reconnect_ctrlr ...[2024-05-15 02:10:07.886153] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 passed 00:09:19.975 Test: test_retry_failover_ctrlr ...passed 00:09:19.975 Test: test_fail_path ...[2024-05-15 02:10:07.886190] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.886256] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.886300] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.886335] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.886381] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.886440] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.886460] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:09:19.975 [2024-05-15 02:10:07.886478] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.886496] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 passed 00:09:19.975 Test: test_nvme_ns_cmp ...passed 00:09:19.975 Test: test_ana_transition ...passed 00:09:19.975 Test: test_set_preferred_path ...passed 00:09:19.975 Test: test_find_next_io_path ...passed 00:09:19.975 Test: test_find_io_path_min_qd ...passed 00:09:19.975 Test: test_disable_auto_failback ...[2024-05-15 02:10:07.886513] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 passed 00:09:19.975 Test: test_set_multipath_policy ...passed 00:09:19.975 Test: test_uuid_generation ...[2024-05-15 02:10:07.886687] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 passed 00:09:19.975 Test: test_retry_io_to_same_path ...passed 00:09:19.975 Test: test_race_between_reset_and_disconnected ...passed 00:09:19.975 Test: test_ctrlr_op_rpc ...passed 00:09:19.975 Test: test_bdev_ctrlr_op_rpc ...passed 00:09:19.975 Test: test_disable_enable_ctrlr ...passed 00:09:19.975 Test: test_delete_ctrlr_done ...passed[2024-05-15 02:10:07.924412] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:19.975 [2024-05-15 02:10:07.924497] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:09:19.975 00:09:19.975 Test: test_ns_remove_during_reset ...passed 00:09:19.975 00:09:19.975 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.975 suites 1 1 n/a 0 0 00:09:19.975 tests 48 48 48 0 0 00:09:19.975 asserts 3565 3565 3565 0 n/a 00:09:19.975 00:09:19.975 Elapsed time = 0.016 seconds 00:09:19.975 02:10:07 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:09:19.975 00:09:19.975 00:09:19.975 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.975 http://cunit.sourceforge.net/ 00:09:19.975 00:09:19.975 Test Options 00:09:19.975 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:09:19.975 00:09:19.975 Suite: raid 00:09:19.975 Test: test_create_raid ...passed 00:09:19.975 Test: test_create_raid_superblock ...passed 00:09:19.975 Test: test_delete_raid ...passed 00:09:19.975 Test: test_create_raid_invalid_args ...[2024-05-15 02:10:07.936716] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1498:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:09:19.975 [2024-05-15 02:10:07.936941] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1492:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:09:19.975 [2024-05-15 02:10:07.937004] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1482:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:09:19.975 [2024-05-15 02:10:07.937027] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3133:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:19.975 [2024-05-15 02:10:07.937036] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3309:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:09:19.975 [2024-05-15 02:10:07.937140] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3133:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:19.975 [2024-05-15 02:10:07.937149] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3309:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:09:19.975 passed 00:09:19.975 Test: test_delete_raid_invalid_args ...passed 00:09:19.975 Test: test_io_channel ...passed 00:09:19.975 Test: test_reset_io ...passed 00:09:19.975 Test: test_write_io ...passed 00:09:19.975 Test: test_read_io ...passed 00:09:21.353 Test: test_unmap_io ...passed 00:09:21.353 Test: test_io_failure ...[2024-05-15 02:10:09.121893] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 966:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:09:21.353 passed 00:09:21.353 Test: test_multi_raid_no_io ...passed 00:09:21.353 Test: test_multi_raid_with_io ...passed 00:09:21.353 Test: test_io_type_supported ...passed 00:09:21.353 Test: test_raid_json_dump_info ...passed 00:09:21.353 Test: test_context_size ...passed 00:09:21.353 Test: test_raid_level_conversions ...passed 00:09:21.353 Test: test_raid_io_split ...passed 00:09:21.353 Test: test_raid_process ...passedTest Options 00:09:21.353 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:09:21.353 00:09:21.353 Suite: raid_dif 00:09:21.353 Test: test_create_raid ...passed 00:09:21.353 Test: test_create_raid_superblock ...passed 00:09:21.353 Test: test_delete_raid ...passed 00:09:21.353 Test: 
test_create_raid_invalid_args ...[2024-05-15 02:10:09.123102] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1498:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:09:21.353 [2024-05-15 02:10:09.123129] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1492:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:09:21.353 [2024-05-15 02:10:09.123187] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1482:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:09:21.353 [2024-05-15 02:10:09.123201] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3133:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:21.353 [2024-05-15 02:10:09.123208] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3309:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:09:21.353 passed 00:09:21.353 Test: test_delete_raid_invalid_args ...[2024-05-15 02:10:09.123290] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3133:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:09:21.353 [2024-05-15 02:10:09.123296] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3309:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:09:21.353 passed 00:09:21.353 Test: test_io_channel ...passed 00:09:21.353 Test: test_reset_io ...passed 00:09:21.353 Test: test_write_io ...passed 00:09:21.353 Test: test_read_io ...passed 00:09:21.921 Test: test_unmap_io ...passed 00:09:21.921 Test: test_io_failure ...[2024-05-15 02:10:09.897733] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 966:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:09:21.921 passed 00:09:21.921 Test: test_multi_raid_no_io ...passed 00:09:21.921 Test: test_multi_raid_with_io ...passed 00:09:21.921 Test: test_io_type_supported ...passed 00:09:21.921 Test: test_raid_json_dump_info ...passed 00:09:21.921 Test: test_context_size ...passed 00:09:21.921 Test: test_raid_level_conversions ...passed 00:09:21.921 Test: test_raid_io_split ...passed 00:09:21.921 Test: test_raid_process ...passed 00:09:21.921 00:09:21.921 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.921 suites 2 2 n/a 0 0 00:09:21.921 tests 38 38 38 0 0 00:09:21.921 asserts 355741 355741 355741 0 n/a 00:09:21.921 00:09:21.921 Elapsed time = 1.961 seconds 00:09:21.921 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:09:21.921 00:09:21.921 00:09:21.921 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.921 http://cunit.sourceforge.net/ 00:09:21.921 00:09:21.921 00:09:21.921 Suite: raid_sb 00:09:21.921 Test: test_raid_bdev_write_superblock ...passed 00:09:21.921 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:09:21.921 Test: test_raid_bdev_parse_superblock ...[2024-05-15 02:10:09.911257] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:09:21.921 passed 00:09:21.921 Suite: raid_sb_md 00:09:21.921 Test: test_raid_bdev_write_superblock ...passed 00:09:21.921 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:09:21.921 Test: test_raid_bdev_parse_superblock ...passed 00:09:21.921 Suite: raid_sb_md_interleaved 00:09:21.921 Test: test_raid_bdev_write_superblock ...passed 
00:09:21.921 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:09:21.921 Test: test_raid_bdev_parse_superblock ...passed 00:09:21.921 00:09:21.921 [2024-05-15 02:10:09.912112] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:09:21.921 [2024-05-15 02:10:09.912282] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:09:21.921 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.921 suites 3 3 n/a 0 0 00:09:21.921 tests 9 9 9 0 0 00:09:21.921 asserts 139 139 139 0 n/a 00:09:21.921 00:09:21.921 Elapsed time = 0.000 seconds 00:09:21.921 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:09:21.921 00:09:21.921 00:09:21.921 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.921 http://cunit.sourceforge.net/ 00:09:21.921 00:09:21.921 00:09:21.921 Suite: concat 00:09:21.921 Test: test_concat_start ...passed 00:09:21.921 Test: test_concat_rw ...passed 00:09:21.921 Test: test_concat_null_payload ...passed 00:09:21.921 00:09:21.921 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.921 suites 1 1 n/a 0 0 00:09:21.921 tests 3 3 3 0 0 00:09:21.921 asserts 8460 8460 8460 0 n/a 00:09:21.921 00:09:21.921 Elapsed time = 0.000 seconds 00:09:21.921 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:09:22.181 00:09:22.181 00:09:22.181 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.181 http://cunit.sourceforge.net/ 00:09:22.181 00:09:22.181 00:09:22.181 Suite: raid1 00:09:22.181 Test: test_raid1_start ...passed 00:09:22.181 Test: test_raid1_read_balancing ...passed 00:09:22.181 00:09:22.181 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.181 suites 1 1 n/a 0 0 00:09:22.181 tests 2 2 2 0 0 00:09:22.181 asserts 2880 2880 2880 0 n/a 00:09:22.181 00:09:22.181 Elapsed time = 0.000 seconds 00:09:22.181 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:09:22.181 00:09:22.181 00:09:22.181 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.181 http://cunit.sourceforge.net/ 00:09:22.181 00:09:22.181 00:09:22.181 Suite: zone 00:09:22.181 Test: test_zone_get_operation ...passed 00:09:22.181 Test: test_bdev_zone_get_info ...passed 00:09:22.181 Test: test_bdev_zone_management ...passed 00:09:22.181 Test: test_bdev_zone_append ...passed 00:09:22.181 Test: test_bdev_zone_append_with_md ...passed 00:09:22.181 Test: test_bdev_zone_appendv ...passed 00:09:22.181 Test: test_bdev_zone_appendv_with_md ...passed 00:09:22.181 Test: test_bdev_io_get_append_location ...passed 00:09:22.181 00:09:22.181 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.181 suites 1 1 n/a 0 0 00:09:22.181 tests 8 8 8 0 0 00:09:22.181 asserts 94 94 94 0 n/a 00:09:22.181 00:09:22.181 Elapsed time = 0.000 seconds 00:09:22.181 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:09:22.181 00:09:22.181 00:09:22.181 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.181 http://cunit.sourceforge.net/ 00:09:22.181 00:09:22.181 00:09:22.181 Suite: gpt_parse 00:09:22.181 Test: 
test_parse_mbr_and_primary ...[2024-05-15 02:10:09.936266] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:22.181 [2024-05-15 02:10:09.936615] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:22.181 [2024-05-15 02:10:09.936662] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:09:22.181 [2024-05-15 02:10:09.936682] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:09:22.181 [2024-05-15 02:10:09.936703] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:09:22.181 [2024-05-15 02:10:09.936722] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:09:22.181 passed 00:09:22.181 Test: test_parse_secondary ...[2024-05-15 02:10:09.936970] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:09:22.181 [2024-05-15 02:10:09.936987] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:09:22.181 [2024-05-15 02:10:09.937006] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:09:22.181 [2024-05-15 02:10:09.937023] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:09:22.181 passed 00:09:22.181 Test: test_check_mbr ...passed 00:09:22.181 Test: test_read_header ...[2024-05-15 02:10:09.937265] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:22.181 [2024-05-15 02:10:09.937283] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:09:22.181 [2024-05-15 02:10:09.937310] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:09:22.181 [2024-05-15 02:10:09.937329] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:09:22.181 passed 00:09:22.181 Test: test_read_partitions ...[2024-05-15 02:10:09.937348] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:09:22.181 [2024-05-15 02:10:09.937367] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:09:22.181 [2024-05-15 02:10:09.937386] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:09:22.181 [2024-05-15 02:10:09.937403] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:09:22.181 [2024-05-15 02:10:09.937429] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:09:22.181 [2024-05-15 02:10:09.937447] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: 
Partition_entry_size(0) != expected(80) 00:09:22.181 [2024-05-15 02:10:09.937464] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:09:22.181 [2024-05-15 02:10:09.937480] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:09:22.181 [2024-05-15 02:10:09.937630] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:09:22.181 passed 00:09:22.181 00:09:22.181 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.181 suites 1 1 n/a 0 0 00:09:22.181 tests 5 5 5 0 0 00:09:22.181 asserts 33 33 33 0 n/a 00:09:22.181 00:09:22.181 Elapsed time = 0.000 seconds 00:09:22.181 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:09:22.181 00:09:22.181 00:09:22.181 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.181 http://cunit.sourceforge.net/ 00:09:22.181 00:09:22.181 00:09:22.181 Suite: bdev_part 00:09:22.181 Test: part_test ...[2024-05-15 02:10:09.947092] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:09:22.181 passed 00:09:22.181 Test: part_free_test ...passed 00:09:22.181 Test: part_get_io_channel_test ...passed 00:09:22.181 Test: part_construct_ext ...passed 00:09:22.181 00:09:22.181 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.181 suites 1 1 n/a 0 0 00:09:22.181 tests 4 4 4 0 0 00:09:22.181 asserts 48 48 48 0 n/a 00:09:22.181 00:09:22.181 Elapsed time = 0.008 seconds 00:09:22.181 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:09:22.181 00:09:22.181 00:09:22.181 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.181 http://cunit.sourceforge.net/ 00:09:22.181 00:09:22.181 00:09:22.181 Suite: scsi_nvme_suite 00:09:22.181 Test: scsi_nvme_translate_test ...passed 00:09:22.181 00:09:22.181 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.181 suites 1 1 n/a 0 0 00:09:22.181 tests 1 1 1 0 0 00:09:22.181 asserts 104 104 104 0 n/a 00:09:22.181 00:09:22.181 Elapsed time = 0.000 seconds 00:09:22.181 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:09:22.181 00:09:22.181 00:09:22.181 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.181 http://cunit.sourceforge.net/ 00:09:22.181 00:09:22.181 00:09:22.181 Suite: lvol 00:09:22.181 Test: ut_lvs_init ...passed 00:09:22.181 Test: ut_lvol_init ...passed 00:09:22.181 Test: ut_lvol_snapshot ...passed 00:09:22.181 Test: ut_lvol_clone ...passed 00:09:22.181 Test: ut_lvs_destroy ...passed 00:09:22.181 Test: ut_lvs_unload ...passed 00:09:22.181 Test: ut_lvol_resize ...passed 00:09:22.181 Test: ut_lvol_set_read_only ...passed 00:09:22.181 Test: ut_lvol_hotremove ...passed 00:09:22.181 Test: ut_vbdev_lvol_get_io_channel ...[2024-05-15 02:10:09.963519] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:09:22.181 [2024-05-15 02:10:09.963684] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:09:22.181 [2024-05-15 02:10:09.963751] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:09:22.181 passed 00:09:22.182 Test: ut_vbdev_lvol_io_type_supported ...passed 00:09:22.182 Test: ut_lvol_read_write ...passed 00:09:22.182 Test: ut_vbdev_lvol_submit_request ...passed 00:09:22.182 Test: ut_lvol_examine_config ...passed 00:09:22.182 Test: ut_lvol_examine_disk ...passed 00:09:22.182 Test: ut_lvol_rename ...passed 00:09:22.182 Test: ut_bdev_finish ...passed 00:09:22.182 Test: ut_lvs_rename ...passed 00:09:22.182 Test: ut_lvol_seek ...passed 00:09:22.182 Test: ut_esnap_dev_create ...[2024-05-15 02:10:09.963837] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:09:22.182 [2024-05-15 02:10:09.963897] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:09:22.182 [2024-05-15 02:10:09.963909] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:09:22.182 [2024-05-15 02:10:09.963945] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:09:22.182 [2024-05-15 02:10:09.963956] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:09:22.182 [2024-05-15 02:10:09.963981] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:09:22.182 passed 00:09:22.182 Test: ut_lvol_esnap_clone_bad_args ...passed 00:09:22.182 Test: ut_lvol_shallow_copy ...passed 00:09:22.182 00:09:22.182 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.182 suites 1 1 n/a 0 0 00:09:22.182 tests 22 22 22 0 0 00:09:22.182 asserts 793 793 793 0 n/a 00:09:22.182 00:09:22.182 Elapsed time = 0.000 seconds 00:09:22.182 [2024-05-15 02:10:09.964017] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:09:22.182 [2024-05-15 02:10:09.964038] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:09:22.182 [2024-05-15 02:10:09.964047] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:09:22.182 [2024-05-15 02:10:09.964069] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:09:22.182 [2024-05-15 02:10:09.964077] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:09:22.182 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:09:22.182 00:09:22.182 00:09:22.182 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.182 http://cunit.sourceforge.net/ 00:09:22.182 00:09:22.182 00:09:22.182 Suite: zone_block 00:09:22.182 Test: test_zone_block_create ...passed 00:09:22.182 Test: test_zone_block_create_invalid ...[2024-05-15 02:10:09.974497] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:09:22.182 passed 00:09:22.182 Test: test_get_zone_info ...passed 00:09:22.182 Test: test_supported_io_types ...passed 00:09:22.182 Test: test_reset_zone ...[2024-05-15 02:10:09.974660] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-15 02:10:09.974678] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:09:22.182 [2024-05-15 02:10:09.974690] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-15 02:10:09.974701] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:09:22.182 [2024-05-15 02:10:09.974710] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-15 02:10:09.974719] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:09:22.182 [2024-05-15 02:10:09.974728] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-15 02:10:09.974783] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.974810] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.974821] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.974870] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 passed 00:09:22.182 Test: test_open_zone ...passed 00:09:22.182 Test: test_zone_write ...[2024-05-15 02:10:09.974882] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.974919] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.975122] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.975134] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:09:22.182 [2024-05-15 02:10:09.975168] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:09:22.182 [2024-05-15 02:10:09.975178] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.975189] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:09:22.182 [2024-05-15 02:10:09.975197] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.975665] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:09:22.182 [2024-05-15 02:10:09.975674] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.975686] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:09:22.182 [2024-05-15 02:10:09.975694] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 passed 00:09:22.182 Test: test_zone_read ...[2024-05-15 02:10:09.976218] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:09:22.182 [2024-05-15 02:10:09.976238] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976269] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:09:22.182 [2024-05-15 02:10:09.976279] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 passed 00:09:22.182 Test: test_close_zone ...passed 00:09:22.182 Test: test_finish_zone ...[2024-05-15 02:10:09.976291] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:09:22.182 [2024-05-15 02:10:09.976300] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976342] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:09:22.182 [2024-05-15 02:10:09.976351] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976376] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:09:22.182 [2024-05-15 02:10:09.976390] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976424] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976434] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976483] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 passed 00:09:22.182 Test: test_append_zone ...[2024-05-15 02:10:09.976495] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976522] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:09:22.182 [2024-05-15 02:10:09.976531] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 [2024-05-15 02:10:09.976542] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:09:22.182 [2024-05-15 02:10:09.976550] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:09:22.182 passed 00:09:22.182 00:09:22.182 [2024-05-15 02:10:09.977548] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:09:22.182 [2024-05-15 02:10:09.977567] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
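Editor's note: the zone_block "ERROR on bdev_io submission!" lines above are deliberate; test_zone_write and test_append_zone submit I/O at the wrong LBA or past the zone capacity and expect the vbdev to reject it. A rough C sketch of the two checks being exercised is below; the zone structure, field names and error prints are assumptions for illustration, not the vbdev_zone_block internals.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative zone state; not the SPDK vbdev_zone_block structures. */
    struct example_zone {
        uint64_t start_lba;   /* first LBA of the zone        */
        uint64_t capacity;    /* writable blocks in the zone  */
        uint64_t write_ptr;   /* next LBA that may be written */
    };

    static int example_zone_write_check(const struct example_zone *z,
                                        uint64_t lba, uint64_t len)
    {
        /* Sequential zones only accept writes at the current write pointer. */
        if (lba != z->write_ptr) {
            fprintf(stderr, "invalid address (lba 0x%llx, wp 0x%llx)\n",
                    (unsigned long long)lba, (unsigned long long)z->write_ptr);
            return -EINVAL;
        }
        /* The write must also fit inside the remaining zone capacity. */
        if (lba + len > z->start_lba + z->capacity) {
            fprintf(stderr, "write exceeds zone capacity\n");
            return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        struct example_zone z = { .start_lba = 0x400, .capacity = 0x3f0, .write_ptr = 0x405 };
        /* Mirrors the logged case: writing at lba 0x407 while wp is 0x405 is rejected. */
        return example_zone_write_check(&z, 0x407, 0x20) == -EINVAL ? 0 : 1;
    }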
00:09:22.182 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.182 suites 1 1 n/a 0 0 00:09:22.182 tests 11 11 11 0 0 00:09:22.182 asserts 3437 3437 3437 0 n/a 00:09:22.182 00:09:22.182 Elapsed time = 0.008 seconds 00:09:22.182 02:10:09 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:09:22.182 00:09:22.182 00:09:22.182 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.182 http://cunit.sourceforge.net/ 00:09:22.182 00:09:22.182 00:09:22.182 Suite: bdev 00:09:22.182 Test: basic ...[2024-05-15 02:10:09.986146] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248db9): Operation not permitted (rc=-1) 00:09:22.182 [2024-05-15 02:10:09.986298] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82c974480 (0x248db0): Operation not permitted (rc=-1) 00:09:22.182 [2024-05-15 02:10:09.986310] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248db9): Operation not permitted (rc=-1) 00:09:22.182 passed 00:09:22.182 Test: unregister_and_close ...passed 00:09:22.183 Test: unregister_and_close_different_threads ...passed 00:09:22.183 Test: basic_qos ...passed 00:09:22.183 Test: put_channel_during_reset ...passed 00:09:22.183 Test: aborted_reset ...passed 00:09:22.183 Test: aborted_reset_no_outstanding_io ...passed 00:09:22.183 Test: io_during_reset ...passed 00:09:22.183 Test: reset_completions ...passed 00:09:22.183 Test: io_during_qos_queue ...passed 00:09:22.183 Test: io_during_qos_reset ...passed 00:09:22.183 Test: enomem ...passed 00:09:22.183 Test: enomem_multi_bdev ...passed 00:09:22.183 Test: enomem_multi_bdev_unregister ...passed 00:09:22.183 Test: enomem_multi_io_target ...passed 00:09:22.183 Test: qos_dynamic_enable ...passed 00:09:22.183 Test: bdev_histograms_mt ...passed 00:09:22.183 Test: bdev_set_io_timeout_mt ...[2024-05-15 02:10:10.015190] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x82c974600 not unregistered 00:09:22.183 passed 00:09:22.183 Test: lock_lba_range_then_submit_io ...[2024-05-15 02:10:10.016048] thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x248d98 already registered (old:0x82c974600 new:0x82c974780) 00:09:22.183 passed 00:09:22.183 Test: unregister_during_reset ...passed 00:09:22.183 Test: event_notify_and_close ...passed 00:09:22.183 Suite: bdev_wrong_thread 00:09:22.183 Test: spdk_bdev_register_wt ...passed 00:09:22.183 Test: spdk_bdev_examine_wt ...passed[2024-05-15 02:10:10.019406] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x82c93d700 (0x82c93d700) 00:09:22.183 [2024-05-15 02:10:10.019442] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82c93d700 (0x82c93d700) 00:09:22.183 00:09:22.183 00:09:22.183 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.183 suites 2 2 n/a 0 0 00:09:22.183 tests 23 23 23 0 0 00:09:22.183 asserts 601 601 601 0 n/a 00:09:22.183 00:09:22.183 Elapsed time = 0.039 seconds 00:09:22.183 00:09:22.183 real 0m2.233s 00:09:22.183 user 0m1.889s 00:09:22.183 sys 0m0.319s 00:09:22.183 02:10:10 unittest.unittest_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.183 02:10:10 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:22.183 ************************************ 00:09:22.183 END TEST 
unittest_bdev 00:09:22.183 ************************************ 00:09:22.183 02:10:10 unittest -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:22.183 02:10:10 unittest -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:22.183 02:10:10 unittest -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:22.183 02:10:10 unittest -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:22.183 02:10:10 unittest -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:09:22.183 02:10:10 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:22.183 02:10:10 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.183 02:10:10 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:22.183 ************************************ 00:09:22.183 START TEST unittest_blob_blobfs 00:09:22.183 ************************************ 00:09:22.183 02:10:10 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1121 -- # unittest_blob 00:09:22.183 02:10:10 unittest.unittest_blob_blobfs -- unit/unittest.sh@38 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:09:22.183 02:10:10 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:09:22.183 00:09:22.183 00:09:22.183 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.183 http://cunit.sourceforge.net/ 00:09:22.183 00:09:22.183 00:09:22.183 Suite: blob_nocopy_noextent 00:09:22.183 Test: blob_init ...[2024-05-15 02:10:10.079746] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:22.183 passed 00:09:22.183 Test: blob_thin_provision ...passed 00:09:22.183 Test: blob_read_only ...passed 00:09:22.183 Test: bs_load ...[2024-05-15 02:10:10.158172] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:22.183 passed 00:09:22.183 Test: bs_load_custom_cluster_size ...passed 00:09:22.183 Test: bs_load_after_failed_grow ...passed 00:09:22.183 Test: bs_cluster_sz ...[2024-05-15 02:10:10.179216] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:22.183 [2024-05-15 02:10:10.179271] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
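Editor's note: the bs_cluster_sz lines around this point feed spdk_bs_init deliberately invalid options, and the blobstore refuses them: zero-valued option fields, metadata reservations that exceed the available clusters, and (just below) a 4095-byte cluster that is smaller than the 4096-byte page. A minimal sketch of that option check follows; the options struct and page-size constant are invented for illustration, not the real spdk_bs_opts.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EXAMPLE_PAGE_SIZE 4096u

    /* Illustrative options struct; not SPDK's spdk_bs_opts. */
    struct example_bs_opts {
        uint32_t cluster_sz;
    };

    static int example_bs_opts_verify(const struct example_bs_opts *opts)
    {
        if (opts->cluster_sz == 0) {
            fprintf(stderr, "Blobstore options cannot be set to 0\n");
            return -EINVAL;
        }
        if (opts->cluster_sz < EXAMPLE_PAGE_SIZE) {
            /* Mirrors "Cluster size 4095 is smaller than page size 4096". */
            fprintf(stderr, "Cluster size %u is smaller than page size %u\n",
                    (unsigned)opts->cluster_sz, EXAMPLE_PAGE_SIZE);
            return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        struct example_bs_opts bad = { .cluster_sz = 4095 };
        return example_bs_opts_verify(&bad) == -EINVAL ? 0 : 1;
    }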
00:09:22.183 [2024-05-15 02:10:10.179287] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:22.442 passed 00:09:22.442 Test: bs_resize_md ...passed 00:09:22.442 Test: bs_destroy ...passed 00:09:22.442 Test: bs_type ...passed 00:09:22.442 Test: bs_super_block ...passed 00:09:22.442 Test: bs_test_recover_cluster_count ...passed 00:09:22.442 Test: bs_grow_live ...passed 00:09:22.442 Test: bs_grow_live_no_space ...passed 00:09:22.442 Test: bs_test_grow ...passed 00:09:22.442 Test: blob_serialize_test ...passed 00:09:22.442 Test: super_block_crc ...passed 00:09:22.442 Test: blob_thin_prov_write_count_io ...passed 00:09:22.442 Test: blob_thin_prov_unmap_cluster ...passed 00:09:22.442 Test: bs_load_iter_test ...passed 00:09:22.442 Test: blob_relations ...[2024-05-15 02:10:10.336662] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:22.442 [2024-05-15 02:10:10.336777] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.442 [2024-05-15 02:10:10.336977] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:22.442 [2024-05-15 02:10:10.336998] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.442 passed 00:09:22.442 Test: blob_relations2 ...[2024-05-15 02:10:10.348597] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:22.442 [2024-05-15 02:10:10.348662] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.442 [2024-05-15 02:10:10.348683] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:22.442 [2024-05-15 02:10:10.348701] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.442 [2024-05-15 02:10:10.348956] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:22.442 [2024-05-15 02:10:10.348977] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.442 [2024-05-15 02:10:10.349048] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:22.442 [2024-05-15 02:10:10.349067] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.442 passed 00:09:22.442 Test: blob_relations3 ...passed 00:09:22.701 Test: blobstore_clean_power_failure ...passed 00:09:22.701 Test: blob_delete_snapshot_power_failure ...[2024-05-15 02:10:10.489956] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:22.701 [2024-05-15 02:10:10.499657] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:22.701 [2024-05-15 02:10:10.499716] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: 
Failed to open clone 00:09:22.701 [2024-05-15 02:10:10.499726] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.701 [2024-05-15 02:10:10.509356] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:22.702 [2024-05-15 02:10:10.509393] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:22.702 [2024-05-15 02:10:10.509402] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:22.702 [2024-05-15 02:10:10.509411] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.702 [2024-05-15 02:10:10.519088] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:22.702 [2024-05-15 02:10:10.519131] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.702 [2024-05-15 02:10:10.528830] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:22.702 [2024-05-15 02:10:10.528868] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.702 [2024-05-15 02:10:10.538477] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:22.702 [2024-05-15 02:10:10.538524] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:22.702 passed 00:09:22.702 Test: blob_create_snapshot_power_failure ...[2024-05-15 02:10:10.567223] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:22.702 [2024-05-15 02:10:10.586407] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:22.702 [2024-05-15 02:10:10.596021] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:22.702 passed 00:09:22.702 Test: blob_io_unit ...passed 00:09:22.702 Test: blob_io_unit_compatibility ...passed 00:09:22.702 Test: blob_ext_md_pages ...passed 00:09:22.702 Test: blob_esnap_io_4096_4096 ...passed 00:09:22.702 Test: blob_esnap_io_512_512 ...passed 00:09:22.960 Test: blob_esnap_io_4096_512 ...passed 00:09:22.960 Test: blob_esnap_io_512_4096 ...passed 00:09:22.960 Test: blob_esnap_clone_resize ...passed 00:09:22.960 Suite: blob_bs_nocopy_noextent 00:09:22.960 Test: blob_open ...passed 00:09:22.960 Test: blob_create ...[2024-05-15 02:10:10.798458] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:22.960 passed 00:09:22.960 Test: blob_create_loop ...passed 00:09:22.960 Test: blob_create_fail ...[2024-05-15 02:10:10.867487] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:22.960 passed 00:09:22.960 Test: blob_create_internal ...passed 00:09:22.960 Test: blob_create_zero_extent ...passed 
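Editor's note: the "Unknown error -28" and "-22" codes in the blob_create and blob_create_fail lines above are plain negated POSIX errno values: -28 is -ENOSPC (the 65-cluster blob presumably does not fit the tiny test device) and -22 is -EINVAL (deliberately bad creation arguments). A tiny sketch of reading such return codes is below; the blob-create call is a stub invented for illustration.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Stub standing in for a create call that reports errors as negative
     * errno values, the convention seen in the log lines above. */
    static int example_blob_create(unsigned int num_clusters)
    {
        if (num_clusters == 0) {
            return -EINVAL;   /* -22: invalid request                  */
        }
        if (num_clusters > 64) {
            return -ENOSPC;   /* -28: does not fit on the test device  */
        }
        return 0;
    }

    int main(void)
    {
        int rc = example_blob_create(65);
        if (rc < 0) {
            /* strerror(-rc) turns -28 back into "No space left on device". */
            printf("blob create failed: %s (%d)\n", strerror(-rc), rc);
        }
        return 0;
    }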
00:09:22.960 Test: blob_snapshot ...passed 00:09:23.218 Test: blob_clone ...passed 00:09:23.218 Test: blob_inflate ...[2024-05-15 02:10:11.015543] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:23.218 passed 00:09:23.218 Test: blob_delete ...passed 00:09:23.218 Test: blob_resize_test ...[2024-05-15 02:10:11.071695] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:23.218 passed 00:09:23.218 Test: blob_resize_thin_test ...passed 00:09:23.218 Test: channel_ops ...passed 00:09:23.218 Test: blob_super ...passed 00:09:23.218 Test: blob_rw_verify_iov ...passed 00:09:23.477 Test: blob_unmap ...passed 00:09:23.477 Test: blob_iter ...passed 00:09:23.477 Test: blob_parse_md ...passed 00:09:23.477 Test: bs_load_pending_removal ...passed 00:09:23.477 Test: bs_unload ...[2024-05-15 02:10:11.328896] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:23.477 passed 00:09:23.477 Test: bs_usable_clusters ...passed 00:09:23.477 Test: blob_crc ...[2024-05-15 02:10:11.386101] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:23.477 [2024-05-15 02:10:11.386150] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:23.477 passed 00:09:23.477 Test: blob_flags ...passed 00:09:23.477 Test: bs_version ...passed 00:09:23.477 Test: blob_set_xattrs_test ...[2024-05-15 02:10:11.472157] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:23.477 [2024-05-15 02:10:11.472231] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:23.736 passed 00:09:23.736 Test: blob_thin_prov_alloc ...passed 00:09:23.736 Test: blob_insert_cluster_msg_test ...passed 00:09:23.736 Test: blob_thin_prov_rw ...passed 00:09:23.736 Test: blob_thin_prov_rle ...passed 00:09:23.736 Test: blob_thin_prov_rw_iov ...passed 00:09:23.736 Test: blob_snapshot_rw ...passed 00:09:23.736 Test: blob_snapshot_rw_iov ...passed 00:09:23.995 Test: blob_inflate_rw ...passed 00:09:23.995 Test: blob_snapshot_freeze_io ...passed 00:09:23.995 Test: blob_operation_split_rw ...passed 00:09:23.995 Test: blob_operation_split_rw_iov ...passed 00:09:23.995 Test: blob_simultaneous_operations ...[2024-05-15 02:10:11.904484] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:23.995 [2024-05-15 02:10:11.904550] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:23.995 [2024-05-15 02:10:11.904810] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:23.995 [2024-05-15 02:10:11.904822] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:23.995 [2024-05-15 02:10:11.907891] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:23.995 [2024-05-15 
02:10:11.907911] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:23.995 [2024-05-15 02:10:11.907927] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:23.995 [2024-05-15 02:10:11.907934] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:23.995 passed 00:09:23.995 Test: blob_persist_test ...passed 00:09:23.995 Test: blob_decouple_snapshot ...passed 00:09:24.254 Test: blob_seek_io_unit ...passed 00:09:24.254 Test: blob_nested_freezes ...passed 00:09:24.254 Test: blob_clone_resize ...passed 00:09:24.254 Test: blob_shallow_copy ...[2024-05-15 02:10:12.096849] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:24.254 [2024-05-15 02:10:12.096916] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:24.254 [2024-05-15 02:10:12.096925] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:24.254 passed 00:09:24.254 Suite: blob_blob_nocopy_noextent 00:09:24.254 Test: blob_write ...passed 00:09:24.254 Test: blob_read ...passed 00:09:24.254 Test: blob_rw_verify ...passed 00:09:24.254 Test: blob_rw_verify_iov_nomem ...passed 00:09:24.512 Test: blob_rw_iov_read_only ...passed 00:09:24.512 Test: blob_xattr ...passed 00:09:24.512 Test: blob_dirty_shutdown ...passed 00:09:24.512 Test: blob_is_degraded ...passed 00:09:24.512 Suite: blob_esnap_bs_nocopy_noextent 00:09:24.512 Test: blob_esnap_create ...passed 00:09:24.512 Test: blob_esnap_thread_add_remove ...passed 00:09:24.512 Test: blob_esnap_clone_snapshot ...passed 00:09:24.512 Test: blob_esnap_clone_inflate ...passed 00:09:24.512 Test: blob_esnap_clone_decouple ...passed 00:09:24.771 Test: blob_esnap_clone_reload ...passed 00:09:24.771 Test: blob_esnap_hotplug ...passed 00:09:24.771 Suite: blob_nocopy_extent 00:09:24.771 Test: blob_init ...[2024-05-15 02:10:12.547449] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:24.771 passed 00:09:24.771 Test: blob_thin_provision ...passed 00:09:24.771 Test: blob_read_only ...passed 00:09:24.771 Test: bs_load ...[2024-05-15 02:10:12.584948] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:24.771 passed 00:09:24.771 Test: bs_load_custom_cluster_size ...passed 00:09:24.771 Test: bs_load_after_failed_grow ...passed 00:09:24.771 Test: bs_cluster_sz ...[2024-05-15 02:10:12.604219] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:24.771 [2024-05-15 02:10:12.604274] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
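Editor's note: the "Cannot remove snapshot with more than one clone" error that recurs throughout the blob_relations, blob_delete and power-failure cases is the negative case those tests probe: a snapshot is only deletable while no more than one clone still references it. A schematic version of that check follows, with invented structures and return code, not the real blobstore metadata.

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative metadata; not SPDK's blob structures. */
    struct example_blob {
        bool   is_snapshot;
        size_t clone_count;   /* blobs whose parent is this snapshot */
    };

    static int example_is_blob_deletable(const struct example_blob *b)
    {
        if (b->is_snapshot && b->clone_count > 1) {
            /* Mirrors "Cannot remove snapshot with more than one clone". */
            fprintf(stderr, "Cannot remove snapshot with more than one clone\n");
            return -EBUSY;
        }
        return 0;
    }

    int main(void)
    {
        struct example_blob snap = { .is_snapshot = true, .clone_count = 2 };
        return example_is_blob_deletable(&snap) != 0 ? 0 : 1;
    }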
00:09:24.771 [2024-05-15 02:10:12.604302] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:24.771 passed 00:09:24.771 Test: bs_resize_md ...passed 00:09:24.771 Test: bs_destroy ...passed 00:09:24.771 Test: bs_type ...passed 00:09:24.771 Test: bs_super_block ...passed 00:09:24.771 Test: bs_test_recover_cluster_count ...passed 00:09:24.771 Test: bs_grow_live ...passed 00:09:24.771 Test: bs_grow_live_no_space ...passed 00:09:24.771 Test: bs_test_grow ...passed 00:09:24.771 Test: blob_serialize_test ...passed 00:09:24.771 Test: super_block_crc ...passed 00:09:24.771 Test: blob_thin_prov_write_count_io ...passed 00:09:24.771 Test: blob_thin_prov_unmap_cluster ...passed 00:09:24.771 Test: bs_load_iter_test ...passed 00:09:24.771 Test: blob_relations ...[2024-05-15 02:10:12.742140] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.771 [2024-05-15 02:10:12.742200] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.772 [2024-05-15 02:10:12.742286] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.772 [2024-05-15 02:10:12.742295] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.772 passed 00:09:24.772 Test: blob_relations2 ...[2024-05-15 02:10:12.752337] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.772 [2024-05-15 02:10:12.752358] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.772 [2024-05-15 02:10:12.752367] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.772 [2024-05-15 02:10:12.752375] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.772 [2024-05-15 02:10:12.752489] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.772 [2024-05-15 02:10:12.752498] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.772 [2024-05-15 02:10:12.752538] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:24.772 [2024-05-15 02:10:12.752547] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:24.772 passed 00:09:24.772 Test: blob_relations3 ...passed 00:09:25.030 Test: blobstore_clean_power_failure ...passed 00:09:25.030 Test: blob_delete_snapshot_power_failure ...[2024-05-15 02:10:12.885693] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:25.030 [2024-05-15 02:10:12.895316] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:25.030 [2024-05-15 02:10:12.904922] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for 
blobid 0x100000000: -5 00:09:25.030 [2024-05-15 02:10:12.904964] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:25.030 [2024-05-15 02:10:12.904973] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.030 [2024-05-15 02:10:12.914531] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:25.030 [2024-05-15 02:10:12.914575] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:25.030 [2024-05-15 02:10:12.914584] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:25.030 [2024-05-15 02:10:12.914593] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.030 [2024-05-15 02:10:12.924246] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:25.030 [2024-05-15 02:10:12.924281] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:25.030 [2024-05-15 02:10:12.924290] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:25.030 [2024-05-15 02:10:12.924298] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.030 [2024-05-15 02:10:12.933948] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:25.030 [2024-05-15 02:10:12.933980] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.030 [2024-05-15 02:10:12.943775] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:25.030 [2024-05-15 02:10:12.943815] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.030 [2024-05-15 02:10:12.953937] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:25.030 [2024-05-15 02:10:12.953998] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:25.030 passed 00:09:25.030 Test: blob_create_snapshot_power_failure ...[2024-05-15 02:10:12.984945] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:25.030 [2024-05-15 02:10:12.995123] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:25.030 [2024-05-15 02:10:13.014914] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:25.030 [2024-05-15 02:10:13.024640] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:25.289 passed 00:09:25.289 Test: blob_io_unit ...passed 00:09:25.289 Test: blob_io_unit_compatibility ...passed 00:09:25.289 Test: blob_ext_md_pages ...passed 00:09:25.289 Test: 
blob_esnap_io_4096_4096 ...passed 00:09:25.289 Test: blob_esnap_io_512_512 ...passed 00:09:25.289 Test: blob_esnap_io_4096_512 ...passed 00:09:25.289 Test: blob_esnap_io_512_4096 ...passed 00:09:25.289 Test: blob_esnap_clone_resize ...passed 00:09:25.289 Suite: blob_bs_nocopy_extent 00:09:25.289 Test: blob_open ...passed 00:09:25.289 Test: blob_create ...[2024-05-15 02:10:13.230810] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:25.289 passed 00:09:25.289 Test: blob_create_loop ...passed 00:09:25.576 Test: blob_create_fail ...[2024-05-15 02:10:13.301763] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:25.576 passed 00:09:25.576 Test: blob_create_internal ...passed 00:09:25.576 Test: blob_create_zero_extent ...passed 00:09:25.576 Test: blob_snapshot ...passed 00:09:25.576 Test: blob_clone ...passed 00:09:25.576 Test: blob_inflate ...[2024-05-15 02:10:13.449827] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:25.576 passed 00:09:25.576 Test: blob_delete ...passed 00:09:25.576 Test: blob_resize_test ...[2024-05-15 02:10:13.506340] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:25.576 passed 00:09:25.576 Test: blob_resize_thin_test ...passed 00:09:25.848 Test: channel_ops ...passed 00:09:25.848 Test: blob_super ...passed 00:09:25.848 Test: blob_rw_verify_iov ...passed 00:09:25.848 Test: blob_unmap ...passed 00:09:25.848 Test: blob_iter ...passed 00:09:25.848 Test: blob_parse_md ...passed 00:09:25.848 Test: bs_load_pending_removal ...passed 00:09:25.848 Test: bs_unload ...[2024-05-15 02:10:13.765305] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:25.848 passed 00:09:25.848 Test: bs_usable_clusters ...passed 00:09:25.848 Test: blob_crc ...[2024-05-15 02:10:13.822869] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:25.848 [2024-05-15 02:10:13.822937] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:25.848 passed 00:09:26.107 Test: blob_flags ...passed 00:09:26.107 Test: bs_version ...passed 00:09:26.107 Test: blob_set_xattrs_test ...[2024-05-15 02:10:13.908965] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:26.107 [2024-05-15 02:10:13.909020] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:26.107 passed 00:09:26.107 Test: blob_thin_prov_alloc ...passed 00:09:26.107 Test: blob_insert_cluster_msg_test ...passed 00:09:26.107 Test: blob_thin_prov_rw ...passed 00:09:26.107 Test: blob_thin_prov_rle ...passed 00:09:26.107 Test: blob_thin_prov_rw_iov ...passed 00:09:26.107 Test: blob_snapshot_rw ...passed 00:09:26.366 Test: blob_snapshot_rw_iov ...passed 00:09:26.366 Test: blob_inflate_rw ...passed 00:09:26.366 Test: blob_snapshot_freeze_io ...passed 00:09:26.366 Test: blob_operation_split_rw 
...passed 00:09:26.366 Test: blob_operation_split_rw_iov ...passed 00:09:26.366 Test: blob_simultaneous_operations ...[2024-05-15 02:10:14.343072] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:26.366 [2024-05-15 02:10:14.343142] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:26.366 [2024-05-15 02:10:14.343411] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:26.366 [2024-05-15 02:10:14.343426] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:26.366 [2024-05-15 02:10:14.346537] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:26.366 [2024-05-15 02:10:14.346570] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:26.366 [2024-05-15 02:10:14.346588] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:26.366 [2024-05-15 02:10:14.346604] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:26.366 passed 00:09:26.624 Test: blob_persist_test ...passed 00:09:26.624 Test: blob_decouple_snapshot ...passed 00:09:26.624 Test: blob_seek_io_unit ...passed 00:09:26.624 Test: blob_nested_freezes ...passed 00:09:26.624 Test: blob_clone_resize ...passed 00:09:26.624 Test: blob_shallow_copy ...[2024-05-15 02:10:14.535622] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:26.624 [2024-05-15 02:10:14.535687] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:26.624 [2024-05-15 02:10:14.535697] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:26.624 passed 00:09:26.624 Suite: blob_blob_nocopy_extent 00:09:26.624 Test: blob_write ...passed 00:09:26.624 Test: blob_read ...passed 00:09:26.882 Test: blob_rw_verify ...passed 00:09:26.882 Test: blob_rw_verify_iov_nomem ...passed 00:09:26.882 Test: blob_rw_iov_read_only ...passed 00:09:26.882 Test: blob_xattr ...passed 00:09:26.882 Test: blob_dirty_shutdown ...passed 00:09:26.882 Test: blob_is_degraded ...passed 00:09:26.882 Suite: blob_esnap_bs_nocopy_extent 00:09:26.882 Test: blob_esnap_create ...passed 00:09:26.882 Test: blob_esnap_thread_add_remove ...passed 00:09:26.882 Test: blob_esnap_clone_snapshot ...passed 00:09:27.141 Test: blob_esnap_clone_inflate ...passed 00:09:27.141 Test: blob_esnap_clone_decouple ...passed 00:09:27.141 Test: blob_esnap_clone_reload ...passed 00:09:27.141 Test: blob_esnap_hotplug ...passed 00:09:27.141 Suite: blob_copy_noextent 00:09:27.141 Test: blob_init ...[2024-05-15 02:10:14.978594] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:27.141 passed 00:09:27.141 Test: blob_thin_provision ...passed 00:09:27.141 Test: blob_read_only ...passed 00:09:27.141 
Test: bs_load ...[2024-05-15 02:10:15.016571] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:27.141 passed 00:09:27.141 Test: bs_load_custom_cluster_size ...passed 00:09:27.141 Test: bs_load_after_failed_grow ...passed 00:09:27.141 Test: bs_cluster_sz ...[2024-05-15 02:10:15.035478] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:27.141 [2024-05-15 02:10:15.035526] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:27.141 [2024-05-15 02:10:15.035537] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:27.141 passed 00:09:27.141 Test: bs_resize_md ...passed 00:09:27.141 Test: bs_destroy ...passed 00:09:27.141 Test: bs_type ...passed 00:09:27.141 Test: bs_super_block ...passed 00:09:27.141 Test: bs_test_recover_cluster_count ...passed 00:09:27.141 Test: bs_grow_live ...passed 00:09:27.141 Test: bs_grow_live_no_space ...passed 00:09:27.141 Test: bs_test_grow ...passed 00:09:27.141 Test: blob_serialize_test ...passed 00:09:27.141 Test: super_block_crc ...passed 00:09:27.141 Test: blob_thin_prov_write_count_io ...passed 00:09:27.400 Test: blob_thin_prov_unmap_cluster ...passed 00:09:27.400 Test: bs_load_iter_test ...passed 00:09:27.400 Test: blob_relations ...[2024-05-15 02:10:15.173768] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:27.400 [2024-05-15 02:10:15.173829] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.173895] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:27.401 [2024-05-15 02:10:15.173904] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 passed 00:09:27.401 Test: blob_relations2 ...[2024-05-15 02:10:15.183906] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:27.401 [2024-05-15 02:10:15.183931] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.183938] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:27.401 [2024-05-15 02:10:15.183945] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.184036] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:27.401 [2024-05-15 02:10:15.184044] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.184073] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:27.401 [2024-05-15 02:10:15.184080] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 passed 00:09:27.401 Test: blob_relations3 ...passed 00:09:27.401 Test: blobstore_clean_power_failure ...passed 00:09:27.401 Test: blob_delete_snapshot_power_failure ...[2024-05-15 02:10:15.316337] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:27.401 [2024-05-15 02:10:15.325886] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:27.401 [2024-05-15 02:10:15.325935] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:27.401 [2024-05-15 02:10:15.325944] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.335365] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:27.401 [2024-05-15 02:10:15.335396] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:27.401 [2024-05-15 02:10:15.335403] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:27.401 [2024-05-15 02:10:15.335411] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.344873] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:27.401 [2024-05-15 02:10:15.344926] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.354420] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:27.401 [2024-05-15 02:10:15.354453] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 [2024-05-15 02:10:15.363989] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:27.401 [2024-05-15 02:10:15.364030] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:27.401 passed 00:09:27.401 Test: blob_create_snapshot_power_failure ...[2024-05-15 02:10:15.392500] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:27.659 [2024-05-15 02:10:15.411434] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:27.659 [2024-05-15 02:10:15.420934] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:27.659 passed 00:09:27.659 Test: blob_io_unit ...passed 00:09:27.659 Test: blob_io_unit_compatibility ...passed 00:09:27.659 Test: blob_ext_md_pages ...passed 00:09:27.659 Test: blob_esnap_io_4096_4096 ...passed 00:09:27.659 Test: blob_esnap_io_512_512 ...passed 00:09:27.659 Test: blob_esnap_io_4096_512 ...passed 00:09:27.659 Test: blob_esnap_io_512_4096 
...passed 00:09:27.659 Test: blob_esnap_clone_resize ...passed 00:09:27.659 Suite: blob_bs_copy_noextent 00:09:27.659 Test: blob_open ...passed 00:09:27.659 Test: blob_create ...[2024-05-15 02:10:15.622472] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:27.659 passed 00:09:27.917 Test: blob_create_loop ...passed 00:09:27.917 Test: blob_create_fail ...[2024-05-15 02:10:15.690834] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:27.917 passed 00:09:27.917 Test: blob_create_internal ...passed 00:09:27.917 Test: blob_create_zero_extent ...passed 00:09:27.917 Test: blob_snapshot ...passed 00:09:27.917 Test: blob_clone ...passed 00:09:27.917 Test: blob_inflate ...[2024-05-15 02:10:15.835098] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:27.917 passed 00:09:27.917 Test: blob_delete ...passed 00:09:27.917 Test: blob_resize_test ...[2024-05-15 02:10:15.889577] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:27.917 passed 00:09:28.174 Test: blob_resize_thin_test ...passed 00:09:28.174 Test: channel_ops ...passed 00:09:28.174 Test: blob_super ...passed 00:09:28.174 Test: blob_rw_verify_iov ...passed 00:09:28.174 Test: blob_unmap ...passed 00:09:28.174 Test: blob_iter ...passed 00:09:28.174 Test: blob_parse_md ...passed 00:09:28.174 Test: bs_load_pending_removal ...passed 00:09:28.174 Test: bs_unload ...[2024-05-15 02:10:16.141643] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:28.174 passed 00:09:28.433 Test: bs_usable_clusters ...passed 00:09:28.433 Test: blob_crc ...[2024-05-15 02:10:16.197531] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:28.433 [2024-05-15 02:10:16.197592] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:28.433 passed 00:09:28.433 Test: blob_flags ...passed 00:09:28.433 Test: bs_version ...passed 00:09:28.433 Test: blob_set_xattrs_test ...[2024-05-15 02:10:16.281789] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:28.433 [2024-05-15 02:10:16.281854] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:28.433 passed 00:09:28.433 Test: blob_thin_prov_alloc ...passed 00:09:28.433 Test: blob_insert_cluster_msg_test ...passed 00:09:28.433 Test: blob_thin_prov_rw ...passed 00:09:28.433 Test: blob_thin_prov_rle ...passed 00:09:28.691 Test: blob_thin_prov_rw_iov ...passed 00:09:28.691 Test: blob_snapshot_rw ...passed 00:09:28.691 Test: blob_snapshot_rw_iov ...passed 00:09:28.691 Test: blob_inflate_rw ...passed 00:09:28.691 Test: blob_snapshot_freeze_io ...passed 00:09:28.949 Test: blob_operation_split_rw ...passed 00:09:28.949 Test: blob_operation_split_rw_iov ...passed 00:09:28.949 Test: blob_simultaneous_operations ...[2024-05-15 02:10:16.814560] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:28.949 [2024-05-15 02:10:16.814644] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.949 [2024-05-15 02:10:16.814921] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:28.949 [2024-05-15 02:10:16.814936] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.949 [2024-05-15 02:10:16.817198] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:28.949 [2024-05-15 02:10:16.817240] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.949 [2024-05-15 02:10:16.817273] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:28.949 [2024-05-15 02:10:16.817291] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:28.949 passed 00:09:28.949 Test: blob_persist_test ...passed 00:09:28.949 Test: blob_decouple_snapshot ...passed 00:09:28.949 Test: blob_seek_io_unit ...passed 00:09:29.212 Test: blob_nested_freezes ...passed 00:09:29.212 Test: blob_clone_resize ...passed 00:09:29.212 Test: blob_shallow_copy ...[2024-05-15 02:10:17.016747] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:29.212 [2024-05-15 02:10:17.016808] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:29.212 [2024-05-15 02:10:17.016817] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:29.212 passed 00:09:29.212 Suite: blob_blob_copy_noextent 00:09:29.212 Test: blob_write ...passed 00:09:29.212 Test: blob_read ...passed 00:09:29.212 Test: blob_rw_verify ...passed 00:09:29.212 Test: blob_rw_verify_iov_nomem ...passed 00:09:29.212 Test: blob_rw_iov_read_only ...passed 00:09:29.212 Test: blob_xattr ...passed 00:09:29.471 Test: blob_dirty_shutdown ...passed 00:09:29.471 Test: blob_is_degraded ...passed 00:09:29.471 Suite: blob_esnap_bs_copy_noextent 00:09:29.471 Test: blob_esnap_create ...passed 00:09:29.471 Test: blob_esnap_thread_add_remove ...passed 00:09:29.471 Test: blob_esnap_clone_snapshot ...passed 00:09:29.471 Test: blob_esnap_clone_inflate ...passed 00:09:29.471 Test: blob_esnap_clone_decouple ...passed 00:09:29.471 Test: blob_esnap_clone_reload ...passed 00:09:29.471 Test: blob_esnap_hotplug ...passed 00:09:29.471 Suite: blob_copy_extent 00:09:29.471 Test: blob_init ...[2024-05-15 02:10:17.454716] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:29.471 passed 00:09:29.471 Test: blob_thin_provision ...passed 00:09:29.731 Test: blob_read_only ...passed 00:09:29.731 Test: bs_load ...[2024-05-15 02:10:17.492298] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't 
match what's in metadata (0x100000000) 00:09:29.731 passed 00:09:29.731 Test: bs_load_custom_cluster_size ...passed 00:09:29.731 Test: bs_load_after_failed_grow ...passed 00:09:29.731 Test: bs_cluster_sz ...[2024-05-15 02:10:17.511185] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:29.731 [2024-05-15 02:10:17.511253] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:29.731 [2024-05-15 02:10:17.511263] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:29.731 passed 00:09:29.731 Test: bs_resize_md ...passed 00:09:29.731 Test: bs_destroy ...passed 00:09:29.731 Test: bs_type ...passed 00:09:29.731 Test: bs_super_block ...passed 00:09:29.731 Test: bs_test_recover_cluster_count ...passed 00:09:29.731 Test: bs_grow_live ...passed 00:09:29.731 Test: bs_grow_live_no_space ...passed 00:09:29.731 Test: bs_test_grow ...passed 00:09:29.731 Test: blob_serialize_test ...passed 00:09:29.731 Test: super_block_crc ...passed 00:09:29.731 Test: blob_thin_prov_write_count_io ...passed 00:09:29.731 Test: blob_thin_prov_unmap_cluster ...passed 00:09:29.731 Test: bs_load_iter_test ...passed 00:09:29.731 Test: blob_relations ...[2024-05-15 02:10:17.709069] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:29.731 [2024-05-15 02:10:17.709126] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.731 [2024-05-15 02:10:17.709222] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:29.731 [2024-05-15 02:10:17.709231] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.731 passed 00:09:29.731 Test: blob_relations2 ...[2024-05-15 02:10:17.719381] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:29.731 [2024-05-15 02:10:17.719403] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.731 [2024-05-15 02:10:17.719411] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:29.731 [2024-05-15 02:10:17.719417] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.731 [2024-05-15 02:10:17.719516] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:29.731 [2024-05-15 02:10:17.719524] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.731 [2024-05-15 02:10:17.719558] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:29.731 [2024-05-15 02:10:17.719565] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.731 passed 00:09:29.731 Test: 
blob_relations3 ...passed 00:09:29.990 Test: blobstore_clean_power_failure ...passed 00:09:29.990 Test: blob_delete_snapshot_power_failure ...[2024-05-15 02:10:17.852084] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:29.990 [2024-05-15 02:10:17.861829] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:29.990 [2024-05-15 02:10:17.871410] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:29.990 [2024-05-15 02:10:17.871454] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:29.990 [2024-05-15 02:10:17.871461] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.990 [2024-05-15 02:10:17.881089] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:29.990 [2024-05-15 02:10:17.881114] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:29.990 [2024-05-15 02:10:17.881138] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:29.990 [2024-05-15 02:10:17.881145] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.990 [2024-05-15 02:10:17.890809] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:29.990 [2024-05-15 02:10:17.890829] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:29.990 [2024-05-15 02:10:17.890836] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:29.990 [2024-05-15 02:10:17.890845] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.990 [2024-05-15 02:10:17.900324] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:29.990 [2024-05-15 02:10:17.900362] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.990 [2024-05-15 02:10:17.909907] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:29.990 [2024-05-15 02:10:17.909932] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.990 [2024-05-15 02:10:17.920285] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:29.990 [2024-05-15 02:10:17.920310] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:29.990 passed 00:09:29.990 Test: blob_create_snapshot_power_failure ...[2024-05-15 02:10:17.952221] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:29.990 [2024-05-15 02:10:17.971279] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:30.249 [2024-05-15 02:10:18.012837] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:30.249 [2024-05-15 02:10:18.034308] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:30.249 passed 00:09:30.249 Test: blob_io_unit ...passed 00:09:30.249 Test: blob_io_unit_compatibility ...passed 00:09:30.249 Test: blob_ext_md_pages ...passed 00:09:30.249 Test: blob_esnap_io_4096_4096 ...passed 00:09:30.249 Test: blob_esnap_io_512_512 ...passed 00:09:30.508 Test: blob_esnap_io_4096_512 ...passed 00:09:30.508 Test: blob_esnap_io_512_4096 ...passed 00:09:30.508 Test: blob_esnap_clone_resize ...passed 00:09:30.508 Suite: blob_bs_copy_extent 00:09:30.508 Test: blob_open ...passed 00:09:30.508 Test: blob_create ...[2024-05-15 02:10:18.430338] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:30.508 passed 00:09:30.766 Test: blob_create_loop ...passed 00:09:30.766 Test: blob_create_fail ...[2024-05-15 02:10:18.553610] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:30.766 passed 00:09:30.766 Test: blob_create_internal ...passed 00:09:30.766 Test: blob_create_zero_extent ...passed 00:09:30.766 Test: blob_snapshot ...passed 00:09:31.023 Test: blob_clone ...passed 00:09:31.023 Test: blob_inflate ...[2024-05-15 02:10:18.839085] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:09:31.023 passed 00:09:31.023 Test: blob_delete ...passed 00:09:31.023 Test: blob_resize_test ...[2024-05-15 02:10:18.949365] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:31.023 passed 00:09:31.282 Test: blob_resize_thin_test ...passed 00:09:31.282 Test: channel_ops ...passed 00:09:31.282 Test: blob_super ...passed 00:09:31.282 Test: blob_rw_verify_iov ...passed 00:09:31.282 Test: blob_unmap ...passed 00:09:31.540 Test: blob_iter ...passed 00:09:31.540 Test: blob_parse_md ...passed 00:09:31.540 Test: bs_load_pending_removal ...passed 00:09:31.540 Test: bs_unload ...[2024-05-15 02:10:19.449280] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:31.540 passed 00:09:31.540 Test: bs_usable_clusters ...passed 00:09:31.797 Test: blob_crc ...[2024-05-15 02:10:19.562743] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:31.797 [2024-05-15 02:10:19.562856] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:31.797 passed 00:09:31.797 Test: blob_flags ...passed 00:09:31.797 Test: bs_version ...passed 00:09:31.797 Test: blob_set_xattrs_test ...[2024-05-15 02:10:19.734628] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:31.797 [2024-05-15 02:10:19.734701] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:31.797 passed 00:09:32.056 Test: blob_thin_prov_alloc ...passed 00:09:32.057 Test: blob_insert_cluster_msg_test ...passed 00:09:32.057 Test: blob_thin_prov_rw ...passed 00:09:32.057 Test: blob_thin_prov_rle ...passed 00:09:32.057 Test: blob_thin_prov_rw_iov ...passed 00:09:32.316 Test: blob_snapshot_rw ...passed 00:09:32.316 Test: blob_snapshot_rw_iov ...passed 00:09:32.316 Test: blob_inflate_rw ...passed 00:09:32.575 Test: blob_snapshot_freeze_io ...passed 00:09:32.575 Test: blob_operation_split_rw ...passed 00:09:32.575 Test: blob_operation_split_rw_iov ...passed 00:09:32.575 Test: blob_simultaneous_operations ...[2024-05-15 02:10:20.538789] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:32.575 [2024-05-15 02:10:20.538858] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:32.575 [2024-05-15 02:10:20.539247] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:32.575 [2024-05-15 02:10:20.539260] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:32.575 [2024-05-15 02:10:20.542569] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:32.575 [2024-05-15 02:10:20.542612] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:32.575 [2024-05-15 02:10:20.542633] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is 
open 00:09:32.575 [2024-05-15 02:10:20.542641] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:32.575 passed 00:09:32.833 Test: blob_persist_test ...passed 00:09:32.833 Test: blob_decouple_snapshot ...passed 00:09:32.833 Test: blob_seek_io_unit ...passed 00:09:32.833 Test: blob_nested_freezes ...passed 00:09:33.092 Test: blob_clone_resize ...passed 00:09:33.092 Test: blob_shallow_copy ...[2024-05-15 02:10:20.907227] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:33.092 [2024-05-15 02:10:20.907296] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:33.092 [2024-05-15 02:10:20.907307] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:33.092 passed 00:09:33.092 Suite: blob_blob_copy_extent 00:09:33.092 Test: blob_write ...passed 00:09:33.092 Test: blob_read ...passed 00:09:33.350 Test: blob_rw_verify ...passed 00:09:33.350 Test: blob_rw_verify_iov_nomem ...passed 00:09:33.350 Test: blob_rw_iov_read_only ...passed 00:09:33.350 Test: blob_xattr ...passed 00:09:33.350 Test: blob_dirty_shutdown ...passed 00:09:33.350 Test: blob_is_degraded ...passed 00:09:33.350 Suite: blob_esnap_bs_copy_extent 00:09:33.608 Test: blob_esnap_create ...passed 00:09:33.608 Test: blob_esnap_thread_add_remove ...passed 00:09:33.608 Test: blob_esnap_clone_snapshot ...passed 00:09:33.608 Test: blob_esnap_clone_inflate ...passed 00:09:33.868 Test: blob_esnap_clone_decouple ...passed 00:09:33.868 Test: blob_esnap_clone_reload ...passed 00:09:33.868 Test: blob_esnap_hotplug ...passed 00:09:33.868 00:09:33.868 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.868 suites 16 16 n/a 0 0 00:09:33.868 tests 368 368 368 0 0 00:09:33.868 asserts 142985 142985 142985 0 n/a 00:09:33.868 00:09:33.868 Elapsed time = 11.609 seconds 00:09:33.868 02:10:21 unittest.unittest_blob_blobfs -- unit/unittest.sh@41 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:09:33.868 00:09:33.868 00:09:33.868 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.868 http://cunit.sourceforge.net/ 00:09:33.868 00:09:33.868 00:09:33.868 Suite: blob_bdev 00:09:33.868 Test: create_bs_dev ...passed 00:09:33.868 Test: create_bs_dev_ro ...passed 00:09:33.868 Test: create_bs_dev_rw ...passed 00:09:33.868 Test: claim_bs_dev ...passed 00:09:33.868 Test: claim_bs_dev_ro ...passed 00:09:33.868 Test: deferred_destroy_refs ...passed 00:09:33.868 Test: deferred_destroy_channels ...passed 00:09:33.868 Test: deferred_destroy_threads ...passed 00:09:33.868 00:09:33.868 [2024-05-15 02:10:21.711549] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:09:33.868 [2024-05-15 02:10:21.711782] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:09:33.868 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.868 suites 1 1 n/a 0 0 00:09:33.868 tests 8 8 8 0 0 00:09:33.868 asserts 119 119 119 0 n/a 00:09:33.868 00:09:33.868 Elapsed time = 0.000 seconds 00:09:33.868 02:10:21 unittest.unittest_blob_blobfs 
-- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:09:33.868 00:09:33.868 00:09:33.868 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.868 http://cunit.sourceforge.net/ 00:09:33.868 00:09:33.868 00:09:33.868 Suite: tree 00:09:33.868 Test: blobfs_tree_op_test ...passed 00:09:33.868 00:09:33.868 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.868 suites 1 1 n/a 0 0 00:09:33.868 tests 1 1 1 0 0 00:09:33.868 asserts 27 27 27 0 n/a 00:09:33.868 00:09:33.868 Elapsed time = 0.000 seconds 00:09:33.868 02:10:21 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:09:33.868 00:09:33.868 00:09:33.868 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.868 http://cunit.sourceforge.net/ 00:09:33.868 00:09:33.868 00:09:33.868 Suite: blobfs_async_ut 00:09:33.868 Test: fs_init ...passed 00:09:33.868 Test: fs_open ...passed 00:09:33.868 Test: fs_create ...passed 00:09:33.868 Test: fs_truncate ...passed 00:09:33.868 Test: fs_rename ...[2024-05-15 02:10:21.807992] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:09:33.868 passed 00:09:33.868 Test: fs_rw_async ...passed 00:09:33.868 Test: fs_writev_readv_async ...passed 00:09:33.868 Test: tree_find_buffer_ut ...passed 00:09:33.868 Test: channel_ops ...passed 00:09:33.868 Test: channel_ops_sync ...passed 00:09:33.868 00:09:33.868 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.868 suites 1 1 n/a 0 0 00:09:33.868 tests 10 10 10 0 0 00:09:33.868 asserts 292 292 292 0 n/a 00:09:33.868 00:09:33.868 Elapsed time = 0.125 seconds 00:09:33.868 02:10:21 unittest.unittest_blob_blobfs -- unit/unittest.sh@45 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:09:33.868 00:09:33.868 00:09:33.868 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.868 http://cunit.sourceforge.net/ 00:09:33.868 00:09:33.868 00:09:33.868 Suite: blobfs_sync_ut 00:09:34.128 Test: cache_read_after_write ...[2024-05-15 02:10:21.911478] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:09:34.128 passed 00:09:34.128 Test: file_length ...passed 00:09:34.128 Test: append_write_to_extend_blob ...passed 00:09:34.128 Test: partial_buffer ...passed 00:09:34.128 Test: cache_write_null_buffer ...passed 00:09:34.128 Test: fs_create_sync ...passed 00:09:34.128 Test: fs_rename_sync ...passed 00:09:34.128 Test: cache_append_no_cache ...passed 00:09:34.128 Test: fs_delete_file_without_close ...passed 00:09:34.128 00:09:34.128 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.128 suites 1 1 n/a 0 0 00:09:34.128 tests 9 9 9 0 0 00:09:34.128 asserts 345 345 345 0 n/a 00:09:34.128 00:09:34.128 Elapsed time = 0.266 seconds 00:09:34.128 02:10:22 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:09:34.128 00:09:34.128 00:09:34.128 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.128 http://cunit.sourceforge.net/ 00:09:34.128 00:09:34.128 00:09:34.128 Suite: blobfs_bdev_ut 00:09:34.128 Test: spdk_blobfs_bdev_detect_test ...[2024-05-15 02:10:22.008121] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev 
ut_bdev: errno -1 00:09:34.128 passed 00:09:34.128 Test: spdk_blobfs_bdev_create_test ...passed 00:09:34.128 Test: spdk_blobfs_bdev_mount_test ...passed 00:09:34.128 00:09:34.128 [2024-05-15 02:10:22.008421] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:34.128 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.128 suites 1 1 n/a 0 0 00:09:34.128 tests 3 3 3 0 0 00:09:34.128 asserts 9 9 9 0 n/a 00:09:34.128 00:09:34.128 Elapsed time = 0.000 seconds 00:09:34.128 00:09:34.128 real 0m11.940s 00:09:34.128 user 0m11.895s 00:09:34.128 sys 0m0.179s 00:09:34.128 02:10:22 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.128 02:10:22 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 ************************************ 00:09:34.128 END TEST unittest_blob_blobfs 00:09:34.128 ************************************ 00:09:34.128 02:10:22 unittest -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:09:34.128 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.128 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.128 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 ************************************ 00:09:34.128 START TEST unittest_event 00:09:34.128 ************************************ 00:09:34.128 02:10:22 unittest.unittest_event -- common/autotest_common.sh@1121 -- # unittest_event 00:09:34.128 02:10:22 unittest.unittest_event -- unit/unittest.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:09:34.128 00:09:34.128 00:09:34.128 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.128 http://cunit.sourceforge.net/ 00:09:34.128 00:09:34.128 00:09:34.128 Suite: app_suite 00:09:34.128 Test: test_spdk_app_parse_args ...app_ut: invalid option -- z 00:09:34.128 app_ut [options] 00:09:34.128 00:09:34.128 CPU options: 00:09:34.128 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:34.128 (like [0,1,10]) 00:09:34.128 --lcores lcore to CPU mapping list. The list is in the format: 00:09:34.128 [<,lcores[@CPUs]>...] 00:09:34.128 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:34.128 Within the group, '-' is used for range separator, 00:09:34.128 ',' is used for single number separator. 00:09:34.128 '( )' can be omitted for single element group, 00:09:34.128 '@' can be omitted if cpus and lcores have the same value 00:09:34.128 --disable-cpumask-locks Disable CPU core lock files. 00:09:34.128 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:34.128 pollers in the app support interrupt mode) 00:09:34.128 -p, --main-core main (primary) core for DPDK 00:09:34.128 00:09:34.128 Configuration options: 00:09:34.128 -c, --config, --json JSON config file 00:09:34.128 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:34.128 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:34.128 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:34.128 --rpcs-allowed comma-separated list of permitted RPCS 00:09:34.128 --json-ignore-init-errors don't exit on invalid config entry 00:09:34.128 00:09:34.128 Memory options: 00:09:34.128 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:34.128 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:34.128 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:34.128 -R, --huge-unlink unlink huge files after initialization 00:09:34.128 -n, --mem-channels number of memory channels used for DPDK 00:09:34.128 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:09:34.128 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:34.128 --no-huge run without using hugepages 00:09:34.128 -i, --shm-id shared memory ID (optional) 00:09:34.128 -g, --single-file-segments force creating just one hugetlbfs file 00:09:34.128 00:09:34.128 PCI options: 00:09:34.129 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:34.129 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:34.129 -u, --no-pci disable PCI access 00:09:34.129 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:34.129 00:09:34.129 Log options: 00:09:34.129 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:34.129 --silence-noticelog disable notice level logging to stderr 00:09:34.129 00:09:34.129 Trace options: 00:09:34.129 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:34.129 setting 0 to disable trace (default 32768) 00:09:34.129 Tracepoints vary in size and can use more than one trace entry. 00:09:34.129 -e, --tpoint-group [:] 00:09:34.129 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:34.129 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:34.129 a tracepoint group. First tpoint inside a group can be enabled by 00:09:34.129 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:34.129 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:34.129 in /include/spdk_internal/trace_defs.h 00:09:34.129 00:09:34.129 Other options: 00:09:34.129 -h, --help show this usage 00:09:34.129 -v, --version print SPDK version 00:09:34.129 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:34.129 --env-context Opaque context for use of the env implementation 00:09:34.129 app_ut [options] 00:09:34.129 00:09:34.129 CPU options: 00:09:34.129 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:34.129 (like [0,1,10]) 00:09:34.129 --lcores lcore to CPU mapping list. The list is in the format: 00:09:34.129 [<,lcores[@CPUs]>...] 00:09:34.129 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:34.129 Within the group, '-' is used for range separator, 00:09:34.129 ',' is used for single number separator. 00:09:34.129 '( )' can be omitted for single element group, 00:09:34.129 '@' can be omitted if cpus and lcores have the same value 00:09:34.129 --disable-cpumask-locks Disable CPU core lock files. 
00:09:34.129 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:34.129 pollers in the app support interrupt mode) 00:09:34.129 -p, --main-core main (primary) core for DPDK 00:09:34.129 00:09:34.129 Configuration options: 00:09:34.129 -c, --config, --json JSON config file 00:09:34.129 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:34.129 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:34.129 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:34.129 --rpcs-allowed comma-separated list of permitted RPCS 00:09:34.129 --json-ignore-init-errors don't exit on invalid config entry 00:09:34.129 00:09:34.129 Memory options: 00:09:34.129 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:34.129 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:34.129 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:34.129 -R, --huge-unlink unlink huge files after initialization 00:09:34.129 -n, --mem-channels number of memory channels used for DPDK 00:09:34.129 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:09:34.129 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:34.129 --no-huge run without using hugepages 00:09:34.129 -i, --shm-id shared memory ID (optional) 00:09:34.129 -g, --single-file-segments force creating just one hugetlbfs file 00:09:34.129 00:09:34.129 PCI options: 00:09:34.129 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:34.129 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:34.129 -u, --no-pci disable PCI access 00:09:34.129 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:34.129 00:09:34.129 Log options: 00:09:34.129 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:34.129 --silence-noticelog disable notice level logging to stderr 00:09:34.129 00:09:34.129 Trace options: 00:09:34.129 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:34.129 setting 0 to disable trace (default 32768) 00:09:34.129 Tracepoints vary in size and can use more than one trace entry. 00:09:34.129 -e, --tpoint-group [:] 00:09:34.129 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:34.129 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:34.129 a tracepoint group. First tpoint inside a group can be enabled by 00:09:34.129 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:34.129 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:34.129 in /include/spdk_internal/trace_defs.h 00:09:34.129 00:09:34.129 Other options: 00:09:34.129 -h, --help show this usage 00:09:34.129 -v, --version print SPDK version 00:09:34.129 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:34.129 --env-context Opaque context for use of the env implementation 00:09:34.129 app_ut [options] 00:09:34.129 00:09:34.129 CPU options: 00:09:34.129 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:34.129 (like [0,1,10]) 00:09:34.129 --lcores lcore to CPU mapping list. The list is in the format: 00:09:34.129 [<,lcores[@CPUs]>...] 
00:09:34.129 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:34.129 Within the group, '-' is used for range separator, 00:09:34.129 ',' is used for single number separator. 00:09:34.129 '( )' can be omitted for single element group, 00:09:34.129 '@' can be omitted if cpus and lcores have the same value 00:09:34.129 --disable-cpumask-locks Disable CPU core lock files. 00:09:34.129 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:34.129 pollers in the app support interrupt mode) 00:09:34.129 -p, --main-core main (primary) core for DPDK 00:09:34.129 00:09:34.129 Configuration options: 00:09:34.129 -c, --config, --json JSON config file 00:09:34.129 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:34.129 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:34.129 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:34.129 --rpcs-allowed comma-separated list of permitted RPCS 00:09:34.129 --json-ignore-init-errors don't exit on invalid config entry 00:09:34.129 00:09:34.129 Memory options: 00:09:34.129 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:34.129 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:34.129 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:34.129 -R, --huge-unlink unlink huge files after initialization 00:09:34.129 -n, --mem-channels number of memory channels used for DPDK 00:09:34.129 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:09:34.129 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:34.129 --no-huge run without using hugepages 00:09:34.129 -i, --shm-id shared memory ID (optional) 00:09:34.129 -g, --single-file-segments force creating just one hugetlbfs file 00:09:34.129 00:09:34.129 PCI options: 00:09:34.129 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:34.129 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:34.129 -u, --no-pci disable PCI access 00:09:34.129 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:34.129 00:09:34.129 Log options: 00:09:34.129 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:34.129 --silence-noticelog disable notice level logging to stderr 00:09:34.129 00:09:34.129 Trace options: 00:09:34.129 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:34.129 setting 0 to disable trace (default 32768) 00:09:34.129 Tracepoints vary in size and can use more than one trace entry. 00:09:34.129 -e, --tpoint-group [:] 00:09:34.129 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:34.129 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:34.129 a tracepoint group. First tpoint inside a group can be enabled by 00:09:34.129 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:34.129 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:34.129 in /include/spdk_internal/trace_defs.h 00:09:34.129 00:09:34.129 Other options: 00:09:34.129 -h, --help show this usage 00:09:34.129 -v, --version print SPDK version 00:09:34.129 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:34.129 --env-context Opaque context for use of the env implementation 00:09:34.129 passed 00:09:34.129 00:09:34.129 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.129 suites 1 1 n/a 0 0 00:09:34.129 tests 1 1 1 0 0 00:09:34.129 asserts 8 8 8 0 n/a 00:09:34.129 00:09:34.129 Elapsed time = 0.000 seconds 00:09:34.129 app_ut: unrecognized option `--test-long-opt' 00:09:34.129 [2024-05-15 02:10:22.055958] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:09:34.129 [2024-05-15 02:10:22.056167] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:09:34.130 [2024-05-15 02:10:22.056256] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:09:34.130 02:10:22 unittest.unittest_event -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:09:34.130 00:09:34.130 00:09:34.130 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.130 http://cunit.sourceforge.net/ 00:09:34.130 00:09:34.130 00:09:34.130 Suite: app_suite 00:09:34.130 Test: test_create_reactor ...passed 00:09:34.130 Test: test_init_reactors ...passed 00:09:34.130 Test: test_event_call ...passed 00:09:34.130 Test: test_schedule_thread ...passed 00:09:34.130 Test: test_reschedule_thread ...passed 00:09:34.130 Test: test_bind_thread ...passed 00:09:34.130 Test: test_for_each_reactor ...passed 00:09:34.130 Test: test_reactor_stats ...passed 00:09:34.130 Test: test_scheduler ...passed 00:09:34.130 Test: test_governor ...passed 00:09:34.130 00:09:34.130 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.130 suites 1 1 n/a 0 0 00:09:34.130 tests 10 10 10 0 0 00:09:34.130 asserts 336 336 336 0 n/a 00:09:34.130 00:09:34.130 Elapsed time = 0.000 seconds 00:09:34.130 00:09:34.130 real 0m0.013s 00:09:34.130 user 0m0.010s 00:09:34.130 sys 0m0.003s 00:09:34.130 02:10:22 unittest.unittest_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.130 02:10:22 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 ************************************ 00:09:34.130 END TEST unittest_event 00:09:34.130 ************************************ 00:09:34.130 02:10:22 unittest -- unit/unittest.sh@233 -- # uname -s 00:09:34.130 02:10:22 unittest -- unit/unittest.sh@233 -- # '[' FreeBSD = Linux ']' 00:09:34.130 02:10:22 unittest -- unit/unittest.sh@237 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:34.130 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.130 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.130 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 ************************************ 00:09:34.130 START TEST unittest_accel 00:09:34.130 ************************************ 00:09:34.130 02:10:22 unittest.unittest_accel -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 
00:09:34.130 00:09:34.130 00:09:34.130 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.130 http://cunit.sourceforge.net/ 00:09:34.130 00:09:34.130 00:09:34.130 Suite: accel_sequence 00:09:34.130 Test: test_sequence_fill_copy ...passed 00:09:34.130 Test: test_sequence_abort ...passed 00:09:34.130 Test: test_sequence_append_error ...passed 00:09:34.130 Test: test_sequence_completion_error ...[2024-05-15 02:10:22.110488] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1902:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82d39e340 00:09:34.130 [2024-05-15 02:10:22.110943] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1902:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x82d39e340 00:09:34.130 [2024-05-15 02:10:22.111015] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1812:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82d39e340 00:09:34.130 passed 00:09:34.130 Test: test_sequence_decompress ...[2024-05-15 02:10:22.111049] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1812:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82d39e340 00:09:34.130 passed 00:09:34.130 Test: test_sequence_reverse ...passed 00:09:34.130 Test: test_sequence_copy_elision ...passed 00:09:34.130 Test: test_sequence_accel_buffers ...passed 00:09:34.130 Test: test_sequence_memory_domain ...[2024-05-15 02:10:22.113383] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1704:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:09:34.130 [2024-05-15 02:10:22.113484] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1743:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:09:34.130 passed 00:09:34.130 Test: test_sequence_module_memory_domain ...passed 00:09:34.130 Test: test_sequence_crypto ...passed 00:09:34.130 Test: test_sequence_driver ...[2024-05-15 02:10:22.114349] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1851:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82d39e400 using driver: ut 00:09:34.130 passed 00:09:34.130 Test: test_sequence_same_iovs ...[2024-05-15 02:10:22.114391] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1916:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82d39e400 through driver: ut 00:09:34.130 passed 00:09:34.130 Test: test_sequence_crc32 ...passed 00:09:34.130 Suite: accel 00:09:34.130 Test: test_spdk_accel_task_complete ...passed 00:09:34.130 Test: test_get_task ...passed 00:09:34.130 Test: test_spdk_accel_submit_copy ...passed 00:09:34.130 Test: test_spdk_accel_submit_dualcast ...[2024-05-15 02:10:22.115041] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:34.130 passed 00:09:34.130 Test: test_spdk_accel_submit_compare ...passed 00:09:34.130 Test: test_spdk_accel_submit_fill ...passed 00:09:34.130 Test: test_spdk_accel_submit_crc32c ...passed 00:09:34.130 Test: test_spdk_accel_submit_crc32cv ...passed 00:09:34.130 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:09:34.130 Test: test_spdk_accel_submit_xor ...passed[2024-05-15 02:10:22.115064] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:34.130 00:09:34.130 Test: test_spdk_accel_module_find_by_name ...passed 00:09:34.130 Test: 
test_spdk_accel_module_register ...passed 00:09:34.130 00:09:34.130 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.130 suites 2 2 n/a 0 0 00:09:34.130 tests 26 26 26 0 0 00:09:34.130 asserts 827 827 827 0 n/a 00:09:34.130 00:09:34.130 Elapsed time = 0.008 seconds 00:09:34.130 00:09:34.130 real 0m0.017s 00:09:34.130 user 0m0.001s 00:09:34.130 sys 0m0.014s 00:09:34.130 02:10:22 unittest.unittest_accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.130 02:10:22 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 ************************************ 00:09:34.130 END TEST unittest_accel 00:09:34.130 ************************************ 00:09:34.390 02:10:22 unittest -- unit/unittest.sh@238 -- # run_test unittest_ioat /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 ************************************ 00:09:34.390 START TEST unittest_ioat 00:09:34.390 ************************************ 00:09:34.390 02:10:22 unittest.unittest_ioat -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:34.390 00:09:34.390 00:09:34.390 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.390 http://cunit.sourceforge.net/ 00:09:34.390 00:09:34.390 00:09:34.390 Suite: ioat 00:09:34.390 Test: ioat_state_check ...passed 00:09:34.390 00:09:34.390 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.390 suites 1 1 n/a 0 0 00:09:34.390 tests 1 1 1 0 0 00:09:34.390 asserts 32 32 32 0 n/a 00:09:34.390 00:09:34.390 Elapsed time = 0.000 seconds 00:09:34.390 00:09:34.390 real 0m0.004s 00:09:34.390 user 0m0.000s 00:09:34.390 sys 0m0.008s 00:09:34.390 02:10:22 unittest.unittest_ioat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.390 02:10:22 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 ************************************ 00:09:34.390 END TEST unittest_ioat 00:09:34.390 ************************************ 00:09:34.390 02:10:22 unittest -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:34.390 02:10:22 unittest -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 ************************************ 00:09:34.390 START TEST unittest_idxd_user 00:09:34.390 ************************************ 00:09:34.390 02:10:22 unittest.unittest_idxd_user -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:34.390 00:09:34.390 00:09:34.390 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.390 http://cunit.sourceforge.net/ 00:09:34.390 00:09:34.390 00:09:34.390 Suite: idxd_user 00:09:34.390 Test: test_idxd_wait_cmd ...[2024-05-15 02:10:22.201391] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:34.390 [2024-05-15 
02:10:22.201723] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:09:34.390 passed 00:09:34.390 Test: test_idxd_reset_dev ...passed 00:09:34.390 Test: test_idxd_group_config ...passed 00:09:34.390 Test: test_idxd_wq_config ...passed 00:09:34.390 00:09:34.390 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.390 suites 1 1 n/a 0 0 00:09:34.390 tests 4 4 4 0 0 00:09:34.390 asserts 20 20 20 0 n/a 00:09:34.390 00:09:34.390 Elapsed time = 0.000 seconds 00:09:34.390 [2024-05-15 02:10:22.201778] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:34.390 [2024-05-15 02:10:22.201798] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:09:34.390 00:09:34.390 real 0m0.007s 00:09:34.390 user 0m0.003s 00:09:34.390 sys 0m0.006s 00:09:34.390 02:10:22 unittest.unittest_idxd_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.390 02:10:22 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 ************************************ 00:09:34.390 END TEST unittest_idxd_user 00:09:34.390 ************************************ 00:09:34.390 02:10:22 unittest -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.390 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 ************************************ 00:09:34.390 START TEST unittest_iscsi 00:09:34.390 ************************************ 00:09:34.390 02:10:22 unittest.unittest_iscsi -- common/autotest_common.sh@1121 -- # unittest_iscsi 00:09:34.390 02:10:22 unittest.unittest_iscsi -- unit/unittest.sh@66 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:09:34.390 00:09:34.390 00:09:34.390 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.390 http://cunit.sourceforge.net/ 00:09:34.390 00:09:34.390 00:09:34.390 Suite: conn_suite 00:09:34.390 Test: read_task_split_in_order_case ...passed 00:09:34.390 Test: read_task_split_reverse_order_case ...passed 00:09:34.390 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:09:34.390 Test: process_non_read_task_completion_test ...passed 00:09:34.390 Test: free_tasks_on_connection ...passed 00:09:34.390 Test: free_tasks_with_queued_datain ...passed 00:09:34.390 Test: abort_queued_datain_task_test ...passed 00:09:34.390 Test: abort_queued_datain_tasks_test ...passed 00:09:34.390 00:09:34.390 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.390 suites 1 1 n/a 0 0 00:09:34.390 tests 8 8 8 0 0 00:09:34.390 asserts 230 230 230 0 n/a 00:09:34.390 00:09:34.390 Elapsed time = 0.000 seconds 00:09:34.391 02:10:22 unittest.unittest_iscsi -- unit/unittest.sh@67 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:09:34.391 00:09:34.391 00:09:34.391 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.391 http://cunit.sourceforge.net/ 00:09:34.391 00:09:34.391 00:09:34.391 Suite: iscsi_suite 00:09:34.391 Test: param_negotiation_test ...passed 00:09:34.391 Test: list_negotiation_test ...passed 00:09:34.391 Test: parse_valid_test ...passed 00:09:34.391 Test: parse_invalid_test ...[2024-05-15 02:10:22.256311] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 
201:iscsi_parse_param: *ERROR*: '=' not found 00:09:34.391 [2024-05-15 02:10:22.256545] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:34.391 passed 00:09:34.391 00:09:34.391 [2024-05-15 02:10:22.256565] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:09:34.391 [2024-05-15 02:10:22.256598] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:09:34.391 [2024-05-15 02:10:22.256614] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:09:34.391 [2024-05-15 02:10:22.256629] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:09:34.391 [2024-05-15 02:10:22.256644] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:09:34.391 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.391 suites 1 1 n/a 0 0 00:09:34.391 tests 4 4 4 0 0 00:09:34.391 asserts 161 161 161 0 n/a 00:09:34.391 00:09:34.391 Elapsed time = 0.000 seconds 00:09:34.391 02:10:22 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:09:34.391 00:09:34.391 00:09:34.391 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.391 http://cunit.sourceforge.net/ 00:09:34.391 00:09:34.391 00:09:34.391 Suite: iscsi_target_node_suite 00:09:34.391 Test: add_lun_test_cases ...[2024-05-15 02:10:22.263580] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:09:34.391 [2024-05-15 02:10:22.263865] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:09:34.391 [2024-05-15 02:10:22.263892] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:34.391 [2024-05-15 02:10:22.263911] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:34.391 [2024-05-15 02:10:22.263928] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:09:34.391 passed 00:09:34.391 Test: allow_any_allowed ...passed 00:09:34.391 Test: allow_ipv6_allowed ...passed 00:09:34.391 Test: allow_ipv6_denied ...passed 00:09:34.391 Test: allow_ipv6_invalid ...passed 00:09:34.391 Test: allow_ipv4_allowed ...passed 00:09:34.391 Test: allow_ipv4_denied ...passed 00:09:34.391 Test: allow_ipv4_invalid ...passed 00:09:34.391 Test: node_access_allowed ...passed 00:09:34.391 Test: node_access_denied_by_empty_netmask ...passed 00:09:34.391 Test: node_access_multi_initiator_groups_cases ...passed 00:09:34.391 Test: allow_iscsi_name_multi_maps_case ...passed 00:09:34.391 Test: chap_param_test_cases ...[2024-05-15 02:10:22.264084] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:09:34.391 [2024-05-15 02:10:22.264110] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:09:34.391 [2024-05-15 02:10:22.264127] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 
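The param_ut failures above ("'=' not found", "Empty key", "Overflow Val", "Key name length is bigger than 63", "Duplicated Key") are the negative cases for iSCSI text key=value negotiation. The checker below is a minimal illustrative sketch of those same rules, not the implementation in lib/iscsi/param.c; the function and parameter names are invented for the example, the 63-character key limit is taken from the message above, and duplicate-key detection is left out because it needs the set of keys already seen.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch of the checks exercised above; not lib/iscsi/param.c. */
    static bool example_key_value_is_valid(const char *pair, size_t max_val_len)
    {
        const char *eq = strchr(pair, '=');

        if (eq == NULL) {
            fprintf(stderr, "'=' not found\n");
            return false;
        }
        if (eq == pair) {
            fprintf(stderr, "Empty key\n");
            return false;
        }
        if ((size_t)(eq - pair) > 63) {
            fprintf(stderr, "Key name length is bigger than 63\n");
            return false;
        }
        if (strlen(eq + 1) > max_val_len) {
            fprintf(stderr, "Overflow Val %zu\n", strlen(eq + 1));
            return false;
        }
        return true;
    }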
00:09:34.391 [2024-05-15 02:10:22.264144] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:09:34.391 passed 00:09:34.391 00:09:34.391 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.391 suites 1 1 n/a 0 0 00:09:34.391 tests 13 13 13 0 0 00:09:34.391 asserts 50 50 50 0 n/a 00:09:34.391 00:09:34.391 Elapsed time = 0.000 seconds 00:09:34.391 [2024-05-15 02:10:22.264160] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:09:34.391 02:10:22 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:09:34.391 00:09:34.391 00:09:34.391 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.391 http://cunit.sourceforge.net/ 00:09:34.391 00:09:34.391 00:09:34.391 Suite: iscsi_suite 00:09:34.391 Test: op_login_check_target_test ...passed 00:09:34.391 Test: op_login_session_normal_test ...passed 00:09:34.391 Test: maxburstlength_test ...[2024-05-15 02:10:22.269439] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:09:34.391 [2024-05-15 02:10:22.269576] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:34.391 [2024-05-15 02:10:22.269590] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:34.391 [2024-05-15 02:10:22.269598] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:34.391 [2024-05-15 02:10:22.269625] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:09:34.391 [2024-05-15 02:10:22.269634] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:34.391 [2024-05-15 02:10:22.269649] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:09:34.391 [2024-05-15 02:10:22.269667] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:34.391 [2024-05-15 02:10:22.269715] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:34.391 [2024-05-15 02:10:22.269726] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4557:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:09:34.391 passed 00:09:34.391 Test: underflow_for_read_transfer_test ...passed 00:09:34.391 Test: underflow_for_zero_read_transfer_test ...passed 00:09:34.391 Test: underflow_for_request_sense_test ...passed 00:09:34.391 Test: underflow_for_check_condition_test ...passed 00:09:34.391 Test: add_transfer_task_test ...passed 00:09:34.391 Test: get_transfer_task_test ...passed 00:09:34.391 Test: del_transfer_task_test ...passed 00:09:34.391 Test: clear_all_transfer_tasks_test ...passed 00:09:34.391 Test: build_iovs_test ...passed 00:09:34.391 Test: build_iovs_with_md_test ...passed 00:09:34.391 Test: pdu_hdr_op_login_test ...passed 00:09:34.391 Test: pdu_hdr_op_text_test ...passed 00:09:34.391 Test: 
pdu_hdr_op_logout_test ...passed 00:09:34.391 Test: pdu_hdr_op_scsi_test ...passed 00:09:34.391 Test: pdu_hdr_op_task_mgmt_test ...passed 00:09:34.391 Test: pdu_hdr_op_nopout_test ...passed 00:09:34.391 Test: pdu_hdr_op_data_test ...[2024-05-15 02:10:22.269851] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:09:34.391 [2024-05-15 02:10:22.269863] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:09:34.391 [2024-05-15 02:10:22.269871] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:09:34.391 [2024-05-15 02:10:22.269881] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2247:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:34.391 [2024-05-15 02:10:22.269889] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:09:34.391 [2024-05-15 02:10:22.269897] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2292:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:09:34.391 [2024-05-15 02:10:22.269906] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2523:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:09:34.391 [2024-05-15 02:10:22.269918] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:34.391 [2024-05-15 02:10:22.269928] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:34.391 [2024-05-15 02:10:22.269935] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:09:34.391 [2024-05-15 02:10:22.269943] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:34.391 [2024-05-15 02:10:22.269951] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3411:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:09:34.391 [2024-05-15 02:10:22.269959] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:34.391 [2024-05-15 02:10:22.269969] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:09:34.391 [2024-05-15 02:10:22.269978] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:09:34.391 [2024-05-15 02:10:22.269989] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:09:34.391 [2024-05-15 02:10:22.269997] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:34.391 [2024-05-15 02:10:22.270004] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:34.391 [2024-05-15 02:10:22.270011] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:09:34.391 
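Each unit test binary in this run prints the same banner ("CUnit - A unit testing framework for C - Version 2.1-3") followed by a Suite/Test listing and a Run Summary; that is the standard CUnit basic-interface output. A generic registration sketch is below to make the structure of those blocks easier to read; it is not taken from the SPDK sources, and the suite and test names are placeholders.

    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2); /* each assert adds to the "asserts" column of the Run Summary */
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the per-test "...passed" lines seen in this log */
        CU_basic_run_tests();
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }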
[2024-05-15 02:10:22.270020] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:09:34.391 [2024-05-15 02:10:22.270028] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:34.391 [2024-05-15 02:10:22.270035] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:34.391 [2024-05-15 02:10:22.270042] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4223:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:09:34.391 [2024-05-15 02:10:22.270054] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:09:34.391 [2024-05-15 02:10:22.270062] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:09:34.391 [2024-05-15 02:10:22.270069] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:09:34.391 passed 00:09:34.391 Test: empty_text_with_cbit_test ...passed 00:09:34.391 Test: pdu_payload_read_test ...[2024-05-15 02:10:22.270341] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4638:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:09:34.391 passed 00:09:34.391 Test: data_out_pdu_sequence_test ...passed 00:09:34.392 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:09:34.392 00:09:34.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.392 suites 1 1 n/a 0 0 00:09:34.392 tests 24 24 24 0 0 00:09:34.392 asserts 150253 150253 150253 0 n/a 00:09:34.392 00:09:34.392 Elapsed time = 0.000 seconds 00:09:34.392 02:10:22 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:09:34.392 00:09:34.392 00:09:34.392 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.392 http://cunit.sourceforge.net/ 00:09:34.392 00:09:34.392 00:09:34.392 Suite: init_grp_suite 00:09:34.392 Test: create_initiator_group_success_case ...passed 00:09:34.392 Test: find_initiator_group_success_case ...passed 00:09:34.392 Test: register_initiator_group_twice_case ...passed 00:09:34.392 Test: add_initiator_name_success_case ...passed 00:09:34.392 Test: add_initiator_name_fail_case ...passed 00:09:34.392 Test: delete_all_initiator_names_success_case ...passed 00:09:34.392 Test: add_netmask_success_case ...passed 00:09:34.392 Test: add_netmask_fail_case ...passed 00:09:34.392 Test: delete_all_netmasks_success_case ...passed 00:09:34.392 Test: initiator_name_overwrite_all_to_any_case ...passed 00:09:34.392 Test: netmask_overwrite_all_to_any_case ...passed 00:09:34.392 Test: add_delete_initiator_names_case ...passed 00:09:34.392 Test: add_duplicated_initiator_names_case ...passed 00:09:34.392 Test: delete_nonexisting_initiator_names_case ...passed 00:09:34.392 Test: add_delete_netmasks_case ...passed 00:09:34.392 Test: add_duplicated_netmasks_case ...passed 00:09:34.392 Test: delete_nonexisting_netmasks_case ...passed 00:09:34.392 00:09:34.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.392 suites 1 1 n/a 0 0 00:09:34.392 tests 17 17 17 0 0 00:09:34.392 asserts 108 108 108 0 n/a 00:09:34.392 00:09:34.392 Elapsed time = 0.000 seconds 00:09:34.392 [2024-05-15 02:10:22.276290] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:09:34.392 [2024-05-15 02:10:22.276462] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:09:34.392 02:10:22 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:09:34.392 00:09:34.392 00:09:34.392 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.392 http://cunit.sourceforge.net/ 00:09:34.392 00:09:34.392 00:09:34.392 Suite: portal_grp_suite 00:09:34.392 Test: portal_create_ipv4_normal_case ...passed 00:09:34.392 Test: portal_create_ipv6_normal_case ...passed 00:09:34.392 Test: portal_create_ipv4_wildcard_case ...passed 00:09:34.392 Test: portal_create_ipv6_wildcard_case ...passed 00:09:34.392 Test: portal_create_twice_case ...passed 00:09:34.392 Test: portal_grp_register_unregister_case ...passed 00:09:34.392 Test: portal_grp_register_twice_case ...passed 00:09:34.392 Test: portal_grp_add_delete_case ...passed 00:09:34.392 Test: portal_grp_add_delete_twice_case ...passed 00:09:34.392 00:09:34.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.392 suites 1 1 n/a 0 0 00:09:34.392 tests 9 9 9 0 0 00:09:34.392 asserts 44 44 44 0 n/a 00:09:34.392 00:09:34.392 Elapsed time = 0.000 seconds 00:09:34.392 [2024-05-15 02:10:22.281952] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:09:34.392 00:09:34.392 real 0m0.041s 00:09:34.392 user 0m0.027s 00:09:34.392 sys 0m0.028s 00:09:34.392 02:10:22 unittest.unittest_iscsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.392 02:10:22 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:09:34.392 ************************************ 00:09:34.392 END TEST unittest_iscsi 00:09:34.392 ************************************ 00:09:34.392 02:10:22 unittest -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:09:34.392 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.392 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.392 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.392 ************************************ 00:09:34.392 START TEST unittest_json 00:09:34.392 ************************************ 00:09:34.392 02:10:22 unittest.unittest_json -- common/autotest_common.sh@1121 -- # unittest_json 00:09:34.392 02:10:22 unittest.unittest_json -- unit/unittest.sh@75 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:09:34.392 00:09:34.392 00:09:34.392 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.392 http://cunit.sourceforge.net/ 00:09:34.392 00:09:34.392 00:09:34.392 Suite: json 00:09:34.392 Test: test_parse_literal ...passed 00:09:34.392 Test: test_parse_string_simple ...passed 00:09:34.392 Test: test_parse_string_control_chars ...passed 00:09:34.392 Test: test_parse_string_utf8 ...passed 00:09:34.392 Test: test_parse_string_escapes_twochar ...passed 00:09:34.392 Test: test_parse_string_escapes_unicode ...passed 00:09:34.392 Test: test_parse_number ...passed 00:09:34.392 Test: test_parse_array ...passed 00:09:34.392 Test: test_parse_object ...passed 00:09:34.392 Test: test_parse_nesting ...passed 00:09:34.392 Test: test_parse_comment ...passed 00:09:34.392 00:09:34.392 Run 
Summary: Type Total Ran Passed Failed Inactive 00:09:34.392 suites 1 1 n/a 0 0 00:09:34.392 tests 11 11 11 0 0 00:09:34.392 asserts 1516 1516 1516 0 n/a 00:09:34.392 00:09:34.392 Elapsed time = 0.000 seconds 00:09:34.392 02:10:22 unittest.unittest_json -- unit/unittest.sh@76 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:09:34.392 00:09:34.392 00:09:34.392 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.392 http://cunit.sourceforge.net/ 00:09:34.392 00:09:34.392 00:09:34.392 Suite: json 00:09:34.392 Test: test_strequal ...passed 00:09:34.392 Test: test_num_to_uint16 ...passed 00:09:34.392 Test: test_num_to_int32 ...passed 00:09:34.392 Test: test_num_to_uint64 ...passed 00:09:34.392 Test: test_decode_object ...passed 00:09:34.392 Test: test_decode_array ...passed 00:09:34.392 Test: test_decode_bool ...passed 00:09:34.392 Test: test_decode_uint16 ...passed 00:09:34.392 Test: test_decode_int32 ...passed 00:09:34.392 Test: test_decode_uint32 ...passed 00:09:34.392 Test: test_decode_uint64 ...passed 00:09:34.392 Test: test_decode_string ...passed 00:09:34.392 Test: test_decode_uuid ...passed 00:09:34.392 Test: test_find ...passed 00:09:34.392 Test: test_find_array ...passed 00:09:34.392 Test: test_iterating ...passed 00:09:34.392 Test: test_free_object ...passed 00:09:34.392 00:09:34.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.392 suites 1 1 n/a 0 0 00:09:34.392 tests 17 17 17 0 0 00:09:34.392 asserts 236 236 236 0 n/a 00:09:34.392 00:09:34.392 Elapsed time = 0.000 seconds 00:09:34.392 02:10:22 unittest.unittest_json -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:09:34.392 00:09:34.392 00:09:34.392 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.392 http://cunit.sourceforge.net/ 00:09:34.392 00:09:34.392 00:09:34.392 Suite: json 00:09:34.392 Test: test_write_literal ...passed 00:09:34.392 Test: test_write_string_simple ...passed 00:09:34.392 Test: test_write_string_escapes ...passed 00:09:34.392 Test: test_write_string_utf16le ...passed 00:09:34.392 Test: test_write_number_int32 ...passed 00:09:34.392 Test: test_write_number_uint32 ...passed 00:09:34.392 Test: test_write_number_uint128 ...passed 00:09:34.392 Test: test_write_string_number_uint128 ...passed 00:09:34.392 Test: test_write_number_int64 ...passed 00:09:34.392 Test: test_write_number_uint64 ...passed 00:09:34.392 Test: test_write_number_double ...passed 00:09:34.392 Test: test_write_uuid ...passed 00:09:34.392 Test: test_write_array ...passed 00:09:34.392 Test: test_write_object ...passed 00:09:34.392 Test: test_write_nesting ...passed 00:09:34.392 Test: test_write_val ...passed 00:09:34.392 00:09:34.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.392 suites 1 1 n/a 0 0 00:09:34.392 tests 16 16 16 0 0 00:09:34.392 asserts 918 918 918 0 n/a 00:09:34.392 00:09:34.392 Elapsed time = 0.000 seconds 00:09:34.392 02:10:22 unittest.unittest_json -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:09:34.392 00:09:34.392 00:09:34.392 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.392 http://cunit.sourceforge.net/ 00:09:34.392 00:09:34.392 00:09:34.392 Suite: jsonrpc 00:09:34.392 Test: test_parse_request ...passed 00:09:34.392 Test: test_parse_request_streaming ...passed 00:09:34.392 00:09:34.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.392 suites 1 1 n/a 0 0 00:09:34.392 
tests 2 2 2 0 0 00:09:34.392 asserts 289 289 289 0 n/a 00:09:34.392 00:09:34.392 Elapsed time = 0.000 seconds 00:09:34.392 00:09:34.392 real 0m0.028s 00:09:34.392 user 0m0.017s 00:09:34.392 sys 0m0.014s 00:09:34.392 02:10:22 unittest.unittest_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.392 02:10:22 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:09:34.392 ************************************ 00:09:34.392 END TEST unittest_json 00:09:34.392 ************************************ 00:09:34.392 02:10:22 unittest -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:09:34.392 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.392 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.392 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.392 ************************************ 00:09:34.392 START TEST unittest_rpc 00:09:34.392 ************************************ 00:09:34.393 02:10:22 unittest.unittest_rpc -- common/autotest_common.sh@1121 -- # unittest_rpc 00:09:34.393 02:10:22 unittest.unittest_rpc -- unit/unittest.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:09:34.652 00:09:34.652 00:09:34.652 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.652 http://cunit.sourceforge.net/ 00:09:34.652 00:09:34.652 00:09:34.652 Suite: rpc 00:09:34.652 Test: test_jsonrpc_handler ...passed 00:09:34.652 Test: test_spdk_rpc_is_method_allowed ...passed 00:09:34.652 Test: test_rpc_get_methods ...[2024-05-15 02:10:22.392012] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:09:34.652 passed 00:09:34.652 Test: test_rpc_spdk_get_version ...passed 00:09:34.652 Test: test_spdk_rpc_listen_close ...passed 00:09:34.652 Test: test_rpc_run_multiple_servers ...passed 00:09:34.652 00:09:34.652 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.652 suites 1 1 n/a 0 0 00:09:34.652 tests 6 6 6 0 0 00:09:34.652 asserts 23 23 23 0 n/a 00:09:34.652 00:09:34.652 Elapsed time = 0.000 seconds 00:09:34.652 00:09:34.652 real 0m0.007s 00:09:34.652 user 0m0.000s 00:09:34.652 sys 0m0.008s 00:09:34.652 02:10:22 unittest.unittest_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.652 02:10:22 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 ************************************ 00:09:34.653 END TEST unittest_rpc 00:09:34.653 ************************************ 00:09:34.653 02:10:22 unittest -- unit/unittest.sh@245 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:34.653 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.653 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.653 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 ************************************ 00:09:34.653 START TEST unittest_notify 00:09:34.653 ************************************ 00:09:34.653 02:10:22 unittest.unittest_notify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:34.653 00:09:34.653 00:09:34.653 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.653 http://cunit.sourceforge.net/ 00:09:34.653 00:09:34.653 00:09:34.653 Suite: app_suite 00:09:34.653 Test: notify ...passed 00:09:34.653 00:09:34.653 Run Summary: Type Total Ran Passed Failed 
Inactive 00:09:34.653 suites 1 1 n/a 0 0 00:09:34.653 tests 1 1 1 0 0 00:09:34.653 asserts 13 13 13 0 n/a 00:09:34.653 00:09:34.653 Elapsed time = 0.000 seconds 00:09:34.653 00:09:34.653 real 0m0.005s 00:09:34.653 user 0m0.005s 00:09:34.653 sys 0m0.004s 00:09:34.653 02:10:22 unittest.unittest_notify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.653 ************************************ 00:09:34.653 END TEST unittest_notify 00:09:34.653 ************************************ 00:09:34.653 02:10:22 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 02:10:22 unittest -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:09:34.653 02:10:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.653 02:10:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.653 02:10:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 ************************************ 00:09:34.653 START TEST unittest_nvme 00:09:34.653 ************************************ 00:09:34.653 02:10:22 unittest.unittest_nvme -- common/autotest_common.sh@1121 -- # unittest_nvme 00:09:34.653 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@86 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:09:34.653 00:09:34.653 00:09:34.653 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.653 http://cunit.sourceforge.net/ 00:09:34.653 00:09:34.653 00:09:34.653 Suite: nvme 00:09:34.653 Test: test_opc_data_transfer ...passed 00:09:34.653 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:09:34.653 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:09:34.653 Test: test_trid_parse_and_compare ...[2024-05-15 02:10:22.474128] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:09:34.653 passed 00:09:34.653 Test: test_trid_trtype_str ...passed 00:09:34.653 Test: test_trid_adrfam_str ...passed 00:09:34.653 Test: test_nvme_ctrlr_probe ...passed 00:09:34.653 Test: test_spdk_nvme_probe ...passed 00:09:34.653 Test: test_spdk_nvme_connect ...passed 00:09:34.653 Test: test_nvme_ctrlr_probe_internal ...[2024-05-15 02:10:22.474297] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:34.653 [2024-05-15 02:10:22.474310] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1189:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:09:34.653 [2024-05-15 02:10:22.474318] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:34.653 [2024-05-15 02:10:22.474326] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:09:34.653 [2024-05-15 02:10:22.474333] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:34.653 [2024-05-15 02:10:22.474399] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:34.653 [2024-05-15 02:10:22.474414] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:34.653 [2024-05-15 02:10:22.474421] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:34.653 [2024-05-15 02:10:22.474430] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 
813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:09:34.653 [2024-05-15 02:10:22.474437] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:34.653 [2024-05-15 02:10:22.474451] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:09:34.653 [2024-05-15 02:10:22.474483] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:34.653 [2024-05-15 02:10:22.474493] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:09:34.653 [2024-05-15 02:10:22.474507] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:34.653 [2024-05-15 02:10:22.474515] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:09:34.653 passed 00:09:34.653 Test: test_nvme_init_controllers ...passed 00:09:34.653 Test: test_nvme_driver_init ...[2024-05-15 02:10:22.474525] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:09:34.653 [2024-05-15 02:10:22.474537] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:09:34.653 [2024-05-15 02:10:22.474545] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:34.653 passed 00:09:34.653 Test: test_spdk_nvme_detach ...passed 00:09:34.653 Test: test_nvme_completion_poll_cb ...passed 00:09:34.653 Test: test_nvme_user_copy_cmd_complete ...[2024-05-15 02:10:22.585591] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:09:34.653 passed 00:09:34.653 Test: test_nvme_allocate_request_null ...passed 00:09:34.653 Test: test_nvme_allocate_request ...passed 00:09:34.653 Test: test_nvme_free_request ...passed 00:09:34.653 Test: test_nvme_allocate_request_user_copy ...passed 00:09:34.653 Test: test_nvme_robust_mutex_init_shared ...passed 00:09:34.653 Test: test_nvme_request_check_timeout ...passed 00:09:34.653 Test: test_nvme_wait_for_completion ...passed 00:09:34.653 Test: test_spdk_nvme_parse_func ...passed 00:09:34.653 Test: test_spdk_nvme_detach_async ...passed 00:09:34.653 Test: test_nvme_parse_addr ...passed 00:09:34.653 00:09:34.653 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.653 suites 1 1 n/a 0 0 00:09:34.653 tests 25 25 25 0 0 00:09:34.653 asserts 326 326 326 0 n/a 00:09:34.653 00:09:34.653 Elapsed time = 0.000 seconds 00:09:34.653 [2024-05-15 02:10:22.585928] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:09:34.653 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@87 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:09:34.653 00:09:34.653 00:09:34.653 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.653 http://cunit.sourceforge.net/ 00:09:34.653 00:09:34.653 00:09:34.653 Suite: nvme_ctrlr 00:09:34.653 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-05-15 02:10:22.592959] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 passed 00:09:34.653 
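The trid failures in nvme_ut above come from spdk_nvme_transport_id_parse rejecting malformed strings (a key without ':' or '=', a key longer than 31 characters, a key with no value); a well-formed transport ID is a space-separated list of key:value pairs. The sketch below shows a plausible call to that function; the PCIe address is a placeholder, not a device from this run.

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    /* Sketch of parsing a well-formed transport ID string; the BDF is a placeholder. */
    int example_parse_trid(void)
    {
        struct spdk_nvme_transport_id trid;

        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:04.0") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return -1;
        }
        printf("trtype=%d traddr=%s\n", trid.trtype, trid.traddr);
        return 0;
    }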
Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-05-15 02:10:22.594313] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 passed 00:09:34.653 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-05-15 02:10:22.595575] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 passed 00:09:34.653 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-05-15 02:10:22.596770] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 passed 00:09:34.653 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-05-15 02:10:22.597973] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 [2024-05-15 02:10:22.599117] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 02:10:22.600273] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 02:10:22.601440] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:34.653 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-05-15 02:10:22.603697] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 [2024-05-15 02:10:22.605911] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 02:10:22.607037] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:34.653 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-05-15 02:10:22.609278] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 [2024-05-15 02:10:22.610401] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 02:10:22.612654] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:34.653 Test: test_nvme_ctrlr_init_delay ...[2024-05-15 02:10:22.614945] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 passed 00:09:34.653 Test: test_alloc_io_qpair_rr_1 ...[2024-05-15 02:10:22.616102] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.653 [2024-05-15 02:10:22.616145] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:34.653 passed 00:09:34.653 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-05-15 
02:10:22.616171] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:34.653 [2024-05-15 02:10:22.616181] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:34.653 [2024-05-15 02:10:22.616190] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:34.653 passed 00:09:34.653 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:09:34.653 Test: test_alloc_io_qpair_wrr_1 ...passed 00:09:34.653 Test: test_alloc_io_qpair_wrr_2 ...passed 00:09:34.654 Test: test_spdk_nvme_ctrlr_update_firmware ...passed 00:09:34.654 Test: test_nvme_ctrlr_fail ...passed 00:09:34.654 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:09:34.654 Test: test_nvme_ctrlr_set_supported_features ...passed 00:09:34.654 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:09:34.654 Test: test_nvme_ctrlr_test_active_ns ...[2024-05-15 02:10:22.616247] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.654 [2024-05-15 02:10:22.616269] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.654 [2024-05-15 02:10:22.616281] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:34.654 [2024-05-15 02:10:22.616304] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:09:34.654 [2024-05-15 02:10:22.616319] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:34.654 [2024-05-15 02:10:22.616329] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:09:34.654 [2024-05-15 02:10:22.616338] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:34.654 [2024-05-15 02:10:22.616348] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:09:34.654 [2024-05-15 02:10:22.616389] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.914 passed 00:09:34.914 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:09:34.914 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:09:34.914 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:09:34.914 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-05-15 02:10:22.656358] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.914 passed 00:09:34.914 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-05-15 02:10:22.663007] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.914 passed 00:09:34.914 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-05-15 02:10:22.664170] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.914 [2024-05-15 02:10:22.664203] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2884:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:09:34.914 passed 00:09:34.914 Test: test_alloc_io_qpair_fail ...[2024-05-15 02:10:22.665361] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.914 passed 00:09:34.914 Test: test_nvme_ctrlr_add_remove_process ...passed 00:09:34.914 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:09:34.914 Test: test_nvme_ctrlr_set_state ...passed 00:09:34.914 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-05-15 02:10:22.665398] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:09:34.914 [2024-05-15 02:10:22.665470] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1479:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:09:34.915 [2024-05-15 02:10:22.665499] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-05-15 02:10:22.670563] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_ns_mgmt ...[2024-05-15 02:10:22.680855] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_reset ...[2024-05-15 02:10:22.682079] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_aer_callback ...[2024-05-15 02:10:22.682170] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-05-15 02:10:22.683369] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:09:34.915 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:09:34.915 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-05-15 02:10:22.684665] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:09:34.915 Test: test_nvme_ctrlr_ana_resize ...[2024-05-15 02:10:22.685847] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:09:34.915 Test: test_nvme_transport_ctrlr_ready ...[2024-05-15 02:10:22.687040] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:09:34.915 [2024-05-15 02:10:22.687071] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4081:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ctrlr_disable ...[2024-05-15 02:10:22.687099] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:34.915 passed 00:09:34.915 00:09:34.915 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.915 suites 1 1 n/a 0 0 00:09:34.915 tests 43 43 43 0 0 00:09:34.915 asserts 10418 10418 10418 0 n/a 00:09:34.915 00:09:34.915 Elapsed time = 0.047 seconds 00:09:34.915 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:09:34.915 00:09:34.915 
00:09:34.915 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.915 http://cunit.sourceforge.net/ 00:09:34.915 00:09:34.915 00:09:34.915 Suite: nvme_ctrlr_cmd 00:09:34.915 Test: test_get_log_pages ...passed 00:09:34.915 Test: test_set_feature_cmd ...passed 00:09:34.915 Test: test_set_feature_ns_cmd ...passed 00:09:34.915 Test: test_get_feature_cmd ...passed 00:09:34.915 Test: test_get_feature_ns_cmd ...passed 00:09:34.915 Test: test_abort_cmd ...passed 00:09:34.915 Test: test_set_host_id_cmds ...[2024-05-15 02:10:22.698952] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:09:34.915 passed 00:09:34.915 Test: test_io_cmd_raw_no_payload_build ...passed 00:09:34.915 Test: test_io_raw_cmd ...passed 00:09:34.915 Test: test_io_raw_cmd_with_md ...passed 00:09:34.915 Test: test_namespace_attach ...passed 00:09:34.915 Test: test_namespace_detach ...passed 00:09:34.915 Test: test_namespace_create ...passed 00:09:34.915 Test: test_namespace_delete ...passed 00:09:34.915 Test: test_doorbell_buffer_config ...passed 00:09:34.915 Test: test_format_nvme ...passed 00:09:34.915 Test: test_fw_commit ...passed 00:09:34.915 Test: test_fw_image_download ...passed 00:09:34.915 Test: test_sanitize ...passed 00:09:34.915 Test: test_directive ...passed 00:09:34.915 Test: test_nvme_request_add_abort ...passed 00:09:34.915 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:09:34.915 Test: test_nvme_ctrlr_cmd_identify ...passed 00:09:34.915 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:09:34.915 00:09:34.915 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.915 suites 1 1 n/a 0 0 00:09:34.915 tests 24 24 24 0 0 00:09:34.915 asserts 198 198 198 0 n/a 00:09:34.915 00:09:34.915 Elapsed time = 0.000 seconds 00:09:34.915 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:09:34.915 00:09:34.915 00:09:34.915 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.915 http://cunit.sourceforge.net/ 00:09:34.915 00:09:34.915 00:09:34.915 Suite: nvme_ctrlr_cmd 00:09:34.915 Test: test_geometry_cmd ...passed 00:09:34.915 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:09:34.915 00:09:34.915 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.915 suites 1 1 n/a 0 0 00:09:34.915 tests 2 2 2 0 0 00:09:34.915 asserts 7 7 7 0 n/a 00:09:34.915 00:09:34.915 Elapsed time = 0.000 seconds 00:09:34.915 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:09:34.915 00:09:34.915 00:09:34.915 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.915 http://cunit.sourceforge.net/ 00:09:34.915 00:09:34.915 00:09:34.915 Suite: nvme 00:09:34.915 Test: test_nvme_ns_construct ...passed 00:09:34.915 Test: test_nvme_ns_uuid ...passed 00:09:34.915 Test: test_nvme_ns_csi ...passed 00:09:34.915 Test: test_nvme_ns_data ...passed 00:09:34.915 Test: test_nvme_ns_set_identify_data ...passed 00:09:34.915 Test: test_spdk_nvme_ns_get_values ...passed 00:09:34.915 Test: test_spdk_nvme_ns_is_active ...passed 00:09:34.915 Test: spdk_nvme_ns_supports ...passed 00:09:34.915 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:09:34.915 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:09:34.915 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:09:34.915 Test: test_nvme_ns_find_id_desc ...passed 
00:09:34.915 00:09:34.915 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.915 suites 1 1 n/a 0 0 00:09:34.915 tests 12 12 12 0 0 00:09:34.915 asserts 83 83 83 0 n/a 00:09:34.915 00:09:34.915 Elapsed time = 0.000 seconds 00:09:34.915 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:09:34.915 00:09:34.915 00:09:34.915 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.915 http://cunit.sourceforge.net/ 00:09:34.915 00:09:34.915 00:09:34.915 Suite: nvme_ns_cmd 00:09:34.915 Test: split_test ...passed 00:09:34.915 Test: split_test2 ...passed 00:09:34.915 Test: split_test3 ...passed 00:09:34.915 Test: split_test4 ...passed 00:09:34.915 Test: test_nvme_ns_cmd_flush ...passed 00:09:34.915 Test: test_nvme_ns_cmd_dataset_management ...passed 00:09:34.915 Test: test_nvme_ns_cmd_copy ...passed 00:09:34.915 Test: test_io_flags ...passed 00:09:34.915 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:09:34.915 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:09:34.915 Test: test_nvme_ns_cmd_reservation_register ...passed 00:09:34.915 Test: test_nvme_ns_cmd_reservation_release ...[2024-05-15 02:10:22.716541] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:09:34.915 passed 00:09:34.915 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:09:34.915 Test: test_nvme_ns_cmd_reservation_report ...passed 00:09:34.915 Test: test_cmd_child_request ...passed 00:09:34.915 Test: test_nvme_ns_cmd_readv ...passed 00:09:34.915 Test: test_nvme_ns_cmd_read_with_md ...passed 00:09:34.915 Test: test_nvme_ns_cmd_writev ...passed 00:09:34.915 Test: test_nvme_ns_cmd_write_with_md ...passed 00:09:34.915 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:09:34.915 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:09:34.915 Test: test_nvme_ns_cmd_comparev ...passed 00:09:34.915 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:09:34.915 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:09:34.915 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:09:34.915 Test: test_nvme_ns_cmd_setup_request ...passed 00:09:34.915 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:09:34.915 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-05-15 02:10:22.716776] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:09:34.915 [2024-05-15 02:10:22.716895] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:34.915 passed 00:09:34.915 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:09:34.915 Test: test_nvme_ns_cmd_verify ...passed 00:09:34.915 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:09:34.915 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:09:34.915 00:09:34.915 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.915 suites 1 1 n/a 0 0 00:09:34.915 tests 32 32 32 0 0 00:09:34.915 asserts 550 550 550 0 n/a 00:09:34.915 00:09:34.915 Elapsed time = 0.000 seconds 00:09:34.915 [2024-05-15 02:10:22.716913] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:34.915 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:09:34.915 00:09:34.915 00:09:34.915 CUnit - A unit testing 
framework for C - Version 2.1-3 00:09:34.915 http://cunit.sourceforge.net/ 00:09:34.915 00:09:34.916 00:09:34.916 Suite: nvme_ns_cmd 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:09:34.916 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:09:34.916 00:09:34.916 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.916 suites 1 1 n/a 0 0 00:09:34.916 tests 12 12 12 0 0 00:09:34.916 asserts 123 123 123 0 n/a 00:09:34.916 00:09:34.916 Elapsed time = 0.000 seconds 00:09:34.916 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:09:34.916 00:09:34.916 00:09:34.916 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.916 http://cunit.sourceforge.net/ 00:09:34.916 00:09:34.916 00:09:34.916 Suite: nvme_qpair 00:09:34.916 Test: test3 ...passed 00:09:34.916 Test: test_ctrlr_failed ...passed 00:09:34.916 Test: struct_packing ...passed 00:09:34.916 Test: test_nvme_qpair_process_completions ...[2024-05-15 02:10:22.728925] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:34.916 [2024-05-15 02:10:22.729151] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:34.916 passed 00:09:34.916 Test: test_nvme_completion_is_retry ...passed 00:09:34.916 Test: test_get_status_string ...passed 00:09:34.916 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:09:34.916 Test: test_nvme_qpair_submit_request ...passed 00:09:34.916 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:09:34.916 Test: test_nvme_qpair_manual_complete_request ...passed 00:09:34.916 Test: test_nvme_qpair_init_deinit ...passed 00:09:34.916 Test: test_nvme_get_sgl_print_info ...passed 00:09:34.916 00:09:34.916 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.916 suites 1 1 n/a 0 0 00:09:34.916 tests 12 12 12 0 0 00:09:34.916 asserts 154 154 154 0 n/a 00:09:34.916 00:09:34.916 Elapsed time = 0.000 seconds 00:09:34.916 [2024-05-15 02:10:22.729240] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:09:34.916 [2024-05-15 02:10:22.729266] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:09:34.916 [2024-05-15 02:10:22.729347] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:34.916 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@94 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:09:34.916 00:09:34.916 00:09:34.916 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.916 http://cunit.sourceforge.net/ 00:09:34.916 00:09:34.916 00:09:34.916 Suite: nvme_pcie 00:09:34.916 Test: test_prp_list_append ...[2024-05-15 02:10:22.735054] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:34.916 [2024-05-15 02:10:22.735250] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:09:34.916 passed 00:09:34.916 Test: test_nvme_pcie_hotplug_monitor ...passed 00:09:34.916 Test: test_shadow_doorbell_update ...passed 00:09:34.916 Test: test_build_contig_hw_sgl_request ...passed 00:09:34.916 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:09:34.916 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:09:34.916 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:09:34.916 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:09:34.916 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:09:34.916 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:09:34.916 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:09:34.916 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:09:34.916 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:09:34.916 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed[2024-05-15 02:10:22.735267] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:09:34.916 [2024-05-15 02:10:22.735308] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:34.916 [2024-05-15 02:10:22.735328] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:34.916 [2024-05-15 02:10:22.735406] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:34.916 [2024-05-15 02:10:22.735434] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
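The nvme_pcie_ut errors above ("virt_addr 0x100001 not dword aligned", "PRP 2 not page aligned (0x900800)", "out of PRP entries") reflect the NVMe PRP rules: the first entry may start at a dword-aligned offset inside a page, every later entry must be page aligned, and the list length is bounded. The helper below is an illustrative restatement of the two alignment checks, not nvme_pcie_prp_list_append itself, and the 4 KiB page size is an assumption for the example.

    #include <stdbool.h>
    #include <stdint.h>

    #define EXAMPLE_PAGE_SIZE 4096u /* assumption; the driver uses the controller's actual page size */

    /* Restates the alignment rules behind the errors above; not nvme_pcie_prp_list_append. */
    static bool example_prp_entry_ok(uintptr_t virt_addr, bool first_entry)
    {
        if ((virt_addr & 0x3u) != 0) {
            return false; /* "virt_addr ... not dword aligned" */
        }
        if (!first_entry && (virt_addr % EXAMPLE_PAGE_SIZE) != 0) {
            return false; /* "PRP 2 not page aligned" */
        }
        return true;
    }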
00:09:34.916 [2024-05-15 02:10:22.735449] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:09:34.916 [2024-05-15 02:10:22.735463] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:09:34.916 [2024-05-15 02:10:22.735475] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:09:34.916 00:09:34.916 00:09:34.916 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.916 suites 1 1 n/a 0 0 00:09:34.916 tests 14 14 14 0 0 00:09:34.916 asserts 235 235 235 0 n/a 00:09:34.916 00:09:34.916 Elapsed time = 0.000 seconds 00:09:34.916 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:09:34.916 00:09:34.916 00:09:34.916 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.916 http://cunit.sourceforge.net/ 00:09:34.916 00:09:34.916 00:09:34.916 Suite: nvme_ns_cmd 00:09:34.916 Test: nvme_poll_group_create_test ...passed 00:09:34.916 Test: nvme_poll_group_add_remove_test ...passed 00:09:34.916 Test: nvme_poll_group_process_completions ...passed 00:09:34.916 Test: nvme_poll_group_destroy_test ...passed 00:09:34.916 Test: nvme_poll_group_get_free_stats ...passed 00:09:34.916 00:09:34.916 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.916 suites 1 1 n/a 0 0 00:09:34.916 tests 5 5 5 0 0 00:09:34.916 asserts 75 75 75 0 n/a 00:09:34.916 00:09:34.916 Elapsed time = 0.000 seconds 00:09:34.916 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:09:34.916 00:09:34.916 00:09:34.916 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.916 http://cunit.sourceforge.net/ 00:09:34.916 00:09:34.916 00:09:34.916 Suite: nvme_quirks 00:09:34.916 Test: test_nvme_quirks_striping ...passed 00:09:34.916 00:09:34.916 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.916 suites 1 1 n/a 0 0 00:09:34.916 tests 1 1 1 0 0 00:09:34.916 asserts 5 5 5 0 n/a 00:09:34.916 00:09:34.916 Elapsed time = 0.000 seconds 00:09:34.916 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:09:34.916 00:09:34.916 00:09:34.916 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.916 http://cunit.sourceforge.net/ 00:09:34.916 00:09:34.916 00:09:34.916 Suite: nvme_tcp 00:09:34.916 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:09:34.916 Test: test_nvme_tcp_build_iovs ...passed 00:09:34.916 Test: test_nvme_tcp_build_sgl_request ...passed 00:09:34.916 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...[2024-05-15 02:10:22.749077] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8207aea18, and the iovcnt=16, remaining_size=28672 00:09:34.916 passed 00:09:34.916 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:09:34.916 Test: test_nvme_tcp_req_complete_safe ...passed 00:09:34.916 Test: test_nvme_tcp_req_get ...passed 00:09:34.916 Test: test_nvme_tcp_req_init ...passed 00:09:34.916 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:09:34.916 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:09:34.916 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:09:34.916 Test: test_nvme_tcp_alloc_reqs ...passed 
00:09:34.916 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-05-15 02:10:22.749307] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(6) to be set 00:09:34.916 [2024-05-15 02:10:22.749340] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.916 passed 00:09:34.916 Test: test_nvme_tcp_pdu_ch_handle ...passed 00:09:34.916 Test: test_nvme_tcp_qpair_connect_sock ...[2024-05-15 02:10:22.749360] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8207afd38 00:09:34.916 [2024-05-15 02:10:22.749370] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1227:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:09:34.916 [2024-05-15 02:10:22.749379] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.916 [2024-05-15 02:10:22.749388] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:09:34.916 [2024-05-15 02:10:22.749397] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.749406] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:09:34.917 [2024-05-15 02:10:22.749414] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.749456] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.749481] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.749507] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.749517] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.749525] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.749555] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:09:34.917 [2024-05-15 02:10:22.749565] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:34.917 [2024-05-15 02:10:22.875494] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:34.917 passed 00:09:34.917 Test: 
test_nvme_tcp_qpair_icreq_send ...passed 00:09:34.917 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:09:34.917 Test: test_nvme_tcp_icresp_handle ...[2024-05-15 02:10:22.875641] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8207b0170): PDU Sequence Error 00:09:34.917 [2024-05-15 02:10:22.875688] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:09:34.917 [2024-05-15 02:10:22.875723] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1575:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:09:34.917 [2024-05-15 02:10:22.875757] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.875790] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:09:34.917 passed 00:09:34.917 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:09:34.917 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:09:34.917 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:09:34.917 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-05-15 02:10:22.875823] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.875857] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207b05a8 is same with the state(0) to be set 00:09:34.917 [2024-05-15 02:10:22.875898] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8207b0170): PDU Sequence Error 00:09:34.917 [2024-05-15 02:10:22.875949] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x8207b05a8 00:09:34.917 [2024-05-15 02:10:22.876027] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8207ae308, errno=0, rc=0 00:09:34.917 [2024-05-15 02:10:22.876062] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207ae308 is same with the state(5) to be set 00:09:34.917 passed 00:09:34.917 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-05-15 02:10:22.876096] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8207ae308 is same with the state(5) to be set 00:09:34.917 [2024-05-15 02:10:22.876201] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8207ae308 (0): No error: 0 00:09:34.917 [2024-05-15 02:10:22.876237] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8207ae308 (0): No error: 0 00:09:35.176 [2024-05-15 02:10:22.943200] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:35.176 [2024-05-15 02:10:22.943276] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. 
Minimum queue size is 2. 00:09:35.176 passed 00:09:35.176 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:09:35.176 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:09:35.177 Test: test_nvme_tcp_ctrlr_construct ...[2024-05-15 02:10:22.943325] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.177 [2024-05-15 02:10:22.943336] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.177 passed 00:09:35.177 Test: test_nvme_tcp_qpair_submit_request ...passed 00:09:35.177 00:09:35.177 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.177 suites 1 1 n/a 0 0 00:09:35.177 tests 27 27 27 0 0 00:09:35.177 asserts 624 624 624 0 n/a 00:09:35.177 00:09:35.177 Elapsed time = 0.062 seconds 00:09:35.177 [2024-05-15 02:10:22.943381] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:35.177 [2024-05-15 02:10:22.943390] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:35.177 [2024-05-15 02:10:22.943403] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:09:35.177 [2024-05-15 02:10:22.943413] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:35.177 [2024-05-15 02:10:22.943428] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2375:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82b331000 with addr=192.168.1.78, port=23 00:09:35.177 [2024-05-15 02:10:22.943437] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:35.177 [2024-05-15 02:10:22.943461] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82b304180, and the iovcnt=1, remaining_size=1024 00:09:35.177 [2024-05-15 02:10:22.943487] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:09:35.177 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:09:35.177 00:09:35.177 00:09:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.177 http://cunit.sourceforge.net/ 00:09:35.177 00:09:35.177 00:09:35.177 Suite: nvme_transport 00:09:35.177 Test: test_nvme_get_transport ...passed 00:09:35.177 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:09:35.177 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:09:35.177 Test: test_nvme_transport_poll_group_add_remove ...passed 00:09:35.177 Test: test_ctrlr_get_memory_domains ...passed 00:09:35.177 00:09:35.177 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.177 suites 1 1 n/a 0 0 00:09:35.177 tests 5 5 5 0 0 00:09:35.177 asserts 28 28 28 0 n/a 00:09:35.177 00:09:35.177 Elapsed time = 0.000 seconds 00:09:35.177 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:09:35.177 00:09:35.177 00:09:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.177 http://cunit.sourceforge.net/ 00:09:35.177 
00:09:35.177 00:09:35.177 Suite: nvme_io_msg 00:09:35.177 Test: test_nvme_io_msg_send ...passed 00:09:35.177 Test: test_nvme_io_msg_process ...passed 00:09:35.177 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:09:35.177 00:09:35.177 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.177 suites 1 1 n/a 0 0 00:09:35.177 tests 3 3 3 0 0 00:09:35.177 asserts 56 56 56 0 n/a 00:09:35.177 00:09:35.177 Elapsed time = 0.000 seconds 00:09:35.177 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:09:35.177 00:09:35.177 00:09:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.177 http://cunit.sourceforge.net/ 00:09:35.177 00:09:35.177 00:09:35.177 Suite: nvme_pcie_common 00:09:35.177 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:09:35.177 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-05-15 02:10:22.961663] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:09:35.177 passed 00:09:35.177 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:09:35.177 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:09:35.177 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:09:35.177 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:09:35.177 00:09:35.177 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.177 suites 1 1 n/a 0 0 00:09:35.177 tests 6 6 6 0 0 00:09:35.177 asserts 148 148 148 0 n/a 00:09:35.177 00:09:35.177 Elapsed time = 0.000 seconds 00:09:35.177 [2024-05-15 02:10:22.961899] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:09:35.177 [2024-05-15 02:10:22.961913] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
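Every "Suite: ... / Test: ... passed / Run Summary" block in this log is standard CUnit 2.1-3 basic-mode output from a small standalone test binary; the SPDK binaries invoked above evidently use the same registry pattern. A minimal sketch of how such a binary is put together (the suite and test names here are made up for illustration):

    #include <CUnit/Basic.h>    /* CUnit 2.1-3 basic-mode runner */

    /* Illustrative test function; not taken from the SPDK sources. */
    static void test_example_passes(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        /* Registering a suite and its tests is what produces the
         * "Suite: ..." and "Test: ... passed" lines seen in the log. */
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example_passes", test_example_passes) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        /* Verbose basic mode prints per-test results and the Run Summary table. */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        CU_cleanup_registry();
        return CU_get_number_of_failures();
    }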
00:09:35.177 [2024-05-15 02:10:22.961925] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:09:35.177 [2024-05-15 02:10:22.962021] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.177 [2024-05-15 02:10:22.962030] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:35.177 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:09:35.177 00:09:35.177 00:09:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.177 http://cunit.sourceforge.net/ 00:09:35.177 00:09:35.177 00:09:35.177 Suite: nvme_fabric 00:09:35.177 Test: test_nvme_fabric_prop_set_cmd ...passed 00:09:35.177 Test: test_nvme_fabric_prop_get_cmd ...passed 00:09:35.177 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:09:35.177 Test: test_nvme_fabric_discover_probe ...passed 00:09:35.177 Test: test_nvme_fabric_qpair_connect ...passed 00:09:35.177 00:09:35.177 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.177 suites 1 1 n/a 0 0 00:09:35.177 tests 5 5 5 0 0 00:09:35.177 asserts 60 60 60 0 n/a 00:09:35.177 00:09:35.177 Elapsed time = 0.000 seconds 00:09:35.177 [2024-05-15 02:10:22.966569] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:09:35.177 02:10:22 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:09:35.177 00:09:35.177 00:09:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.177 http://cunit.sourceforge.net/ 00:09:35.177 00:09:35.177 00:09:35.177 Suite: nvme_opal 00:09:35.177 Test: test_opal_nvme_security_recv_send_done ...passed 00:09:35.177 Test: test_opal_add_short_atom_header ...[2024-05-15 02:10:22.970939] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
00:09:35.177 passed 00:09:35.177 00:09:35.177 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.177 suites 1 1 n/a 0 0 00:09:35.177 tests 2 2 2 0 0 00:09:35.177 asserts 22 22 22 0 n/a 00:09:35.177 00:09:35.177 Elapsed time = 0.000 seconds 00:09:35.177 00:09:35.177 real 0m0.501s 00:09:35.177 user 0m0.092s 00:09:35.177 sys 0m0.133s 00:09:35.177 02:10:22 unittest.unittest_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.177 02:10:22 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.177 ************************************ 00:09:35.177 END TEST unittest_nvme 00:09:35.177 ************************************ 00:09:35.177 02:10:22 unittest -- unit/unittest.sh@247 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:35.177 02:10:23 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:35.177 02:10:23 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:35.177 02:10:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:35.177 ************************************ 00:09:35.177 START TEST unittest_log 00:09:35.177 ************************************ 00:09:35.177 02:10:23 unittest.unittest_log -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:35.177 00:09:35.177 00:09:35.177 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.177 http://cunit.sourceforge.net/ 00:09:35.177 00:09:35.177 00:09:35.177 Suite: log 00:09:35.177 Test: log_test ...[2024-05-15 02:10:23.011275] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:09:35.177 passed 00:09:35.177 Test: deprecation ...[2024-05-15 02:10:23.011439] log_ut.c: 57:log_test: *DEBUG*: log test 00:09:35.177 log dump test: 00:09:35.177 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:09:35.177 spdk dump test: 00:09:35.177 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:09:35.177 spdk dump test: 00:09:35.177 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:09:35.177 00000010 65 20 63 68 61 72 73 e chars 00:09:36.113 passed 00:09:36.113 00:09:36.113 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.113 suites 1 1 n/a 0 0 00:09:36.113 tests 2 2 2 0 0 00:09:36.113 asserts 73 73 73 0 n/a 00:09:36.113 00:09:36.113 Elapsed time = 0.000 seconds 00:09:36.113 00:09:36.113 real 0m1.071s 00:09:36.113 user 0m0.004s 00:09:36.113 sys 0m0.004s 00:09:36.114 02:10:24 unittest.unittest_log -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.114 02:10:24 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:09:36.114 ************************************ 00:09:36.114 END TEST unittest_log 00:09:36.114 ************************************ 00:09:36.114 02:10:24 unittest -- unit/unittest.sh@248 -- # run_test unittest_lvol /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:36.114 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.114 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.114 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.375 ************************************ 00:09:36.375 START TEST unittest_lvol 00:09:36.375 ************************************ 00:09:36.375 02:10:24 unittest.unittest_lvol -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:36.375 00:09:36.375 00:09:36.375 CUnit - A unit testing framework for C - Version 2.1-3 
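The log_test output above dumps buffers in a conventional offset / hex-bytes / ASCII layout, 16 bytes per row (e.g. "00000000 6c 6f 67 20 64 75 6d 70 log dump"). A small self-contained sketch that reproduces that layout, purely for illustration; it is not the SPDK logging code:

    #include <stdio.h>
    #include <ctype.h>
    #include <string.h>

    /* Print buf as rows of: 8-digit hex offset, up to 16 hex bytes, ASCII. */
    static void dump_buf(FILE *fp, const void *buf, size_t len)
    {
        const unsigned char *p = buf;

        for (size_t off = 0; off < len; off += 16) {
            size_t n = (len - off < 16) ? len - off : 16;

            fprintf(fp, "%08zx ", off);
            for (size_t i = 0; i < n; i++) {
                fprintf(fp, "%02x ", p[off + i]);
            }
            for (size_t i = 0; i < n; i++) {
                fputc(isprint(p[off + i]) ? p[off + i] : '.', fp);
            }
            fputc('\n', fp);
        }
    }

    int main(void)
    {
        const char *msg = "spdk dump 16 more chars";

        dump_buf(stdout, msg, strlen(msg));
        return 0;
    }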
00:09:36.375 http://cunit.sourceforge.net/ 00:09:36.375 00:09:36.375 00:09:36.375 Suite: lvol 00:09:36.375 Test: lvs_init_unload_success ...passed 00:09:36.375 Test: lvs_init_destroy_success ...[2024-05-15 02:10:24.122394] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:09:36.375 [2024-05-15 02:10:24.122613] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:09:36.375 passed 00:09:36.375 Test: lvs_init_opts_success ...passed 00:09:36.375 Test: lvs_unload_lvs_is_null_fail ...passed 00:09:36.375 Test: lvs_names ...[2024-05-15 02:10:24.122638] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:09:36.375 [2024-05-15 02:10:24.122652] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:09:36.375 [2024-05-15 02:10:24.122662] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:09:36.375 [2024-05-15 02:10:24.122679] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:09:36.375 passed 00:09:36.375 Test: lvol_create_destroy_success ...passed 00:09:36.375 Test: lvol_create_fail ...passed 00:09:36.375 Test: lvol_destroy_fail ...passed 00:09:36.375 Test: lvol_close ...passed 00:09:36.375 Test: lvol_resize ...passed 00:09:36.375 Test: lvol_set_read_only ...passed 00:09:36.375 Test: test_lvs_load ...passed 00:09:36.375 Test: lvols_load ...[2024-05-15 02:10:24.122724] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:09:36.375 [2024-05-15 02:10:24.122736] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:09:36.375 [2024-05-15 02:10:24.122760] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:09:36.375 [2024-05-15 02:10:24.122780] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:09:36.375 [2024-05-15 02:10:24.122789] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:09:36.375 [2024-05-15 02:10:24.122833] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:09:36.375 [2024-05-15 02:10:24.122842] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:09:36.375 [2024-05-15 02:10:24.122867] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:36.375 passed 00:09:36.375 Test: lvol_open ...passed 00:09:36.375 Test: lvol_snapshot ...passed 00:09:36.375 Test: lvol_snapshot_fail ...passed 00:09:36.375 Test: lvol_clone ...passed 00:09:36.375 Test: lvol_clone_fail ...passed 00:09:36.375 Test: lvol_iter_clones ...passed 00:09:36.375 Test: lvol_refcnt ...passed 00:09:36.375 Test: lvol_names ...passed 00:09:36.375 Test: lvol_create_thin_provisioned ...passed 00:09:36.375 Test: lvol_rename ...[2024-05-15 02:10:24.122889] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:36.375 [2024-05-15 02:10:24.122949] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 
00:09:36.375 [2024-05-15 02:10:24.122992] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:09:36.375 [2024-05-15 02:10:24.123026] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 4a1b2d9d-1260-11ef-99fd-bfc7c66e2865 because it is still open 00:09:36.375 [2024-05-15 02:10:24.123042] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:09:36.375 [2024-05-15 02:10:24.123054] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:36.375 [2024-05-15 02:10:24.123071] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:09:36.375 [2024-05-15 02:10:24.123104] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:36.375 passed 00:09:36.375 Test: lvs_rename ...passed 00:09:36.375 Test: lvol_inflate ...passed 00:09:36.375 Test: lvol_decouple_parent ...passed 00:09:36.375 Test: lvol_get_xattr ...passed 00:09:36.375 Test: lvol_esnap_reload ...passed 00:09:36.375 Test: lvol_esnap_create_bad_args ...passed 00:09:36.375 Test: lvol_esnap_create_delete ...passed 00:09:36.375 Test: lvol_esnap_load_esnaps ...passed 00:09:36.375 Test: lvol_esnap_missing ...passed 00:09:36.375 Test: lvol_esnap_hotplug ... 00:09:36.375 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:09:36.375 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:09:36.376 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:09:36.376 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:09:36.376 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:09:36.376 [2024-05-15 02:10:24.123117] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:09:36.376 [2024-05-15 02:10:24.123139] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:09:36.376 [2024-05-15 02:10:24.123157] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:36.376 [2024-05-15 02:10:24.123175] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:36.376 [2024-05-15 02:10:24.123209] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:09:36.376 [2024-05-15 02:10:24.123219] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:09:36.376 [2024-05-15 02:10:24.123228] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:09:36.376 [2024-05-15 02:10:24.123244] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:36.376 [2024-05-15 02:10:24.123265] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:09:36.376 [2024-05-15 02:10:24.123291] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:09:36.376 [2024-05-15 02:10:24.123319] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:36.376 [2024-05-15 02:10:24.123328] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:36.376 [2024-05-15 02:10:24.123381] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4a1b3b73-1260-11ef-99fd-bfc7c66e2865: failed to create esnap bs_dev: error -12 00:09:36.376 [2024-05-15 02:10:24.123419] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4a1b3cd1-1260-11ef-99fd-bfc7c66e2865: failed to create esnap bs_dev: error -12 00:09:36.376 [2024-05-15 02:10:24.123441] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4a1b3dc9-1260-11ef-99fd-bfc7c66e2865: failed to create esnap bs_dev: error -12 00:09:36.376 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:09:36.376 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:09:36.376 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:09:36.376 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:09:36.376 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:09:36.376 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:09:36.376 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:09:36.376 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:09:36.376 passed 00:09:36.376 Test: lvol_get_by ...passed 00:09:36.376 Test: lvol_shallow_copy ...passed 00:09:36.376 00:09:36.376 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.376 suites 1 1 n/a 0 0 00:09:36.376 tests 35 35 35 0 0 00:09:36.376 asserts 1459 1459 1459 0 n/a 00:09:36.376 00:09:36.376 Elapsed time = 0.000 seconds 00:09:36.376 [2024-05-15 02:10:24.123611] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:09:36.376 [2024-05-15 02:10:24.123622] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 4a1b4471-1260-11ef-99fd-bfc7c66e2865 shallow copy, ext_dev must not be NULL 00:09:36.376 00:09:36.376 real 0m0.007s 00:09:36.376 user 0m0.000s 00:09:36.376 sys 0m0.008s 00:09:36.376 02:10:24 unittest.unittest_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.376 02:10:24 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:36.376 ************************************ 
00:09:36.376 END TEST unittest_lvol 00:09:36.376 ************************************ 00:09:36.376 02:10:24 unittest -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:36.376 02:10:24 unittest -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:36.376 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.376 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.376 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.376 ************************************ 00:09:36.376 START TEST unittest_nvme_rdma 00:09:36.376 ************************************ 00:09:36.376 02:10:24 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:36.376 00:09:36.376 00:09:36.376 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.376 http://cunit.sourceforge.net/ 00:09:36.376 00:09:36.376 00:09:36.376 Suite: nvme_rdma 00:09:36.376 Test: test_nvme_rdma_build_sgl_request ...[2024-05-15 02:10:24.171808] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:09:36.376 [2024-05-15 02:10:24.171989] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1633:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:36.376 [2024-05-15 02:10:24.172012] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1689:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:09:36.376 Test: test_nvme_rdma_build_contig_request ...[2024-05-15 02:10:24.172038] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1570:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:09:36.376 Test: test_nvme_rdma_create_reqs ...[2024-05-15 02:10:24.172071] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_create_rsps ...passed 00:09:36.376 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-05-15 02:10:24.172133] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:09:36.376 [2024-05-15 02:10:24.172159] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_poller_create ...[2024-05-15 02:10:24.172169] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
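The *ERROR* lines in this passing run (for example "Failed to create qpair with size 0. Minimum queue size is 2." just above) are expected output: the tests intentionally call into error paths and assert that bad input is rejected. A hedged sketch of that pattern, with a made-up helper standing in for the real queue-size check:

    #include <assert.h>

    #define MIN_QUEUE_SIZE 2u

    /* Hypothetical stand-in for the size check the errors above come from:
     * reject any queue size below the minimum of 2 (the condition that makes
     * the real code log "Minimum queue size is 2."). Returns 0 on success. */
    static int create_qpair_checked(unsigned int size)
    {
        if (size < MIN_QUEUE_SIZE) {
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* The unit test asserts that invalid sizes are rejected, which is why
         * the *ERROR* lines show up in a run that still reports "passed". */
        assert(create_qpair_checked(0) != 0);
        assert(create_qpair_checked(1) != 0);
        assert(create_qpair_checked(2) == 0);
        return 0;
    }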
00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-05-15 02:10:24.172196] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_ctrlr_construct ...passed 00:09:36.376 Test: test_nvme_rdma_req_put_and_get ...passed 00:09:36.376 Test: test_nvme_rdma_req_init ...passed 00:09:36.376 Test: test_nvme_rdma_validate_cm_event ...[2024-05-15 02:10:24.172258] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_qpair_init ...[2024-05-15 02:10:24.172269] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_qpair_submit_request ...passed 00:09:36.376 Test: test_nvme_rdma_memory_domain ...[2024-05-15 02:10:24.172304] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:09:36.376 passed 00:09:36.376 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:09:36.376 Test: test_rdma_get_memory_translation ...[2024-05-15 02:10:24.172321] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:09:36.376 passed 00:09:36.376 Test: test_get_rdma_qpair_from_wc ...[2024-05-15 02:10:24.172331] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:09:36.376 Test: test_nvme_rdma_poll_group_get_stats ...[2024-05-15 02:10:24.172369] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:36.376 [2024-05-15 02:10:24.172381] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:36.376 passed 00:09:36.376 Test: test_nvme_rdma_qpair_set_poller ...[2024-05-15 02:10:24.172405] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:09:36.376 [2024-05-15 02:10:24.172416] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:09:36.376 [2024-05-15 02:10:24.172426] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8206ebe28 on poll group 0x82b64b000 00:09:36.376 [2024-05-15 02:10:24.172435] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:09:36.376 [2024-05-15 02:10:24.172445] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:09:36.376 [2024-05-15 02:10:24.172454] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8206ebe28 on poll group 0x82b64b000 00:09:36.376 passed 00:09:36.376 00:09:36.376 [2024-05-15 02:10:24.172523] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:09:36.376 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.376 suites 1 1 n/a 0 0 00:09:36.376 tests 22 22 22 0 0 00:09:36.376 asserts 412 412 412 0 n/a 00:09:36.376 00:09:36.376 Elapsed time = 0.000 seconds 00:09:36.376 00:09:36.376 real 0m0.007s 00:09:36.376 user 0m0.006s 00:09:36.376 sys 0m0.000s 00:09:36.376 02:10:24 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.376 02:10:24 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:36.376 ************************************ 00:09:36.376 END TEST unittest_nvme_rdma 00:09:36.376 ************************************ 00:09:36.376 02:10:24 unittest -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:36.376 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.376 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.376 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.376 ************************************ 00:09:36.376 START TEST unittest_nvmf_transport 00:09:36.377 ************************************ 00:09:36.377 02:10:24 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:36.377 00:09:36.377 00:09:36.377 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.377 http://cunit.sourceforge.net/ 00:09:36.377 00:09:36.377 00:09:36.377 Suite: nvmf 00:09:36.377 Test: test_spdk_nvmf_transport_create ...[2024-05-15 02:10:24.210037] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:09:36.377 [2024-05-15 02:10:24.210210] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:09:36.377 passed 00:09:36.377 Test: test_nvmf_transport_poll_group_create ...passed 00:09:36.377 Test: test_spdk_nvmf_transport_opts_init ...passed 00:09:36.377 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:09:36.377 00:09:36.377 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.377 suites 1 1 n/a 0 0 00:09:36.377 tests 4 4 4 0 0 00:09:36.377 asserts 49 49 49 0 n/a 00:09:36.377 00:09:36.377 Elapsed time = 0.000 seconds 00:09:36.377 [2024-05-15 02:10:24.210223] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:09:36.377 [2024-05-15 02:10:24.210246] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:09:36.377 [2024-05-15 02:10:24.210268] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:09:36.377 [2024-05-15 02:10:24.210278] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:09:36.377 [2024-05-15 02:10:24.210286] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:09:36.377 00:09:36.377 real 0m0.006s 00:09:36.377 user 0m0.005s 00:09:36.377 sys 0m0.005s 00:09:36.377 02:10:24 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.377 ************************************ 00:09:36.377 END TEST unittest_nvmf_transport 00:09:36.377 ************************************ 00:09:36.377 02:10:24 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:09:36.377 02:10:24 unittest -- unit/unittest.sh@252 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:36.377 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.377 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.377 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.377 ************************************ 00:09:36.377 START TEST unittest_rdma 00:09:36.377 ************************************ 00:09:36.377 02:10:24 unittest.unittest_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:36.377 00:09:36.377 00:09:36.377 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.377 http://cunit.sourceforge.net/ 00:09:36.377 00:09:36.377 00:09:36.377 Suite: rdma_common 00:09:36.377 Test: test_spdk_rdma_pd ...passed 00:09:36.377 00:09:36.377 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.377 suites 1 1 n/a 0 0 00:09:36.377 tests 1 1 1 0 0 00:09:36.377 asserts 31 31 31 0 n/a 00:09:36.377 00:09:36.377 Elapsed time = 0.000 seconds 00:09:36.377 [2024-05-15 02:10:24.251610] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:36.377 [2024-05-15 02:10:24.251801] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:36.377 00:09:36.377 real 0m0.005s 00:09:36.377 user 0m0.005s 00:09:36.377 sys 0m0.000s 00:09:36.377 02:10:24 
unittest.unittest_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.377 02:10:24 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:36.377 ************************************ 00:09:36.377 END TEST unittest_rdma 00:09:36.377 ************************************ 00:09:36.377 02:10:24 unittest -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:36.377 02:10:24 unittest -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:09:36.377 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.377 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.377 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.377 ************************************ 00:09:36.377 START TEST unittest_nvmf 00:09:36.377 ************************************ 00:09:36.377 02:10:24 unittest.unittest_nvmf -- common/autotest_common.sh@1121 -- # unittest_nvmf 00:09:36.377 02:10:24 unittest.unittest_nvmf -- unit/unittest.sh@106 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:09:36.377 00:09:36.377 00:09:36.377 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.377 http://cunit.sourceforge.net/ 00:09:36.377 00:09:36.377 00:09:36.377 Suite: nvmf 00:09:36.377 Test: test_get_log_page ...[2024-05-15 02:10:24.298809] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:09:36.377 passed 00:09:36.377 Test: test_process_fabrics_cmd ...[2024-05-15 02:10:24.299064] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:09:36.377 passed 00:09:36.377 Test: test_connect ...[2024-05-15 02:10:24.299176] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1006:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:09:36.377 [2024-05-15 02:10:24.299197] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 869:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:09:36.377 [2024-05-15 02:10:24.299212] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1045:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:09:36.377 [2024-05-15 02:10:24.299226] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:09:36.377 passed 00:09:36.377 Test: test_get_ns_id_desc_list ...passed 00:09:36.377 Test: test_identify_ns ...[2024-05-15 02:10:24.299240] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 880:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:09:36.377 [2024-05-15 02:10:24.299254] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 888:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:09:36.377 [2024-05-15 02:10:24.299273] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:09:36.377 [2024-05-15 02:10:24.299287] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 920:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:09:36.377 [2024-05-15 02:10:24.299305] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:09:36.377 [2024-05-15 02:10:24.299320] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 670:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:09:36.377 [2024-05-15 02:10:24.299346] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:09:36.377 [2024-05-15 02:10:24.299367] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 683:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:09:36.377 [2024-05-15 02:10:24.299382] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 690:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:09:36.377 [2024-05-15 02:10:24.299397] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 714:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:09:36.377 [2024-05-15 02:10:24.299423] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 293:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:09:36.377 [2024-05-15 02:10:24.299444] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:09:36.377 [2024-05-15 02:10:24.299460] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:09:36.377 [2024-05-15 02:10:24.299519] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.377 passed 00:09:36.377 Test: test_identify_ns_iocs_specific ...[2024-05-15 02:10:24.299587] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:09:36.377 [2024-05-15 02:10:24.299618] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:09:36.377 [2024-05-15 02:10:24.299652] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.377 passed 00:09:36.377 Test: test_reservation_write_exclusive ...passed 00:09:36.377 Test: test_reservation_exclusive_access ...passed 00:09:36.377 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:09:36.377 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:09:36.377 Test: test_reservation_notification_log_page ...passed 00:09:36.377 Test: test_get_dif_ctx ...passed 00:09:36.377 Test: test_set_get_features ...passed 00:09:36.377 Test: test_identify_ctrlr ...passed 00:09:36.377 Test: test_identify_ctrlr_iocs_specific ...[2024-05-15 02:10:24.299719] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.377 [2024-05-15 02:10:24.299836] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:36.377 [2024-05-15 02:10:24.299851] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:36.377 [2024-05-15 02:10:24.299863] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1653:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:09:36.377 [2024-05-15 02:10:24.299876] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1729:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:09:36.377 passed 00:09:36.377 Test: test_custom_admin_cmd ...passed 00:09:36.377 Test: test_fused_compare_and_write ...passed 00:09:36.377 Test: test_multi_async_event_reqs ...passed 00:09:36.377 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:09:36.377 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:09:36.377 Test: test_multi_async_events ...passed 00:09:36.377 Test: test_rae ...passed 00:09:36.377 Test: test_nvmf_ctrlr_create_destruct ...passed 00:09:36.377 Test: test_nvmf_ctrlr_use_zcopy ...[2024-05-15 02:10:24.299984] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4212:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:09:36.377 [2024-05-15 02:10:24.299999] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4201:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:36.377 [2024-05-15 02:10:24.300013] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4219:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:36.377 passed 00:09:36.377 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:09:36.377 Test: test_zcopy_read ...passed 00:09:36.377 Test: test_zcopy_write ...passed 00:09:36.377 Test: test_nvmf_property_set ...passed 00:09:36.377 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:09:36.378 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:09:36.378 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:09:36.378 Test: test_nvmf_check_qpair_active ...passed[2024-05-15 02:10:24.300105] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:09:36.378 [2024-05-15 02:10:24.300123] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:09:36.378 [2024-05-15 02:10:24.300161] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:36.378 [2024-05-15 02:10:24.300175] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:36.378 [2024-05-15 02:10:24.300190] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1963:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:09:36.378 [2024-05-15 02:10:24.300203] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:09:36.378 [2024-05-15 02:10:24.300216] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1981:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:36.378 [2024-05-15 02:10:24.300248] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:09:36.378 [2024-05-15 02:10:24.300261] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4692:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:09:36.378 [2024-05-15 02:10:24.300274] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:09:36.378 [2024-05-15 
02:10:24.300287] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:09:36.378 [2024-05-15 02:10:24.300300] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:09:36.378 00:09:36.378 00:09:36.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.378 suites 1 1 n/a 0 0 00:09:36.378 tests 32 32 32 0 0 00:09:36.378 asserts 977 977 977 0 n/a 00:09:36.378 00:09:36.378 Elapsed time = 0.000 seconds 00:09:36.378 02:10:24 unittest.unittest_nvmf -- unit/unittest.sh@107 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:09:36.378 00:09:36.378 00:09:36.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.378 http://cunit.sourceforge.net/ 00:09:36.378 00:09:36.378 00:09:36.378 Suite: nvmf 00:09:36.378 Test: test_get_rw_params ...passed 00:09:36.378 Test: test_get_rw_ext_params ...passed 00:09:36.378 Test: test_lba_in_range ...passed 00:09:36.378 Test: test_get_dif_ctx ...passed 00:09:36.378 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:09:36.378 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:09:36.378 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:09:36.378 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-05-15 02:10:24.307466] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:09:36.378 [2024-05-15 02:10:24.307711] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:09:36.378 [2024-05-15 02:10:24.307734] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:09:36.378 [2024-05-15 02:10:24.307756] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:09:36.378 [2024-05-15 02:10:24.307772] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:09:36.378 [2024-05-15 02:10:24.307791] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:09:36.378 [2024-05-15 02:10:24.307806] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:09:36.378 passed 00:09:36.378 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:09:36.378 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:09:36.378 00:09:36.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.378 suites 1 1 n/a 0 0 00:09:36.378 tests 10 10 10 0 0 00:09:36.378 asserts 159 159 159 0 n/a 00:09:36.378 00:09:36.378 Elapsed time = 0.000 seconds 00:09:36.378 [2024-05-15 02:10:24.307832] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:09:36.378 [2024-05-15 02:10:24.307847] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:09:36.378 02:10:24 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:09:36.378 00:09:36.378 00:09:36.378 CUnit - A unit testing framework for C - 
Version 2.1-3 00:09:36.378 http://cunit.sourceforge.net/ 00:09:36.378 00:09:36.378 00:09:36.378 Suite: nvmf 00:09:36.378 Test: test_discovery_log ...passed 00:09:36.378 Test: test_discovery_log_with_filters ...passed 00:09:36.378 00:09:36.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.378 suites 1 1 n/a 0 0 00:09:36.378 tests 2 2 2 0 0 00:09:36.378 asserts 238 238 238 0 n/a 00:09:36.378 00:09:36.378 Elapsed time = 0.000 seconds 00:09:36.378 02:10:24 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:09:36.378 00:09:36.378 00:09:36.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.378 http://cunit.sourceforge.net/ 00:09:36.378 00:09:36.378 00:09:36.378 Suite: nvmf 00:09:36.378 Test: nvmf_test_create_subsystem ...[2024-05-15 02:10:24.319574] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:09:36.378 [2024-05-15 02:10:24.319712] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:09:36.378 [2024-05-15 02:10:24.319736] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:09:36.378 [2024-05-15 02:10:24.319747] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:09:36.378 [2024-05-15 02:10:24.319757] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:09:36.378 [2024-05-15 02:10:24.319765] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:09:36.378 [2024-05-15 02:10:24.319775] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:09:36.378 [2024-05-15 02:10:24.319783] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:09:36.378 [2024-05-15 02:10:24.319793] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:09:36.378 [2024-05-15 02:10:24.319802] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:09:36.378 [2024-05-15 02:10:24.319811] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:09:36.378 [2024-05-15 02:10:24.319820] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:09:36.378 [2024-05-15 02:10:24.319834] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:09:36.378 [2024-05-15 02:10:24.319844] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:09:36.378 passed 00:09:36.378 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:09:36.378 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:09:36.378 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:09:36.378 Test: test_spdk_nvmf_ns_visible ...passed 00:09:36.378 Test: test_reservation_register ...passed 00:09:36.378 Test: test_reservation_register_with_ptpl ...[2024-05-15 02:10:24.319870] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:09:36.378 [2024-05-15 02:10:24.319880] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:09:36.378 [2024-05-15 02:10:24.319891] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:09:36.378 [2024-05-15 02:10:24.319900] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:09:36.378 [2024-05-15 02:10:24.319910] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:36.378 [2024-05-15 02:10:24.319918] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:36.378 [2024-05-15 02:10:24.319928] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:36.378 [2024-05-15 02:10:24.319936] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:36.378 [2024-05-15 02:10:24.319978] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:09:36.378 [2024-05-15 02:10:24.319989] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2010:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 
00:09:36.378 [2024-05-15 02:10:24.320011] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2139:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:09:36.378 [2024-05-15 02:10:24.320033] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:09:36.379 [2024-05-15 02:10:24.320085] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 [2024-05-15 02:10:24.320100] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3135:nvmf_ns_reservation_register: *ERROR*: No registrant 00:09:36.379 passed 00:09:36.379 Test: test_reservation_acquire_preempt_1 ...passed 00:09:36.379 Test: test_reservation_acquire_release_with_ptpl ...passed 00:09:36.379 Test: test_reservation_release ...passed 00:09:36.379 Test: test_reservation_unregister_notification ...passed 00:09:36.379 Test: test_reservation_release_notification ...passed 00:09:36.379 Test: test_reservation_release_notification_write_exclusive ...[2024-05-15 02:10:24.320245] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 [2024-05-15 02:10:24.320368] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 [2024-05-15 02:10:24.320387] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 [2024-05-15 02:10:24.320408] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 [2024-05-15 02:10:24.320423] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 passed 00:09:36.379 Test: test_reservation_clear_notification ...passed 00:09:36.379 Test: test_reservation_preempt_notification ...passed 00:09:36.379 Test: test_spdk_nvmf_ns_event ...passed 00:09:36.379 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:09:36.379 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:09:36.379 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:09:36.379 Test: test_nvmf_ns_reservation_report ...passed 00:09:36.379 Test: test_nvmf_nqn_is_valid ...passed 00:09:36.379 Test: test_nvmf_ns_reservation_restore ...passed 00:09:36.379 Test: test_nvmf_subsystem_state_change ...passed 00:09:36.379 Test: test_nvmf_reservation_custom_ops ...passed 00:09:36.379 00:09:36.379 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.379 suites 1 1 n/a 0 0 00:09:36.379 tests 24 24 24 0 0 00:09:36.379 asserts 499 499 499 0 n/a 00:09:36.379 00:09:36.379 Elapsed time = 0.000 seconds 00:09:36.379 [2024-05-15 02:10:24.320439] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 [2024-05-15 02:10:24.320454] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:36.379 [2024-05-15 02:10:24.320532] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 
00:09:36.379 [2024-05-15 02:10:24.320545] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:09:36.379 [2024-05-15 02:10:24.320560] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3441:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:09:36.379 [2024-05-15 02:10:24.320582] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:09:36.379 [2024-05-15 02:10:24.320592] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:4a395277-1260-11ef-99fd-bfc7c66e286": uuid is not the correct length 00:09:36.379 [2024-05-15 02:10:24.320601] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:09:36.379 [2024-05-15 02:10:24.320628] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2634:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:09:36.379 02:10:24 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:09:36.379 00:09:36.379 00:09:36.379 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.379 http://cunit.sourceforge.net/ 00:09:36.379 00:09:36.379 00:09:36.379 Suite: nvmf 00:09:36.379 Test: test_nvmf_tcp_create ...[2024-05-15 02:10:24.330040] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:09:36.379 passed 00:09:36.379 Test: test_nvmf_tcp_destroy ...passed 00:09:36.379 Test: test_nvmf_tcp_poll_group_create ...passed 00:09:36.379 Test: test_nvmf_tcp_send_c2h_data ...passed 00:09:36.379 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:09:36.379 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:09:36.379 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:09:36.379 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-05-15 02:10:24.340615] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 passed 00:09:36.379 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-05-15 02:10:24.340645] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340656] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340665] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340673] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 passed 00:09:36.379 Test: test_nvmf_tcp_icreq_handle ...passed 00:09:36.379 Test: test_nvmf_tcp_check_xfer_type ...passed 00:09:36.379 Test: test_nvmf_tcp_invalid_sgl ...passed 00:09:36.379 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-05-15 02:10:24.340704] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:36.379 [2024-05-15 02:10:24.340712] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340720] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d03f0 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340727] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:36.379 [2024-05-15 02:10:24.340735] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d03f0 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340743] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340750] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d03f0 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340759] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340766] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d03f0 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340783] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2509:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:09:36.379 [2024-05-15 02:10:24.340791] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340799] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d03f0 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340809] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2240:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x8210cfc78 00:09:36.379 [2024-05-15 02:10:24.340821] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340838] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340853] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2299:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x8210d04e8 00:09:36.379 [2024-05-15 02:10:24.340873] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340890] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340908] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2250:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:09:36.379 [2024-05-15 02:10:24.340928] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.340944] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.340961] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2289:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:09:36.379 [2024-05-15 02:10:24.340981] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.341000] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.341020] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.341036] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.341055] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.341076] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.341094] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 [2024-05-15 02:10:24.341111] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.379 [2024-05-15 02:10:24.341132] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.379 passed 00:09:36.379 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-05-15 02:10:24.341150] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.380 [2024-05-15 02:10:24.341172] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.380 [2024-05-15 02:10:24.341192] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.380 [2024-05-15 02:10:24.341210] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:09:36.380 [2024-05-15 02:10:24.341225] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8210d04e8 is same with the state(5) to be set 00:09:36.380 passed 00:09:36.380 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:09:36.380 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-05-15 02:10:24.348394] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 
00:09:36.380 [2024-05-15 02:10:24.348430] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:09:36.380 [2024-05-15 02:10:24.348600] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:09:36.380 passed 00:09:36.380 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-05-15 02:10:24.348615] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:09:36.380 passed 00:09:36.380 00:09:36.380 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.380 suites 1 1 n/a 0 0 00:09:36.380 tests 17 17 17 0 0 00:09:36.380 asserts 222 222 222 0 n/a 00:09:36.380 00:09:36.380 Elapsed time = 0.023 seconds 00:09:36.380 [2024-05-15 02:10:24.348714] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:09:36.380 [2024-05-15 02:10:24.348729] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:09:36.380 02:10:24 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:09:36.380 00:09:36.380 00:09:36.380 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.380 http://cunit.sourceforge.net/ 00:09:36.380 00:09:36.380 00:09:36.380 Suite: nvmf 00:09:36.380 Test: test_nvmf_tgt_create_poll_group ...passed 00:09:36.380 00:09:36.380 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.380 suites 1 1 n/a 0 0 00:09:36.380 tests 1 1 1 0 0 00:09:36.380 asserts 17 17 17 0 n/a 00:09:36.380 00:09:36.380 Elapsed time = 0.000 seconds 00:09:36.380 00:09:36.380 real 0m0.071s 00:09:36.380 user 0m0.028s 00:09:36.380 sys 0m0.041s 00:09:36.380 02:10:24 unittest.unittest_nvmf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.380 02:10:24 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:09:36.380 ************************************ 00:09:36.380 END TEST unittest_nvmf 00:09:36.380 ************************************ 00:09:36.641 02:10:24 unittest -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:36.641 02:10:24 unittest -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:36.641 02:10:24 unittest -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:36.641 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.641 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.641 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.641 ************************************ 00:09:36.641 START TEST unittest_nvmf_rdma 00:09:36.641 ************************************ 00:09:36.641 02:10:24 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:36.641 00:09:36.641 00:09:36.641 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.641 http://cunit.sourceforge.net/ 00:09:36.641 00:09:36.641 00:09:36.641 Suite: nvmf 00:09:36.641 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-05-15 02:10:24.412419] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1861:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:09:36.641 [2024-05-15 02:10:24.412910] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1911:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:09:36.642 [2024-05-15 02:10:24.412962] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1911:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:09:36.642 passed 00:09:36.642 Test: test_spdk_nvmf_rdma_request_process ...passed 00:09:36.642 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:09:36.642 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:09:36.642 Test: test_nvmf_rdma_opts_init ...passed 00:09:36.642 Test: test_nvmf_rdma_request_free_data ...passed 00:09:36.642 Test: test_nvmf_rdma_resources_create ...passed 00:09:36.642 Test: test_nvmf_rdma_qpair_compare ...passed 00:09:36.642 Test: test_nvmf_rdma_resize_cq ...[2024-05-15 02:10:24.413944] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 950:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:09:36.642 Using CQ of insufficient size may lead to CQ overrun 00:09:36.642 passed 00:09:36.642 00:09:36.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.642 suites 1 1 n/a 0 0 00:09:36.642 tests 9 9 9 0 0 00:09:36.642 asserts 579 579 579 0 n/a 00:09:36.642 00:09:36.642 Elapsed time = 0.000 seconds 00:09:36.642 [2024-05-15 02:10:24.413966] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:09:36.642 [2024-05-15 02:10:24.414012] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:09:36.642 00:09:36.642 real 0m0.007s 00:09:36.642 user 0m0.006s 00:09:36.642 sys 0m0.005s 00:09:36.642 02:10:24 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.642 02:10:24 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:36.642 ************************************ 00:09:36.642 END TEST unittest_nvmf_rdma 00:09:36.642 ************************************ 00:09:36.642 02:10:24 unittest -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:36.642 02:10:24 unittest -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:09:36.642 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.642 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.642 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.642 ************************************ 00:09:36.642 START TEST unittest_scsi 00:09:36.642 ************************************ 00:09:36.642 02:10:24 unittest.unittest_scsi -- common/autotest_common.sh@1121 -- # unittest_scsi 00:09:36.642 02:10:24 unittest.unittest_scsi -- unit/unittest.sh@115 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:09:36.642 00:09:36.642 00:09:36.642 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.642 http://cunit.sourceforge.net/ 00:09:36.642 00:09:36.642 00:09:36.642 Suite: dev_suite 00:09:36.642 Test: dev_destruct_null_dev ...passed 00:09:36.642 Test: dev_destruct_zero_luns ...passed 00:09:36.642 Test: dev_destruct_null_lun ...passed 
00:09:36.642 Test: dev_destruct_success ...passed 00:09:36.642 Test: dev_construct_num_luns_zero ...[2024-05-15 02:10:24.461454] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:09:36.642 passed 00:09:36.642 Test: dev_construct_no_lun_zero ...passed 00:09:36.642 Test: dev_construct_null_lun ...passed 00:09:36.642 Test: dev_construct_name_too_long ...passed 00:09:36.642 Test: dev_construct_success ...passed 00:09:36.642 Test: dev_construct_success_lun_zero_not_first ...passed 00:09:36.642 Test: dev_queue_mgmt_task_success ...passed 00:09:36.642 Test: dev_queue_task_success ...passed 00:09:36.642 Test: dev_stop_success ...passed 00:09:36.642 Test: dev_add_port_max_ports ...passed 00:09:36.642 Test: dev_add_port_construct_failure1 ...passed 00:09:36.642 Test: dev_add_port_construct_failure2 ...passed 00:09:36.642 Test: dev_add_port_success1 ...[2024-05-15 02:10:24.461678] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:09:36.642 [2024-05-15 02:10:24.461701] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:09:36.642 [2024-05-15 02:10:24.461720] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:09:36.642 [2024-05-15 02:10:24.461770] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:09:36.642 [2024-05-15 02:10:24.461789] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:09:36.642 [2024-05-15 02:10:24.461806] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:09:36.642 passed 00:09:36.642 Test: dev_add_port_success2 ...passed 00:09:36.642 Test: dev_add_port_success3 ...passed 00:09:36.642 Test: dev_find_port_by_id_num_ports_zero ...passed 00:09:36.642 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:09:36.642 Test: dev_find_port_by_id_success ...passed 00:09:36.642 Test: dev_add_lun_bdev_not_found ...passed 00:09:36.642 Test: dev_add_lun_no_free_lun_id ...passed 00:09:36.642 Test: dev_add_lun_success1 ...passed 00:09:36.642 Test: dev_add_lun_success2 ...passed 00:09:36.642 Test: dev_check_pending_tasks ...passed 00:09:36.642 Test: dev_iterate_luns ...passed 00:09:36.642 Test: dev_find_free_lun ...passed 00:09:36.642 00:09:36.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.642 suites 1 1 n/a 0 0 00:09:36.642 tests 29 29 29 0 0 00:09:36.642 asserts 97 97 97 0 n/a 00:09:36.642 00:09:36.642 Elapsed time = 0.000 seconds 00:09:36.642 [2024-05-15 02:10:24.462039] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:09:36.642 02:10:24 unittest.unittest_scsi -- unit/unittest.sh@116 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:09:36.642 00:09:36.642 00:09:36.642 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.642 http://cunit.sourceforge.net/ 00:09:36.642 00:09:36.642 00:09:36.642 Suite: lun_suite 00:09:36.642 Test: 
lun_task_mgmt_execute_abort_task_not_supported ...[2024-05-15 02:10:24.469480] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:09:36.642 passed 00:09:36.642 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:09:36.642 Test: lun_task_mgmt_execute_lun_reset ...passed 00:09:36.642 Test: lun_task_mgmt_execute_target_reset ...passed 00:09:36.642 Test: lun_task_mgmt_execute_invalid_case ...passed 00:09:36.642 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:09:36.642 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:09:36.642 Test: lun_append_task_null_lun_not_supported ...passed 00:09:36.642 Test: lun_execute_scsi_task_pending ...passed 00:09:36.642 Test: lun_execute_scsi_task_complete ...passed 00:09:36.642 Test: lun_execute_scsi_task_resize ...passed 00:09:36.642 Test: lun_destruct_success ...passed 00:09:36.642 Test: lun_construct_null_ctx ...[2024-05-15 02:10:24.469679] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:09:36.642 [2024-05-15 02:10:24.469711] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:09:36.642 [2024-05-15 02:10:24.469762] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:09:36.642 passed 00:09:36.642 Test: lun_construct_success ...passed 00:09:36.642 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:09:36.642 Test: lun_reset_task_suspend_scsi_task ...passed 00:09:36.642 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:09:36.642 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:09:36.642 00:09:36.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.642 suites 1 1 n/a 0 0 00:09:36.642 tests 18 18 18 0 0 00:09:36.642 asserts 153 153 153 0 n/a 00:09:36.642 00:09:36.642 Elapsed time = 0.000 seconds 00:09:36.642 02:10:24 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:09:36.642 00:09:36.642 00:09:36.642 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.642 http://cunit.sourceforge.net/ 00:09:36.642 00:09:36.642 00:09:36.642 Suite: scsi_suite 00:09:36.642 Test: scsi_init ...passed 00:09:36.642 00:09:36.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.642 suites 1 1 n/a 0 0 00:09:36.642 tests 1 1 1 0 0 00:09:36.642 asserts 1 1 1 0 n/a 00:09:36.642 00:09:36.642 Elapsed time = 0.000 seconds 00:09:36.642 02:10:24 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:09:36.642 00:09:36.642 00:09:36.642 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.642 http://cunit.sourceforge.net/ 00:09:36.642 00:09:36.642 00:09:36.642 Suite: translation_suite 00:09:36.642 Test: mode_select_6_test ...passed 00:09:36.642 Test: mode_select_6_test2 ...passed 00:09:36.642 Test: mode_sense_6_test ...passed 00:09:36.642 Test: mode_sense_10_test ...passed 00:09:36.642 Test: inquiry_evpd_test ...passed 00:09:36.642 Test: inquiry_standard_test ...passed 00:09:36.642 Test: inquiry_overflow_test ...passed 00:09:36.642 Test: task_complete_test ...passed 00:09:36.642 Test: lba_range_test ...passed 00:09:36.642 Test: xfer_len_test ...passed 00:09:36.642 Test: xfer_test ...passed 00:09:36.642 Test: scsi_name_padding_test ...passed 00:09:36.642 
Test: get_dif_ctx_test ...passed 00:09:36.642 Test: unmap_split_test ...passed 00:09:36.642 00:09:36.642 [2024-05-15 02:10:24.479730] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:09:36.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.642 suites 1 1 n/a 0 0 00:09:36.642 tests 14 14 14 0 0 00:09:36.642 asserts 1205 1205 1205 0 n/a 00:09:36.642 00:09:36.642 Elapsed time = 0.000 seconds 00:09:36.642 02:10:24 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:09:36.642 00:09:36.642 00:09:36.642 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.642 http://cunit.sourceforge.net/ 00:09:36.642 00:09:36.642 00:09:36.642 Suite: reservation_suite 00:09:36.642 Test: test_reservation_register ...passed 00:09:36.643 Test: test_reservation_reserve ...[2024-05-15 02:10:24.484105] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:36.643 [2024-05-15 02:10:24.484275] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:36.643 [2024-05-15 02:10:24.484289] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:09:36.643 [2024-05-15 02:10:24.484300] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:09:36.643 passed 00:09:36.643 Test: test_reservation_preempt_non_all_regs ...passed 00:09:36.643 Test: test_reservation_preempt_all_regs ...passed 00:09:36.643 Test: test_reservation_cmds_conflict ...passed 00:09:36.643 Test: test_scsi2_reserve_release ...passed 00:09:36.643 Test: test_pr_with_scsi2_reserve_release ...passed 00:09:36.643 00:09:36.643 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.643 suites 1 1 n/a 0 0 00:09:36.643 tests 7 7 7 0 0 00:09:36.643 asserts 257 257 257 0 n/a 00:09:36.643 00:09:36.643 Elapsed time = 0.000 seconds 00:09:36.643 [2024-05-15 02:10:24.484315] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:36.643 [2024-05-15 02:10:24.484325] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:09:36.643 [2024-05-15 02:10:24.484345] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:36.643 [2024-05-15 02:10:24.484359] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:36.643 [2024-05-15 02:10:24.484369] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:09:36.643 [2024-05-15 02:10:24.484378] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:36.643 [2024-05-15 02:10:24.484387] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:36.643 [2024-05-15 02:10:24.484395] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: 
*ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:36.643 [2024-05-15 02:10:24.484403] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:36.643 [2024-05-15 02:10:24.484422] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:36.643 00:09:36.643 real 0m0.029s 00:09:36.643 user 0m0.011s 00:09:36.643 sys 0m0.018s 00:09:36.643 02:10:24 unittest.unittest_scsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.643 02:10:24 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 ************************************ 00:09:36.643 END TEST unittest_scsi 00:09:36.643 ************************************ 00:09:36.643 02:10:24 unittest -- unit/unittest.sh@276 -- # uname -s 00:09:36.643 02:10:24 unittest -- unit/unittest.sh@276 -- # '[' FreeBSD = Linux ']' 00:09:36.643 02:10:24 unittest -- unit/unittest.sh@279 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 ************************************ 00:09:36.643 START TEST unittest_thread 00:09:36.643 ************************************ 00:09:36.643 02:10:24 unittest.unittest_thread -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:36.643 00:09:36.643 00:09:36.643 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.643 http://cunit.sourceforge.net/ 00:09:36.643 00:09:36.643 00:09:36.643 Suite: io_channel 00:09:36.643 Test: thread_alloc ...passed 00:09:36.643 Test: thread_send_msg ...passed 00:09:36.643 Test: thread_poller ...passed 00:09:36.643 Test: poller_pause ...passed 00:09:36.643 Test: thread_for_each ...passed 00:09:36.643 Test: for_each_channel_remove ...passed 00:09:36.643 Test: for_each_channel_unreg ...[2024-05-15 02:10:24.539086] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x8204fcc84 already registered (old:0x82cd44000 new:0x82cd44180) 00:09:36.643 passed 00:09:36.643 Test: thread_name ...passed 00:09:36.643 Test: channel ...[2024-05-15 02:10:24.539826] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x2276c8 00:09:36.643 passed 00:09:36.643 Test: channel_destroy_races ...passed 00:09:36.643 Test: thread_exit_test ...[2024-05-15 02:10:24.540577] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 636:thread_exit: *ERROR*: thread 0x82cd09a80 got timeout, and move it to the exited state forcefully 00:09:36.643 passed 00:09:36.643 Test: thread_update_stats_test ...passed 00:09:36.643 Test: nested_channel ...passed 00:09:36.643 Test: device_unregister_and_thread_exit_race ...passed 00:09:36.643 Test: cache_closest_timed_poller ...passed 00:09:36.643 Test: multi_timed_pollers_have_same_expiration ...passed 00:09:36.643 Test: io_device_lookup ...passed 00:09:36.643 Test: spdk_spin ...[2024-05-15 02:10:24.541973] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:36.643 
[2024-05-15 02:10:24.541993] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x8204fcc80 00:09:36.643 [2024-05-15 02:10:24.542004] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:36.643 [2024-05-15 02:10:24.542122] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:36.643 [2024-05-15 02:10:24.542131] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x8204fcc80 00:09:36.643 [2024-05-15 02:10:24.542140] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:36.643 [2024-05-15 02:10:24.542149] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x8204fcc80 00:09:36.643 [2024-05-15 02:10:24.542157] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:36.643 [2024-05-15 02:10:24.542166] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x8204fcc80 00:09:36.643 [2024-05-15 02:10:24.542174] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:09:36.643 [2024-05-15 02:10:24.542205] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x8204fcc80 00:09:36.643 passed 00:09:36.643 Test: for_each_channel_and_thread_exit_race ...passed 00:09:36.643 Test: for_each_thread_and_thread_exit_race ...passed 00:09:36.643 00:09:36.643 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.643 suites 1 1 n/a 0 0 00:09:36.643 tests 20 20 20 0 0 00:09:36.643 asserts 409 409 409 0 n/a 00:09:36.643 00:09:36.643 Elapsed time = 0.000 seconds 00:09:36.643 00:09:36.643 real 0m0.014s 00:09:36.643 user 0m0.006s 00:09:36.643 sys 0m0.008s 00:09:36.643 02:10:24 unittest.unittest_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.643 02:10:24 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 ************************************ 00:09:36.643 END TEST unittest_thread 00:09:36.643 ************************************ 00:09:36.643 02:10:24 unittest -- unit/unittest.sh@280 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 ************************************ 00:09:36.643 START TEST unittest_iobuf 00:09:36.643 ************************************ 00:09:36.643 02:10:24 unittest.unittest_iobuf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:36.643 00:09:36.643 00:09:36.643 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.643 http://cunit.sourceforge.net/ 00:09:36.643 00:09:36.643 00:09:36.643 Suite: io_channel 00:09:36.643 Test: iobuf ...passed 00:09:36.643 Test: iobuf_cache 
...[2024-05-15 02:10:24.583666] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:36.643 [2024-05-15 02:10:24.583867] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:36.643 [2024-05-15 02:10:24.583899] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:09:36.643 [2024-05-15 02:10:24.583913] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:36.643 passed 00:09:36.643 00:09:36.643 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.643 suites 1 1 n/a 0 0 00:09:36.643 tests 2 2 2 0 0 00:09:36.643 asserts 107 107 107 0 n/a 00:09:36.643 00:09:36.643 Elapsed time = 0.000 seconds[2024-05-15 02:10:24.583928] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:36.643 [2024-05-15 02:10:24.583949] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:36.643 00:09:36.643 00:09:36.643 real 0m0.006s 00:09:36.643 user 0m0.005s 00:09:36.643 sys 0m0.000s 00:09:36.643 02:10:24 unittest.unittest_iobuf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.643 02:10:24 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 ************************************ 00:09:36.643 END TEST unittest_iobuf 00:09:36.643 ************************************ 00:09:36.643 02:10:24 unittest -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.643 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 ************************************ 00:09:36.643 START TEST unittest_util 00:09:36.644 ************************************ 00:09:36.644 02:10:24 unittest.unittest_util -- common/autotest_common.sh@1121 -- # unittest_util 00:09:36.644 02:10:24 unittest.unittest_util -- unit/unittest.sh@132 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:09:36.644 00:09:36.644 00:09:36.644 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.644 http://cunit.sourceforge.net/ 00:09:36.644 00:09:36.644 00:09:36.644 Suite: base64 00:09:36.644 Test: test_base64_get_encoded_strlen ...passed 00:09:36.644 Test: test_base64_get_decoded_len ...passed 00:09:36.644 Test: test_base64_encode ...passed 00:09:36.644 Test: test_base64_decode ...passed 00:09:36.644 Test: test_base64_urlsafe_encode ...passed 00:09:36.644 Test: test_base64_urlsafe_decode ...passed 00:09:36.644 00:09:36.644 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.644 suites 1 1 n/a 0 0 00:09:36.644 tests 6 6 6 0 0 00:09:36.644 asserts 112 112 112 0 n/a 00:09:36.644 00:09:36.644 Elapsed time = 0.000 seconds 
00:09:36.644 02:10:24 unittest.unittest_util -- unit/unittest.sh@133 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:09:36.644 00:09:36.644 00:09:36.644 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.644 http://cunit.sourceforge.net/ 00:09:36.644 00:09:36.644 00:09:36.644 Suite: bit_array 00:09:36.644 Test: test_1bit ...passed 00:09:36.644 Test: test_64bit ...passed 00:09:36.644 Test: test_find ...passed 00:09:36.644 Test: test_resize ...passed 00:09:36.644 Test: test_errors ...passed 00:09:36.644 Test: test_count ...passed 00:09:36.644 Test: test_mask_store_load ...passed 00:09:36.644 Test: test_mask_clear ...passed 00:09:36.644 00:09:36.644 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.644 suites 1 1 n/a 0 0 00:09:36.644 tests 8 8 8 0 0 00:09:36.644 asserts 5075 5075 5075 0 n/a 00:09:36.644 00:09:36.644 Elapsed time = 0.000 seconds 00:09:36.644 02:10:24 unittest.unittest_util -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:09:36.906 00:09:36.906 00:09:36.906 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.906 http://cunit.sourceforge.net/ 00:09:36.906 00:09:36.906 00:09:36.906 Suite: cpuset 00:09:36.906 Test: test_cpuset ...passed 00:09:36.906 Test: test_cpuset_parse ...[2024-05-15 02:10:24.642843] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:09:36.906 [2024-05-15 02:10:24.643038] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:09:36.906 [2024-05-15 02:10:24.643061] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:09:36.906 [2024-05-15 02:10:24.643081] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:09:36.906 [2024-05-15 02:10:24.643100] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:09:36.906 [2024-05-15 02:10:24.643118] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:09:36.906 [2024-05-15 02:10:24.643140] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:09:36.907 passed 00:09:36.907 Test: test_cpuset_fmt ...[2024-05-15 02:10:24.643164] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:09:36.907 passed 00:09:36.907 00:09:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.907 suites 1 1 n/a 0 0 00:09:36.907 tests 3 3 3 0 0 00:09:36.907 asserts 65 65 65 0 n/a 00:09:36.907 00:09:36.907 Elapsed time = 0.000 seconds 00:09:36.907 02:10:24 unittest.unittest_util -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:09:36.907 00:09:36.907 00:09:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.907 http://cunit.sourceforge.net/ 00:09:36.907 00:09:36.907 00:09:36.907 Suite: crc16 00:09:36.907 Test: test_crc16_t10dif ...passed 00:09:36.907 Test: test_crc16_t10dif_seed ...passed 00:09:36.907 Test: test_crc16_t10dif_copy ...passed 00:09:36.907 00:09:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.907 suites 1 1 n/a 0 0 00:09:36.907 
tests 3 3 3 0 0 00:09:36.907 asserts 5 5 5 0 n/a 00:09:36.907 00:09:36.907 Elapsed time = 0.000 seconds 00:09:36.907 02:10:24 unittest.unittest_util -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:09:36.907 00:09:36.907 00:09:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.907 http://cunit.sourceforge.net/ 00:09:36.907 00:09:36.907 00:09:36.907 Suite: crc32_ieee 00:09:36.907 Test: test_crc32_ieee ...passed 00:09:36.907 00:09:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.907 suites 1 1 n/a 0 0 00:09:36.907 tests 1 1 1 0 0 00:09:36.907 asserts 1 1 1 0 n/a 00:09:36.907 00:09:36.907 Elapsed time = 0.000 seconds 00:09:36.907 02:10:24 unittest.unittest_util -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:09:36.907 00:09:36.907 00:09:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.907 http://cunit.sourceforge.net/ 00:09:36.907 00:09:36.907 00:09:36.907 Suite: crc32c 00:09:36.907 Test: test_crc32c ...passed 00:09:36.907 Test: test_crc32c_nvme ...passed 00:09:36.907 00:09:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.907 suites 1 1 n/a 0 0 00:09:36.907 tests 2 2 2 0 0 00:09:36.907 asserts 16 16 16 0 n/a 00:09:36.907 00:09:36.907 Elapsed time = 0.000 seconds 00:09:36.907 02:10:24 unittest.unittest_util -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:09:36.907 00:09:36.907 00:09:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.907 http://cunit.sourceforge.net/ 00:09:36.907 00:09:36.907 00:09:36.907 Suite: crc64 00:09:36.907 Test: test_crc64_nvme ...passed 00:09:36.907 00:09:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.907 suites 1 1 n/a 0 0 00:09:36.907 tests 1 1 1 0 0 00:09:36.907 asserts 4 4 4 0 n/a 00:09:36.907 00:09:36.907 Elapsed time = 0.000 seconds 00:09:36.907 02:10:24 unittest.unittest_util -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:09:36.907 00:09:36.907 00:09:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.907 http://cunit.sourceforge.net/ 00:09:36.907 00:09:36.907 00:09:36.907 Suite: string 00:09:36.907 Test: test_parse_ip_addr ...passed 00:09:36.907 Test: test_str_chomp ...passed 00:09:36.907 Test: test_parse_capacity ...passed 00:09:36.907 Test: test_sprintf_append_realloc ...passed 00:09:36.907 Test: test_strtol ...passed 00:09:36.907 Test: test_strtoll ...passed 00:09:36.907 Test: test_strarray ...passed 00:09:36.907 Test: test_strcpy_replace ...passed 00:09:36.907 00:09:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.907 suites 1 1 n/a 0 0 00:09:36.907 tests 8 8 8 0 0 00:09:36.907 asserts 161 161 161 0 n/a 00:09:36.907 00:09:36.907 Elapsed time = 0.000 seconds 00:09:36.907 02:10:24 unittest.unittest_util -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:09:36.907 00:09:36.907 00:09:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.907 http://cunit.sourceforge.net/ 00:09:36.907 00:09:36.907 00:09:36.907 Suite: dif 00:09:36.907 Test: dif_generate_and_verify_test ...passed 00:09:36.907 Test: dif_disable_check_test ...[2024-05-15 02:10:24.669616] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:36.907 [2024-05-15 02:10:24.669792] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:36.907 [2024-05-15 02:10:24.669827] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:36.907 [2024-05-15 02:10:24.669860] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:36.907 [2024-05-15 02:10:24.669892] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:36.907 [2024-05-15 02:10:24.669924] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:36.907 passed 00:09:36.907 Test: dif_generate_and_verify_different_pi_formats_test ...passed 00:09:36.907 Test: dif_apptag_mask_test ...[2024-05-15 02:10:24.670032] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:36.907 [2024-05-15 02:10:24.670065] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:36.907 [2024-05-15 02:10:24.670097] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:36.907 [2024-05-15 02:10:24.670204] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:09:36.907 [2024-05-15 02:10:24.670238] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:09:36.907 [2024-05-15 02:10:24.670271] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:09:36.907 [2024-05-15 02:10:24.670303] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:09:36.907 [2024-05-15 02:10:24.670335] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:36.907 [2024-05-15 02:10:24.670366] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:36.907 [2024-05-15 02:10:24.670398] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:36.907 [2024-05-15 02:10:24.670429] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:36.907 [2024-05-15 02:10:24.670461] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:36.907 [2024-05-15 02:10:24.670492] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:36.907 [2024-05-15 02:10:24.670525] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:36.907 [2024-05-15 02:10:24.670582] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:36.907 [2024-05-15 02:10:24.670620] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:36.907 passed 00:09:36.907 Test: dif_sec_512_md_0_error_test ...passed 00:09:36.907 Test: dif_sec_4096_md_0_error_test ...passed 00:09:36.907 Test: dif_sec_4100_md_128_error_test ...[2024-05-15 02:10:24.670643] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:36.907 [2024-05-15 02:10:24.670652] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:36.907 [2024-05-15 02:10:24.670659] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:36.907 [2024-05-15 02:10:24.670668] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:36.907 passed 00:09:36.907 Test: dif_guard_seed_test ...passed 00:09:36.907 Test: dif_guard_value_test ...passed 00:09:36.907 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...[2024-05-15 02:10:24.670675] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:36.907 passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:36.907 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:36.907 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:36.908 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 02:10:24.675670] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd6c, Actual=fd4c 00:09:36.908 [2024-05-15 02:10:24.675943] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe01, Actual=fe21 00:09:36.908 [2024-05-15 02:10:24.676196] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 
02:10:24.676448] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.676699] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.908 [2024-05-15 02:10:24.676951] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.908 [2024-05-15 02:10:24.677202] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=65a5 00:09:36.908 [2024-05-15 02:10:24.677370] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=de2a 00:09:36.908 [2024-05-15 02:10:24.677548] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1a9753ed, Actual=1ab753ed 00:09:36.908 [2024-05-15 02:10:24.677798] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38774660, Actual=38574660 00:09:36.908 [2024-05-15 02:10:24.678050] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.678299] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.678551] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2000000000005a 00:09:36.908 [2024-05-15 02:10:24.678802] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2000000000005a 00:09:36.908 [2024-05-15 02:10:24.679054] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=16459569 00:09:36.908 [2024-05-15 02:10:24.679221] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=762915a1 00:09:36.908 [2024-05-15 02:10:24.679389] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.908 [2024-05-15 02:10:24.679639] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88210a2d4837a266, Actual=88010a2d4837a266 00:09:36.908 [2024-05-15 02:10:24.679888] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.680138] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.680387] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.908 [2024-05-15 02:10:24.680638] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.908 [2024-05-15 02:10:24.680887] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.908 [2024-05-15 02:10:24.681054] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=cc38abf3d65797ee 00:09:36.908 passed 00:09:36.908 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-05-15 02:10:24.681115] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:09:36.908 [2024-05-15 02:10:24.681149] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:09:36.908 [2024-05-15 02:10:24.681183] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.681217] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.681258] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 [2024-05-15 02:10:24.681292] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 [2024-05-15 02:10:24.681326] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=65a5 00:09:36.908 [2024-05-15 02:10:24.681358] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=de2a 00:09:36.908 [2024-05-15 02:10:24.681391] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:09:36.908 [2024-05-15 02:10:24.681433] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:09:36.908 [2024-05-15 02:10:24.681468] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.681501] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.681535] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.908 [2024-05-15 02:10:24.681569] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.908 [2024-05-15 02:10:24.681603] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=16459569 00:09:36.908 [2024-05-15 02:10:24.681635] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=762915a1 00:09:36.908 [2024-05-15 02:10:24.681668] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a556a7728ecc20d3, 
Actual=a576a7728ecc20d3 00:09:36.908 [2024-05-15 02:10:24.681702] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88210a2d4837a266, Actual=88010a2d4837a266 00:09:36.908 [2024-05-15 02:10:24.681736] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.681769] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.681803] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 passed 00:09:36.908 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-05-15 02:10:24.681838] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 [2024-05-15 02:10:24.681872] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.908 [2024-05-15 02:10:24.681905] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc38abf3d65797ee 00:09:36.908 [2024-05-15 02:10:24.681941] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:09:36.908 [2024-05-15 02:10:24.681976] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:09:36.908 [2024-05-15 02:10:24.682010] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.682044] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.682079] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 [2024-05-15 02:10:24.682113] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 [2024-05-15 02:10:24.682146] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=65a5 00:09:36.908 [2024-05-15 02:10:24.682179] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=de2a 00:09:36.908 [2024-05-15 02:10:24.682213] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:09:36.908 [2024-05-15 02:10:24.682246] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:09:36.908 [2024-05-15 02:10:24.682280] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.682314] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.682347] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.908 [2024-05-15 02:10:24.682381] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.908 [2024-05-15 02:10:24.682414] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=16459569 00:09:36.908 passed 00:09:36.908 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-05-15 02:10:24.682447] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=762915a1 00:09:36.908 [2024-05-15 02:10:24.682480] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.908 [2024-05-15 02:10:24.682514] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88210a2d4837a266, Actual=88010a2d4837a266 00:09:36.908 [2024-05-15 02:10:24.682548] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.682582] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.682616] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 [2024-05-15 02:10:24.682651] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.908 [2024-05-15 02:10:24.682685] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.908 [2024-05-15 02:10:24.682717] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc38abf3d65797ee 00:09:36.908 [2024-05-15 02:10:24.682753] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:09:36.908 [2024-05-15 02:10:24.682787] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:09:36.908 [2024-05-15 02:10:24.682821] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.908 [2024-05-15 02:10:24.682855] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.682889] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.682924] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.682957] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=65a5 00:09:36.909 [2024-05-15 02:10:24.682990] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=de2a 00:09:36.909 [2024-05-15 02:10:24.683023] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:09:36.909 [2024-05-15 02:10:24.683057] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:09:36.909 [2024-05-15 02:10:24.683091] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.683124] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.683158] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.909 [2024-05-15 02:10:24.683192] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.909 [2024-05-15 02:10:24.683226] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=16459569 00:09:36.909 [2024-05-15 02:10:24.683258] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=762915a1 00:09:36.909 [2024-05-15 02:10:24.683291] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.909 [2024-05-15 02:10:24.683326] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88210a2d4837a266, Actual=88010a2d4837a266 00:09:36.909 [2024-05-15 02:10:24.683360] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 passed 00:09:36.909 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-05-15 02:10:24.683394] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.683428] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.683469] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.683504] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.909 [2024-05-15 02:10:24.683537] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc38abf3d65797ee 00:09:36.909 [2024-05-15 02:10:24.683572] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=fd6c, Actual=fd4c 00:09:36.909 [2024-05-15 02:10:24.683606] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:09:36.909 [2024-05-15 02:10:24.683640] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.683674] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.683708] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.683743] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.683777] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=65a5 00:09:36.909 passed 00:09:36.909 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-05-15 02:10:24.683810] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=de2a 00:09:36.909 [2024-05-15 02:10:24.683845] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:09:36.909 [2024-05-15 02:10:24.683879] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:09:36.909 [2024-05-15 02:10:24.683913] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.683947] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.683981] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.909 [2024-05-15 02:10:24.684015] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.909 [2024-05-15 02:10:24.684049] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=16459569 00:09:36.909 [2024-05-15 02:10:24.684081] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=762915a1 00:09:36.909 [2024-05-15 02:10:24.684114] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.909 [2024-05-15 02:10:24.684148] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88210a2d4837a266, Actual=88010a2d4837a266 00:09:36.909 [2024-05-15 02:10:24.684207] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.684243] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.684277] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.684311] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.684345] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.909 passed 00:09:36.909 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-05-15 02:10:24.684378] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc38abf3d65797ee 00:09:36.909 [2024-05-15 02:10:24.684414] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:09:36.909 [2024-05-15 02:10:24.684448] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:09:36.909 [2024-05-15 02:10:24.684482] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.684515] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.684549] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.684584] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.684618] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=65a5 00:09:36.909 passed 00:09:36.909 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-05-15 02:10:24.684650] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=de2a 00:09:36.909 [2024-05-15 02:10:24.684685] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:09:36.909 [2024-05-15 02:10:24.684719] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:09:36.909 [2024-05-15 02:10:24.684753] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.684786] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.684820] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.909 [2024-05-15 02:10:24.684854] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=20000000000058 00:09:36.909 [2024-05-15 02:10:24.684888] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=16459569 00:09:36.909 [2024-05-15 02:10:24.684921] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=762915a1 00:09:36.909 [2024-05-15 02:10:24.684954] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.909 [2024-05-15 02:10:24.684988] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88210a2d4837a266, Actual=88010a2d4837a266 00:09:36.909 [2024-05-15 02:10:24.685022] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.685056] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.685089] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 [2024-05-15 02:10:24.685123] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.909 passed 00:09:36.909 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-05-15 02:10:24.685157] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.909 [2024-05-15 02:10:24.685190] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc38abf3d65797ee 00:09:36.909 passed 00:09:36.909 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:36.909 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:36.909 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:36.909 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:36.909 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:36.909 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:36.909 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:36.909 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:36.909 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 02:10:24.689749] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd6c, Actual=fd4c 00:09:36.909 [2024-05-15 02:10:24.689891] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=81cb, Actual=81eb 00:09:36.909 [2024-05-15 02:10:24.690026] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.909 [2024-05-15 02:10:24.690162] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.690303] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: 
*ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.690438] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.690573] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=65a5 00:09:36.910 [2024-05-15 02:10:24.690707] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=f906 00:09:36.910 [2024-05-15 02:10:24.690841] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1a9753ed, Actual=1ab753ed 00:09:36.910 [2024-05-15 02:10:24.690976] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=22e55fe6, Actual=22c55fe6 00:09:36.910 [2024-05-15 02:10:24.691110] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.691244] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.691386] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2000000000005a 00:09:36.910 [2024-05-15 02:10:24.691521] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2000000000005a 00:09:36.910 [2024-05-15 02:10:24.691656] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=16459569 00:09:36.910 [2024-05-15 02:10:24.691790] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=46ee32ef 00:09:36.910 [2024-05-15 02:10:24.691925] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.910 [2024-05-15 02:10:24.692061] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=c27b177a3ba35ffd, Actual=c25b177a3ba35ffd 00:09:36.910 [2024-05-15 02:10:24.692196] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.692331] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.692466] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.692602] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.692750] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.910 passed 00:09:36.910 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-15 02:10:24.692886] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=9cc21f25f7860eb0 00:09:36.910 [2024-05-15 02:10:24.692926] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:09:36.910 [2024-05-15 02:10:24.692961] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:09:36.910 [2024-05-15 02:10:24.692996] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.693031] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.693065] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.910 [2024-05-15 02:10:24.693101] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.910 [2024-05-15 02:10:24.693136] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=65a5 00:09:36.910 [2024-05-15 02:10:24.693171] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=d1d 00:09:36.910 [2024-05-15 02:10:24.693207] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:09:36.910 [2024-05-15 02:10:24.693242] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c0d37e64, Actual=c0f37e64 00:09:36.910 [2024-05-15 02:10:24.693277] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.693312] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.693346] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.910 [2024-05-15 02:10:24.693381] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.910 [2024-05-15 02:10:24.693424] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=16459569 00:09:36.910 [2024-05-15 02:10:24.693459] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a4d8136d 00:09:36.910 [2024-05-15 02:10:24.693495] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.910 [2024-05-15 02:10:24.693530] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=22e6839a049f5467, Actual=22c6839a049f5467 00:09:36.910 [2024-05-15 02:10:24.693565] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.910 passed 00:09:36.910 Test: dix_sec_512_md_0_error ...[2024-05-15 02:10:24.693600] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.693635] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.910 [2024-05-15 02:10:24.693670] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.910 [2024-05-15 02:10:24.693705] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.910 [2024-05-15 02:10:24.693741] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7c5f8bc5c8ba052a 00:09:36.910 [2024-05-15 02:10:24.693752] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:36.910 passed 00:09:36.910 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:09:36.910 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:36.910 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:36.910 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:36.910 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:36.910 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:36.910 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:36.910 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:36.910 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:36.910 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 02:10:24.697690] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd6c, Actual=fd4c 00:09:36.910 [2024-05-15 02:10:24.697811] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=81cb, Actual=81eb 00:09:36.910 [2024-05-15 02:10:24.697927] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.698043] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.698159] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.698274] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.698390] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=65a5 00:09:36.910 [2024-05-15 02:10:24.698505] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=f906 00:09:36.910 [2024-05-15 02:10:24.698621] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=90, Expected=1a9753ed, Actual=1ab753ed 00:09:36.910 [2024-05-15 02:10:24.698744] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=22e55fe6, Actual=22c55fe6 00:09:36.910 [2024-05-15 02:10:24.698859] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.698974] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.699089] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2000000000005a 00:09:36.910 [2024-05-15 02:10:24.699209] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2000000000005a 00:09:36.910 [2024-05-15 02:10:24.699324] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=16459569 00:09:36.910 [2024-05-15 02:10:24.699439] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=46ee32ef 00:09:36.910 [2024-05-15 02:10:24.699553] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.910 [2024-05-15 02:10:24.699670] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=c27b177a3ba35ffd, Actual=c25b177a3ba35ffd 00:09:36.910 [2024-05-15 02:10:24.699787] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.699902] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:09:36.910 [2024-05-15 02:10:24.700018] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.700133] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:09:36.910 [2024-05-15 02:10:24.700249] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.910 passed 00:09:36.910 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-15 02:10:24.700365] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=9cc21f25f7860eb0 00:09:36.910 [2024-05-15 02:10:24.700398] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:09:36.910 [2024-05-15 02:10:24.700429] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:09:36.910 [2024-05-15 02:10:24.700459] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.911 [2024-05-15 02:10:24.700490] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.911 [2024-05-15 02:10:24.700520] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.911 [2024-05-15 02:10:24.700551] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.911 [2024-05-15 02:10:24.700581] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=65a5 00:09:36.911 [2024-05-15 02:10:24.700612] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=d1d 00:09:36.911 [2024-05-15 02:10:24.700643] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:09:36.911 [2024-05-15 02:10:24.700673] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c0d37e64, Actual=c0f37e64 00:09:36.911 [2024-05-15 02:10:24.700703] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.911 [2024-05-15 02:10:24.700732] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.911 [2024-05-15 02:10:24.700762] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.911 [2024-05-15 02:10:24.700791] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000000058 00:09:36.911 [2024-05-15 02:10:24.700821] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=16459569 00:09:36.911 [2024-05-15 02:10:24.700850] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a4d8136d 00:09:36.911 [2024-05-15 02:10:24.700880] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a556a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:36.911 [2024-05-15 02:10:24.700910] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=22e6839a049f5467, Actual=22c6839a049f5467 00:09:36.911 passed 00:09:36.911 Test: set_md_interleave_iovs_test ...[2024-05-15 02:10:24.700941] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.911 [2024-05-15 02:10:24.700971] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:09:36.911 [2024-05-15 02:10:24.701001] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.911 [2024-05-15 02:10:24.701031] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:09:36.911 [2024-05-15 02:10:24.701061] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=91c943b0bb9d5540 00:09:36.911 [2024-05-15 02:10:24.701092] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7c5f8bc5c8ba052a 00:09:36.911 passed 00:09:36.911 Test: set_md_interleave_iovs_split_test ...passed 00:09:36.911 Test: dif_generate_stream_pi_16_test ...passed 00:09:36.911 Test: dif_generate_stream_test ...passed 00:09:36.911 Test: set_md_interleave_iovs_alignment_test ...[2024-05-15 02:10:24.701732] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:09:36.911 passed 00:09:36.911 Test: dif_generate_split_test ...passed 00:09:36.911 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:09:36.911 Test: dif_verify_split_test ...passed 00:09:36.911 Test: dif_verify_stream_multi_segments_test ...passed 00:09:36.911 Test: update_crc32c_pi_16_test ...passed 00:09:36.911 Test: update_crc32c_test ...passed 00:09:36.911 Test: dif_update_crc32c_split_test ...passed 00:09:36.911 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:09:36.911 Test: get_range_with_md_test ...passed 00:09:36.911 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:09:36.911 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:09:36.911 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:36.911 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:09:36.911 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:09:36.911 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:36.911 Test: dif_generate_and_verify_unmap_test ...passed 00:09:36.911 00:09:36.911 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.911 suites 1 1 n/a 0 0 00:09:36.911 tests 79 79 79 0 0 00:09:36.911 asserts 3584 3584 3584 0 n/a 00:09:36.911 00:09:36.911 Elapsed time = 0.031 seconds 00:09:36.911 02:10:24 unittest.unittest_util -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:09:36.911 00:09:36.911 00:09:36.911 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.911 http://cunit.sourceforge.net/ 00:09:36.911 00:09:36.911 00:09:36.911 Suite: iov 00:09:36.911 Test: test_single_iov ...passed 00:09:36.911 Test: test_simple_iov ...passed 00:09:36.911 Test: test_complex_iov ...passed 00:09:36.911 Test: test_iovs_to_buf ...passed 00:09:36.911 Test: test_buf_to_iovs ...passed 00:09:36.911 Test: test_memset ...passed 00:09:36.911 Test: test_iov_one ...passed 00:09:36.911 Test: test_iov_xfer ...passed 00:09:36.911 00:09:36.911 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.911 suites 1 1 n/a 0 0 00:09:36.911 tests 8 8 8 0 0 00:09:36.911 asserts 156 156 156 0 n/a 00:09:36.911 00:09:36.911 Elapsed time = 0.000 seconds 00:09:36.911 02:10:24 unittest.unittest_util -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:09:36.911 00:09:36.911 00:09:36.911 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.911 http://cunit.sourceforge.net/ 00:09:36.911 00:09:36.911 00:09:36.911 Suite: math 00:09:36.911 Test: test_serial_number_arithmetic ...passed 00:09:36.911 Suite: erase 00:09:36.911 Test: test_memset_s ...passed 00:09:36.911 00:09:36.911 Run Summary: Type 
Total Ran Passed Failed Inactive 00:09:36.911 suites 2 2 n/a 0 0 00:09:36.911 tests 2 2 2 0 0 00:09:36.911 asserts 18 18 18 0 n/a 00:09:36.911 00:09:36.911 Elapsed time = 0.000 seconds 00:09:36.911 02:10:24 unittest.unittest_util -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:09:36.911 00:09:36.911 00:09:36.911 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.911 http://cunit.sourceforge.net/ 00:09:36.911 00:09:36.911 00:09:36.911 Suite: pipe 00:09:36.911 Test: test_create_destroy ...passed 00:09:36.911 Test: test_write_get_buffer ...passed 00:09:36.911 Test: test_write_advance ...passed 00:09:36.911 Test: test_read_get_buffer ...passed 00:09:36.911 Test: test_read_advance ...passed 00:09:36.911 Test: test_data ...passed 00:09:36.911 00:09:36.911 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.911 suites 1 1 n/a 0 0 00:09:36.911 tests 6 6 6 0 0 00:09:36.911 asserts 251 251 251 0 n/a 00:09:36.911 00:09:36.911 Elapsed time = 0.000 seconds 00:09:36.911 02:10:24 unittest.unittest_util -- unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:09:36.911 00:09:36.911 00:09:36.911 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.911 http://cunit.sourceforge.net/ 00:09:36.911 00:09:36.911 00:09:36.911 Suite: xor 00:09:36.911 Test: test_xor_gen ...passed 00:09:36.911 00:09:36.911 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.911 suites 1 1 n/a 0 0 00:09:36.911 tests 1 1 1 0 0 00:09:36.911 asserts 17 17 17 0 n/a 00:09:36.911 00:09:36.911 Elapsed time = 0.000 seconds 00:09:36.911 00:09:36.911 real 0m0.111s 00:09:36.911 user 0m0.049s 00:09:36.911 sys 0m0.059s 00:09:36.911 02:10:24 unittest.unittest_util -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.911 ************************************ 00:09:36.911 END TEST unittest_util 00:09:36.911 ************************************ 00:09:36.912 02:10:24 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 02:10:24 unittest -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:36.912 02:10:24 unittest -- unit/unittest.sh@285 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 ************************************ 00:09:36.912 START TEST unittest_dma 00:09:36.912 ************************************ 00:09:36.912 02:10:24 unittest.unittest_dma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:36.912 00:09:36.912 00:09:36.912 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.912 http://cunit.sourceforge.net/ 00:09:36.912 00:09:36.912 00:09:36.912 Suite: dma_suite 00:09:36.912 Test: test_dma ...passed 00:09:36.912 00:09:36.912 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.912 suites 1 1 n/a 0 0 00:09:36.912 tests 1 1 1 0 0 00:09:36.912 asserts 54 54 54 0 n/a 00:09:36.912 00:09:36.912 Elapsed time = 0.000 seconds 00:09:36.912 [2024-05-15 02:10:24.781085] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:09:36.912 00:09:36.912 real 0m0.007s 00:09:36.912 user 0m0.007s 
00:09:36.912 sys 0m0.001s 00:09:36.912 02:10:24 unittest.unittest_dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.912 02:10:24 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 ************************************ 00:09:36.912 END TEST unittest_dma 00:09:36.912 ************************************ 00:09:36.912 02:10:24 unittest -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 ************************************ 00:09:36.912 START TEST unittest_init 00:09:36.912 ************************************ 00:09:36.912 02:10:24 unittest.unittest_init -- common/autotest_common.sh@1121 -- # unittest_init 00:09:36.912 02:10:24 unittest.unittest_init -- unit/unittest.sh@148 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:09:36.912 00:09:36.912 00:09:36.912 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.912 http://cunit.sourceforge.net/ 00:09:36.912 00:09:36.912 00:09:36.912 Suite: subsystem_suite 00:09:36.912 Test: subsystem_sort_test_depends_on_single ...passed 00:09:36.912 Test: subsystem_sort_test_depends_on_multiple ...passed 00:09:36.912 Test: subsystem_sort_test_missing_dependency ...passed 00:09:36.912 00:09:36.912 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.912 suites 1 1 n/a 0 0 00:09:36.912 tests 3 3 3 0 0 00:09:36.912 asserts 20 20 20 0 n/a 00:09:36.912 00:09:36.912 Elapsed time = 0.000 seconds 00:09:36.912 [2024-05-15 02:10:24.826709] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:09:36.912 [2024-05-15 02:10:24.826884] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:09:36.912 00:09:36.912 real 0m0.006s 00:09:36.912 user 0m0.006s 00:09:36.912 sys 0m0.000s 00:09:36.912 02:10:24 unittest.unittest_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.912 ************************************ 00:09:36.912 END TEST unittest_init 00:09:36.912 02:10:24 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 ************************************ 00:09:36.912 02:10:24 unittest -- unit/unittest.sh@288 -- # run_test unittest_keyring /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 ************************************ 00:09:36.912 START TEST unittest_keyring 00:09:36.912 ************************************ 00:09:36.912 02:10:24 unittest.unittest_keyring -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:36.912 00:09:36.912 00:09:36.912 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.912 http://cunit.sourceforge.net/ 00:09:36.912 00:09:36.912 00:09:36.912 Suite: keyring 00:09:36.912 Test: test_keyring_add_remove ...[2024-05-15 02:10:24.869319] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:09:36.912 
[2024-05-15 02:10:24.869638] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:09:36.912 [2024-05-15 02:10:24.869675] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:09:36.912 passed 00:09:36.912 Test: test_keyring_get_put ...passed 00:09:36.912 00:09:36.912 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.912 suites 1 1 n/a 0 0 00:09:36.912 tests 2 2 2 0 0 00:09:36.912 asserts 44 44 44 0 n/a 00:09:36.912 00:09:36.912 Elapsed time = 0.000 seconds 00:09:36.912 00:09:36.912 real 0m0.007s 00:09:36.912 user 0m0.006s 00:09:36.912 sys 0m0.006s 00:09:36.912 02:10:24 unittest.unittest_keyring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.912 02:10:24 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 ************************************ 00:09:36.912 END TEST unittest_keyring 00:09:36.912 ************************************ 00:09:36.912 02:10:24 unittest -- unit/unittest.sh@290 -- # '[' no = yes ']' 00:09:36.912 00:09:36.912 00:09:36.912 ===================== 00:09:36.912 All unit tests passed 00:09:36.912 ===================== 00:09:36.912 WARN: lcov not installed or SPDK built without coverage! 00:09:36.912 02:10:24 unittest -- unit/unittest.sh@303 -- # set +x 00:09:36.912 WARN: neither valgrind nor ASAN is enabled! 00:09:36.912 00:09:36.912 00:09:36.912 00:09:36.912 real 0m17.447s 00:09:36.912 user 0m14.621s 00:09:36.912 sys 0m1.561s 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.912 02:10:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:36.912 ************************************ 00:09:36.912 END TEST unittest 00:09:36.912 ************************************ 00:09:37.171 02:10:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:37.171 02:10:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:37.171 02:10:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:37.171 02:10:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:37.171 02:10:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:37.171 02:10:24 -- common/autotest_common.sh@10 -- # set +x 00:09:37.171 02:10:24 -- spdk/autotest.sh@164 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:37.171 02:10:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:37.171 02:10:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.171 02:10:24 -- common/autotest_common.sh@10 -- # set +x 00:09:37.171 ************************************ 00:09:37.171 START TEST env 00:09:37.171 ************************************ 00:09:37.171 02:10:24 env -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:37.738 * Looking for test storage... 
00:09:37.738 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:09:37.738 02:10:25 env -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:37.738 02:10:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:37.738 02:10:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.738 02:10:25 env -- common/autotest_common.sh@10 -- # set +x 00:09:37.738 ************************************ 00:09:37.738 START TEST env_memory 00:09:37.738 ************************************ 00:09:37.738 02:10:25 env.env_memory -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:37.738 00:09:37.738 00:09:37.738 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.738 http://cunit.sourceforge.net/ 00:09:37.738 00:09:37.738 00:09:37.738 Suite: memory 00:09:37.738 Test: alloc and free memory map ...[2024-05-15 02:10:25.556433] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:37.738 passed 00:09:37.738 Test: mem map translation ...[2024-05-15 02:10:25.563774] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:37.738 [2024-05-15 02:10:25.563825] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:37.738 [2024-05-15 02:10:25.563843] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:37.738 [2024-05-15 02:10:25.563853] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:37.738 passed 00:09:37.738 Test: mem map registration ...[2024-05-15 02:10:25.572835] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:37.738 [2024-05-15 02:10:25.572883] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:37.738 passed 00:09:37.738 Test: mem map adjacent registrations ...passed 00:09:37.738 00:09:37.738 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.738 suites 1 1 n/a 0 0 00:09:37.738 tests 4 4 4 0 0 00:09:37.738 asserts 152 152 152 0 n/a 00:09:37.738 00:09:37.738 Elapsed time = 0.039 seconds 00:09:37.738 00:09:37.738 real 0m0.043s 00:09:37.738 user 0m0.034s 00:09:37.738 sys 0m0.010s 00:09:37.738 02:10:25 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:37.738 02:10:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:37.738 ************************************ 00:09:37.738 END TEST env_memory 00:09:37.738 ************************************ 00:09:37.738 02:10:25 env -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:37.738 02:10:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:37.738 02:10:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.738 02:10:25 env -- common/autotest_common.sh@10 -- # set +x 00:09:37.738 ************************************ 00:09:37.738 START TEST env_vtophys 00:09:37.738 
************************************ 00:09:37.738 02:10:25 env.env_vtophys -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:37.738 EAL: lib.eal log level changed from notice to debug 00:09:37.738 EAL: Sysctl reports 10 cpus 00:09:37.738 EAL: Detected lcore 0 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 1 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 2 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 3 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 4 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 5 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 6 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 7 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 8 as core 0 on socket 0 00:09:37.738 EAL: Detected lcore 9 as core 0 on socket 0 00:09:37.738 EAL: Maximum logical cores by configuration: 128 00:09:37.738 EAL: Detected CPU lcores: 10 00:09:37.738 EAL: Detected NUMA nodes: 1 00:09:37.738 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:09:37.738 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:37.738 EAL: Checking presence of .so 'librte_eal.so' 00:09:37.738 EAL: Detected static linkage of DPDK 00:09:37.738 EAL: No shared files mode enabled, IPC will be disabled 00:09:37.738 EAL: PCI scan found 10 devices 00:09:37.738 EAL: Specific IOVA mode is not requested, autodetecting 00:09:37.738 EAL: Selecting IOVA mode according to bus requests 00:09:37.738 EAL: Bus pci wants IOVA as 'PA' 00:09:37.738 EAL: Selected IOVA mode 'PA' 00:09:37.738 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:09:37.738 EAL: Ask a virtual area of 0x2e000 bytes 00:09:37.738 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000f5a000) not respected! 00:09:37.738 EAL: This may cause issues with mapping memory into secondary processes 00:09:37.738 EAL: Virtual area found at 0x1000f5a000 (size = 0x2e000) 00:09:37.738 EAL: Setting up physically contiguous memory... 00:09:37.738 EAL: Ask a virtual area of 0x1000 bytes 00:09:37.738 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1001139000) not respected! 00:09:37.738 EAL: This may cause issues with mapping memory into secondary processes 00:09:37.738 EAL: Virtual area found at 0x1001139000 (size = 0x1000) 00:09:37.738 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:09:37.738 EAL: Ask a virtual area of 0xf0000000 bytes 00:09:37.738 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:09:37.738 EAL: This may cause issues with mapping memory into secondary processes 00:09:37.738 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:09:37.738 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:09:37.738 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x250000000, len 268435456 00:09:37.997 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x260000000, len 268435456 00:09:37.997 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x270000000, len 268435456 00:09:37.997 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x280000000, len 268435456 00:09:37.997 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x290000000, len 268435456 00:09:37.997 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x2a0000000, len 268435456 00:09:38.256 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x2b0000000, len 268435456 00:09:38.256 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x2c0000000, len 268435456 00:09:38.256 EAL: No shared files mode enabled, IPC is disabled 00:09:38.256 EAL: Added 2048M to heap on socket 0 00:09:38.256 EAL: TSC is not safe to use in SMP mode 00:09:38.256 EAL: TSC is not invariant 00:09:38.256 EAL: TSC frequency is ~2100005 KHz 00:09:38.256 EAL: Main lcore 0 is ready (tid=82bc11000;cpuset=[0]) 00:09:38.256 EAL: PCI scan found 10 devices 00:09:38.256 EAL: Registering mem event callbacks not supported 00:09:38.256 00:09:38.256 00:09:38.256 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.256 http://cunit.sourceforge.net/ 00:09:38.256 00:09:38.256 00:09:38.256 Suite: components_suite 00:09:38.256 Test: vtophys_malloc_test ...passed 00:09:38.515 Test: vtophys_spdk_malloc_test ...passed 00:09:38.515 00:09:38.515 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.515 suites 1 1 n/a 0 0 00:09:38.515 tests 2 2 2 0 0 00:09:38.515 asserts 497 497 497 0 n/a 00:09:38.515 00:09:38.515 Elapsed time = 0.312 seconds 00:09:38.515 00:09:38.515 real 0m0.853s 00:09:38.515 user 0m0.312s 00:09:38.515 sys 0m0.539s 00:09:38.515 02:10:26 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.515 02:10:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:38.515 ************************************ 00:09:38.515 END TEST env_vtophys 00:09:38.515 ************************************ 00:09:38.515 02:10:26 env -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:38.515 02:10:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:38.515 02:10:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.515 02:10:26 env -- common/autotest_common.sh@10 -- # set +x 00:09:38.785 ************************************ 00:09:38.785 START TEST env_pci 00:09:38.785 ************************************ 00:09:38.785 02:10:26 env.env_pci -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:38.785 00:09:38.785 00:09:38.785 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.785 http://cunit.sourceforge.net/ 00:09:38.785 00:09:38.785 00:09:38.785 Suite: pci 00:09:38.785 Test: pci_hook ...passed 00:09:38.785 00:09:38.785 EAL: Cannot find device (10000:00:01.0) 00:09:38.785 EAL: Failed to attach device on primary process 00:09:38.785 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.785 suites 1 1 n/a 0 0 00:09:38.785 tests 1 1 1 0 0 00:09:38.785 asserts 25 25 25 0 n/a 00:09:38.785 00:09:38.785 Elapsed time = 0.000 seconds 00:09:38.785 00:09:38.785 
real 0m0.008s 00:09:38.785 user 0m0.008s 00:09:38.785 sys 0m0.005s 00:09:38.785 02:10:26 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.785 02:10:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:38.785 ************************************ 00:09:38.785 END TEST env_pci 00:09:38.785 ************************************ 00:09:38.785 02:10:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:38.785 02:10:26 env -- env/env.sh@15 -- # uname 00:09:38.785 02:10:26 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:09:38.785 02:10:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:09:38.785 02:10:26 env -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:38.785 02:10:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.785 02:10:26 env -- common/autotest_common.sh@10 -- # set +x 00:09:38.785 ************************************ 00:09:38.785 START TEST env_dpdk_post_init 00:09:38.785 ************************************ 00:09:38.785 02:10:26 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:09:38.785 EAL: Sysctl reports 10 cpus 00:09:38.785 EAL: Detected CPU lcores: 10 00:09:38.785 EAL: Detected NUMA nodes: 1 00:09:38.785 EAL: Detected static linkage of DPDK 00:09:38.785 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:38.785 EAL: Selected IOVA mode 'PA' 00:09:38.785 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:09:38.785 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x250000000, len 268435456 00:09:38.785 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x260000000, len 268435456 00:09:38.785 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x270000000, len 268435456 00:09:39.044 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x280000000, len 268435456 00:09:39.044 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x290000000, len 268435456 00:09:39.044 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x2a0000000, len 268435456 00:09:39.044 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x2b0000000, len 268435456 00:09:39.304 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x2c0000000, len 268435456 00:09:39.304 EAL: TSC is not safe to use in SMP mode 00:09:39.304 EAL: TSC is not invariant 00:09:39.304 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:39.304 [2024-05-15 02:10:27.085615] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:09:39.304 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:39.304 Starting DPDK initialization... 00:09:39.304 Starting SPDK post initialization... 00:09:39.304 SPDK NVMe probe 00:09:39.304 Attaching to 0000:00:10.0 00:09:39.304 Attached to 0000:00:10.0 00:09:39.304 Cleaning up... 
00:09:39.304 00:09:39.304 real 0m0.560s 00:09:39.304 user 0m0.009s 00:09:39.304 sys 0m0.547s 00:09:39.304 02:10:27 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:39.304 02:10:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:39.304 ************************************ 00:09:39.304 END TEST env_dpdk_post_init 00:09:39.304 ************************************ 00:09:39.304 02:10:27 env -- env/env.sh@26 -- # uname 00:09:39.304 02:10:27 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:09:39.304 00:09:39.304 real 0m2.235s 00:09:39.304 user 0m0.570s 00:09:39.304 sys 0m1.729s 00:09:39.304 02:10:27 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:39.304 02:10:27 env -- common/autotest_common.sh@10 -- # set +x 00:09:39.304 ************************************ 00:09:39.304 END TEST env 00:09:39.304 ************************************ 00:09:39.304 02:10:27 -- spdk/autotest.sh@165 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:39.304 02:10:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:39.304 02:10:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:39.304 02:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:39.304 ************************************ 00:09:39.304 START TEST rpc 00:09:39.304 ************************************ 00:09:39.304 02:10:27 rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:39.562 * Looking for test storage... 00:09:39.562 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:09:39.562 02:10:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45837 00:09:39.562 02:10:27 rpc -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:39.562 02:10:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.562 02:10:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45837 00:09:39.562 02:10:27 rpc -- common/autotest_common.sh@827 -- # '[' -z 45837 ']' 00:09:39.562 02:10:27 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.562 02:10:27 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:39.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.562 02:10:27 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.562 02:10:27 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:39.562 02:10:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.562 [2024-05-15 02:10:27.403704] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:09:39.562 [2024-05-15 02:10:27.403878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:40.129 EAL: TSC is not safe to use in SMP mode 00:09:40.129 EAL: TSC is not invariant 00:09:40.129 [2024-05-15 02:10:27.891595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.129 [2024-05-15 02:10:27.972278] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:40.129 [2024-05-15 02:10:27.974384] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:40.129 [2024-05-15 02:10:27.974415] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45837' to capture a snapshot of events at runtime. 
00:09:40.129 [2024-05-15 02:10:27.974437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.696 02:10:28 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:40.696 02:10:28 rpc -- common/autotest_common.sh@860 -- # return 0 00:09:40.696 02:10:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:09:40.696 02:10:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:09:40.696 02:10:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:40.696 02:10:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:40.696 02:10:28 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:40.696 02:10:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:40.696 02:10:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 ************************************ 00:09:40.696 START TEST rpc_integrity 00:09:40.696 ************************************ 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.696 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:40.696 { 00:09:40.696 "name": "Malloc0", 00:09:40.696 "aliases": [ 00:09:40.696 "4cc30aa7-1260-11ef-99fd-bfc7c66e2865" 00:09:40.696 ], 00:09:40.696 "product_name": "Malloc disk", 00:09:40.696 "block_size": 512, 00:09:40.696 "num_blocks": 16384, 00:09:40.696 "uuid": "4cc30aa7-1260-11ef-99fd-bfc7c66e2865", 00:09:40.696 "assigned_rate_limits": { 00:09:40.696 "rw_ios_per_sec": 0, 00:09:40.696 "rw_mbytes_per_sec": 0, 00:09:40.696 "r_mbytes_per_sec": 0, 00:09:40.696 "w_mbytes_per_sec": 0 00:09:40.696 }, 00:09:40.696 "claimed": false, 00:09:40.696 "zoned": false, 00:09:40.696 "supported_io_types": { 00:09:40.696 "read": true, 00:09:40.696 "write": true, 00:09:40.696 
"unmap": true, 00:09:40.696 "write_zeroes": true, 00:09:40.696 "flush": true, 00:09:40.696 "reset": true, 00:09:40.696 "compare": false, 00:09:40.696 "compare_and_write": false, 00:09:40.696 "abort": true, 00:09:40.696 "nvme_admin": false, 00:09:40.696 "nvme_io": false 00:09:40.696 }, 00:09:40.696 "memory_domains": [ 00:09:40.696 { 00:09:40.696 "dma_device_id": "system", 00:09:40.696 "dma_device_type": 1 00:09:40.696 }, 00:09:40.696 { 00:09:40.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.696 "dma_device_type": 2 00:09:40.696 } 00:09:40.696 ], 00:09:40.696 "driver_specific": {} 00:09:40.696 } 00:09:40.696 ]' 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.697 [2024-05-15 02:10:28.618555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:40.697 [2024-05-15 02:10:28.618620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.697 [2024-05-15 02:10:28.619186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2cca00 00:09:40.697 [2024-05-15 02:10:28.619210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.697 [2024-05-15 02:10:28.619909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.697 [2024-05-15 02:10:28.619941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:40.697 Passthru0 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:40.697 { 00:09:40.697 "name": "Malloc0", 00:09:40.697 "aliases": [ 00:09:40.697 "4cc30aa7-1260-11ef-99fd-bfc7c66e2865" 00:09:40.697 ], 00:09:40.697 "product_name": "Malloc disk", 00:09:40.697 "block_size": 512, 00:09:40.697 "num_blocks": 16384, 00:09:40.697 "uuid": "4cc30aa7-1260-11ef-99fd-bfc7c66e2865", 00:09:40.697 "assigned_rate_limits": { 00:09:40.697 "rw_ios_per_sec": 0, 00:09:40.697 "rw_mbytes_per_sec": 0, 00:09:40.697 "r_mbytes_per_sec": 0, 00:09:40.697 "w_mbytes_per_sec": 0 00:09:40.697 }, 00:09:40.697 "claimed": true, 00:09:40.697 "claim_type": "exclusive_write", 00:09:40.697 "zoned": false, 00:09:40.697 "supported_io_types": { 00:09:40.697 "read": true, 00:09:40.697 "write": true, 00:09:40.697 "unmap": true, 00:09:40.697 "write_zeroes": true, 00:09:40.697 "flush": true, 00:09:40.697 "reset": true, 00:09:40.697 "compare": false, 00:09:40.697 "compare_and_write": false, 00:09:40.697 "abort": true, 00:09:40.697 "nvme_admin": false, 00:09:40.697 "nvme_io": false 00:09:40.697 }, 00:09:40.697 "memory_domains": [ 00:09:40.697 { 00:09:40.697 "dma_device_id": "system", 00:09:40.697 "dma_device_type": 1 00:09:40.697 }, 00:09:40.697 { 00:09:40.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:40.697 "dma_device_type": 2 00:09:40.697 } 00:09:40.697 ], 00:09:40.697 "driver_specific": {} 00:09:40.697 }, 00:09:40.697 { 00:09:40.697 "name": "Passthru0", 00:09:40.697 "aliases": [ 00:09:40.697 "0eedaba2-5a25-de5c-987a-1b9ef266a18d" 00:09:40.697 ], 00:09:40.697 "product_name": "passthru", 00:09:40.697 "block_size": 512, 00:09:40.697 "num_blocks": 16384, 00:09:40.697 "uuid": "0eedaba2-5a25-de5c-987a-1b9ef266a18d", 00:09:40.697 "assigned_rate_limits": { 00:09:40.697 "rw_ios_per_sec": 0, 00:09:40.697 "rw_mbytes_per_sec": 0, 00:09:40.697 "r_mbytes_per_sec": 0, 00:09:40.697 "w_mbytes_per_sec": 0 00:09:40.697 }, 00:09:40.697 "claimed": false, 00:09:40.697 "zoned": false, 00:09:40.697 "supported_io_types": { 00:09:40.697 "read": true, 00:09:40.697 "write": true, 00:09:40.697 "unmap": true, 00:09:40.697 "write_zeroes": true, 00:09:40.697 "flush": true, 00:09:40.697 "reset": true, 00:09:40.697 "compare": false, 00:09:40.697 "compare_and_write": false, 00:09:40.697 "abort": true, 00:09:40.697 "nvme_admin": false, 00:09:40.697 "nvme_io": false 00:09:40.697 }, 00:09:40.697 "memory_domains": [ 00:09:40.697 { 00:09:40.697 "dma_device_id": "system", 00:09:40.697 "dma_device_type": 1 00:09:40.697 }, 00:09:40.697 { 00:09:40.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.697 "dma_device_type": 2 00:09:40.697 } 00:09:40.697 ], 00:09:40.697 "driver_specific": { 00:09:40.697 "passthru": { 00:09:40.697 "name": "Passthru0", 00:09:40.697 "base_bdev_name": "Malloc0" 00:09:40.697 } 00:09:40.697 } 00:09:40.697 } 00:09:40.697 ]' 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.697 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:40.697 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:40.956 02:10:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:40.956 00:09:40.956 real 0m0.153s 00:09:40.956 user 0m0.035s 00:09:40.956 sys 0m0.046s 00:09:40.956 ************************************ 00:09:40.956 END TEST rpc_integrity 00:09:40.956 ************************************ 00:09:40.956 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:40.956 02:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.956 02:10:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:40.956 02:10:28 rpc -- common/autotest_common.sh@1097 -- 
# '[' 2 -le 1 ']' 00:09:40.956 02:10:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:40.956 02:10:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.956 ************************************ 00:09:40.956 START TEST rpc_plugins 00:09:40.956 ************************************ 00:09:40.956 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:09:40.956 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:40.956 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.956 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:40.956 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.956 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:40.956 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:40.956 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.956 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:40.956 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.956 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:40.956 { 00:09:40.956 "name": "Malloc1", 00:09:40.956 "aliases": [ 00:09:40.956 "4cdd48c4-1260-11ef-99fd-bfc7c66e2865" 00:09:40.956 ], 00:09:40.956 "product_name": "Malloc disk", 00:09:40.956 "block_size": 4096, 00:09:40.956 "num_blocks": 256, 00:09:40.956 "uuid": "4cdd48c4-1260-11ef-99fd-bfc7c66e2865", 00:09:40.956 "assigned_rate_limits": { 00:09:40.956 "rw_ios_per_sec": 0, 00:09:40.956 "rw_mbytes_per_sec": 0, 00:09:40.956 "r_mbytes_per_sec": 0, 00:09:40.956 "w_mbytes_per_sec": 0 00:09:40.956 }, 00:09:40.956 "claimed": false, 00:09:40.956 "zoned": false, 00:09:40.956 "supported_io_types": { 00:09:40.956 "read": true, 00:09:40.956 "write": true, 00:09:40.956 "unmap": true, 00:09:40.956 "write_zeroes": true, 00:09:40.956 "flush": true, 00:09:40.956 "reset": true, 00:09:40.956 "compare": false, 00:09:40.956 "compare_and_write": false, 00:09:40.956 "abort": true, 00:09:40.956 "nvme_admin": false, 00:09:40.956 "nvme_io": false 00:09:40.956 }, 00:09:40.956 "memory_domains": [ 00:09:40.956 { 00:09:40.956 "dma_device_id": "system", 00:09:40.956 "dma_device_type": 1 00:09:40.956 }, 00:09:40.956 { 00:09:40.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.956 "dma_device_type": 2 00:09:40.956 } 00:09:40.957 ], 00:09:40.957 "driver_specific": {} 00:09:40.957 } 00:09:40.957 ]' 00:09:40.957 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:40.957 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:40.957 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.957 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.957 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:40.957 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 
00:09:40.957 02:10:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:40.957 00:09:40.957 real 0m0.072s 00:09:40.957 user 0m0.025s 00:09:40.957 sys 0m0.012s 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:40.957 02:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:40.957 ************************************ 00:09:40.957 END TEST rpc_plugins 00:09:40.957 ************************************ 00:09:40.957 02:10:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:40.957 02:10:28 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:40.957 02:10:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:40.957 02:10:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.957 ************************************ 00:09:40.957 START TEST rpc_trace_cmd_test 00:09:40.957 ************************************ 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:40.957 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45837", 00:09:40.957 "tpoint_group_mask": "0x8", 00:09:40.957 "iscsi_conn": { 00:09:40.957 "mask": "0x2", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "scsi": { 00:09:40.957 "mask": "0x4", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "bdev": { 00:09:40.957 "mask": "0x8", 00:09:40.957 "tpoint_mask": "0xffffffffffffffff" 00:09:40.957 }, 00:09:40.957 "nvmf_rdma": { 00:09:40.957 "mask": "0x10", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "nvmf_tcp": { 00:09:40.957 "mask": "0x20", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "blobfs": { 00:09:40.957 "mask": "0x80", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "dsa": { 00:09:40.957 "mask": "0x200", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "thread": { 00:09:40.957 "mask": "0x400", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "nvme_pcie": { 00:09:40.957 "mask": "0x800", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "iaa": { 00:09:40.957 "mask": "0x1000", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "nvme_tcp": { 00:09:40.957 "mask": "0x2000", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "bdev_nvme": { 00:09:40.957 "mask": "0x4000", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 }, 00:09:40.957 "sock": { 00:09:40.957 "mask": "0x8000", 00:09:40.957 "tpoint_mask": "0x0" 00:09:40.957 } 00:09:40.957 }' 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:40.957 02:10:28 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:40.957 00:09:40.957 real 0m0.061s 00:09:40.957 user 0m0.016s 00:09:40.957 sys 0m0.034s 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:40.957 02:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.957 ************************************ 00:09:40.957 END TEST rpc_trace_cmd_test 00:09:40.957 ************************************ 00:09:40.957 02:10:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:40.957 02:10:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:40.957 02:10:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:40.957 02:10:28 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:40.957 02:10:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:40.957 02:10:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.957 ************************************ 00:09:40.957 START TEST rpc_daemon_integrity 00:09:40.957 ************************************ 00:09:40.957 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:09:40.957 02:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:40.957 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.957 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.272 02:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:41.272 { 00:09:41.272 "name": "Malloc2", 00:09:41.272 "aliases": [ 00:09:41.272 "4d001346-1260-11ef-99fd-bfc7c66e2865" 00:09:41.272 ], 00:09:41.272 "product_name": "Malloc disk", 00:09:41.272 "block_size": 512, 00:09:41.272 "num_blocks": 16384, 00:09:41.272 "uuid": "4d001346-1260-11ef-99fd-bfc7c66e2865", 00:09:41.272 "assigned_rate_limits": { 00:09:41.272 "rw_ios_per_sec": 0, 00:09:41.272 "rw_mbytes_per_sec": 0, 00:09:41.272 "r_mbytes_per_sec": 
0, 00:09:41.272 "w_mbytes_per_sec": 0 00:09:41.272 }, 00:09:41.272 "claimed": false, 00:09:41.272 "zoned": false, 00:09:41.272 "supported_io_types": { 00:09:41.272 "read": true, 00:09:41.272 "write": true, 00:09:41.272 "unmap": true, 00:09:41.272 "write_zeroes": true, 00:09:41.272 "flush": true, 00:09:41.272 "reset": true, 00:09:41.272 "compare": false, 00:09:41.272 "compare_and_write": false, 00:09:41.272 "abort": true, 00:09:41.272 "nvme_admin": false, 00:09:41.272 "nvme_io": false 00:09:41.272 }, 00:09:41.272 "memory_domains": [ 00:09:41.272 { 00:09:41.272 "dma_device_id": "system", 00:09:41.272 "dma_device_type": 1 00:09:41.272 }, 00:09:41.272 { 00:09:41.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.272 "dma_device_type": 2 00:09:41.272 } 00:09:41.272 ], 00:09:41.272 "driver_specific": {} 00:09:41.272 } 00:09:41.272 ]' 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.272 [2024-05-15 02:10:29.014571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:41.272 [2024-05-15 02:10:29.014610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.272 [2024-05-15 02:10:29.014633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2cca00 00:09:41.272 [2024-05-15 02:10:29.014656] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.272 [2024-05-15 02:10:29.015078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.272 [2024-05-15 02:10:29.015099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:41.272 Passthru0 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.272 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:41.272 { 00:09:41.272 "name": "Malloc2", 00:09:41.272 "aliases": [ 00:09:41.273 "4d001346-1260-11ef-99fd-bfc7c66e2865" 00:09:41.273 ], 00:09:41.273 "product_name": "Malloc disk", 00:09:41.273 "block_size": 512, 00:09:41.273 "num_blocks": 16384, 00:09:41.273 "uuid": "4d001346-1260-11ef-99fd-bfc7c66e2865", 00:09:41.273 "assigned_rate_limits": { 00:09:41.273 "rw_ios_per_sec": 0, 00:09:41.273 "rw_mbytes_per_sec": 0, 00:09:41.273 "r_mbytes_per_sec": 0, 00:09:41.273 "w_mbytes_per_sec": 0 00:09:41.273 }, 00:09:41.273 "claimed": true, 00:09:41.273 "claim_type": "exclusive_write", 00:09:41.273 "zoned": false, 00:09:41.273 "supported_io_types": { 00:09:41.273 "read": true, 00:09:41.273 "write": true, 00:09:41.273 "unmap": true, 00:09:41.273 "write_zeroes": true, 00:09:41.273 "flush": true, 00:09:41.273 "reset": true, 00:09:41.273 "compare": false, 00:09:41.273 "compare_and_write": false, 00:09:41.273 "abort": true, 
00:09:41.273 "nvme_admin": false, 00:09:41.273 "nvme_io": false 00:09:41.273 }, 00:09:41.273 "memory_domains": [ 00:09:41.273 { 00:09:41.273 "dma_device_id": "system", 00:09:41.273 "dma_device_type": 1 00:09:41.273 }, 00:09:41.273 { 00:09:41.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.273 "dma_device_type": 2 00:09:41.273 } 00:09:41.273 ], 00:09:41.273 "driver_specific": {} 00:09:41.273 }, 00:09:41.273 { 00:09:41.273 "name": "Passthru0", 00:09:41.273 "aliases": [ 00:09:41.273 "f39d6cc2-c5ac-3a58-9448-9a6cda591f0e" 00:09:41.273 ], 00:09:41.273 "product_name": "passthru", 00:09:41.273 "block_size": 512, 00:09:41.273 "num_blocks": 16384, 00:09:41.273 "uuid": "f39d6cc2-c5ac-3a58-9448-9a6cda591f0e", 00:09:41.273 "assigned_rate_limits": { 00:09:41.273 "rw_ios_per_sec": 0, 00:09:41.273 "rw_mbytes_per_sec": 0, 00:09:41.273 "r_mbytes_per_sec": 0, 00:09:41.273 "w_mbytes_per_sec": 0 00:09:41.273 }, 00:09:41.273 "claimed": false, 00:09:41.273 "zoned": false, 00:09:41.273 "supported_io_types": { 00:09:41.273 "read": true, 00:09:41.273 "write": true, 00:09:41.273 "unmap": true, 00:09:41.273 "write_zeroes": true, 00:09:41.273 "flush": true, 00:09:41.273 "reset": true, 00:09:41.273 "compare": false, 00:09:41.273 "compare_and_write": false, 00:09:41.273 "abort": true, 00:09:41.273 "nvme_admin": false, 00:09:41.273 "nvme_io": false 00:09:41.273 }, 00:09:41.273 "memory_domains": [ 00:09:41.273 { 00:09:41.273 "dma_device_id": "system", 00:09:41.273 "dma_device_type": 1 00:09:41.273 }, 00:09:41.273 { 00:09:41.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.273 "dma_device_type": 2 00:09:41.273 } 00:09:41.273 ], 00:09:41.273 "driver_specific": { 00:09:41.273 "passthru": { 00:09:41.273 "name": "Passthru0", 00:09:41.273 "base_bdev_name": "Malloc2" 00:09:41.273 } 00:09:41.273 } 00:09:41.273 } 00:09:41.273 ]' 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:41.273 00:09:41.273 real 0m0.134s 00:09:41.273 user 0m0.051s 00:09:41.273 sys 0m0.027s 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.273 ************************************ 00:09:41.273 END TEST rpc_daemon_integrity 00:09:41.273 ************************************ 00:09:41.273 02:10:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 02:10:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:41.273 02:10:29 rpc -- rpc/rpc.sh@84 -- # killprocess 45837 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@946 -- # '[' -z 45837 ']' 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@950 -- # kill -0 45837 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@951 -- # uname 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@954 -- # ps -c -o command 45837 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@954 -- # tail -1 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:09:41.273 killing process with pid 45837 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 45837' 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@965 -- # kill 45837 00:09:41.273 02:10:29 rpc -- common/autotest_common.sh@970 -- # wait 45837 00:09:41.530 00:09:41.530 real 0m2.123s 00:09:41.530 user 0m2.278s 00:09:41.530 sys 0m0.898s 00:09:41.530 ************************************ 00:09:41.530 END TEST rpc 00:09:41.530 ************************************ 00:09:41.530 02:10:29 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.530 02:10:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.530 02:10:29 -- spdk/autotest.sh@166 -- # run_test skip_rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:41.530 02:10:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:41.530 02:10:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.530 02:10:29 -- common/autotest_common.sh@10 -- # set +x 00:09:41.530 ************************************ 00:09:41.530 START TEST skip_rpc 00:09:41.530 ************************************ 00:09:41.530 02:10:29 skip_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:41.788 * Looking for test storage... 
00:09:41.788 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:09:41.788 02:10:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:41.788 02:10:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:41.788 02:10:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:41.788 02:10:29 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:41.788 02:10:29 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.788 02:10:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.788 ************************************ 00:09:41.788 START TEST skip_rpc 00:09:41.788 ************************************ 00:09:41.788 02:10:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:09:41.788 02:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=46013 00:09:41.788 02:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:41.788 02:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:41.788 02:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:41.788 [2024-05-15 02:10:29.611638] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:09:41.788 [2024-05-15 02:10:29.611978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:42.353 EAL: TSC is not safe to use in SMP mode 00:09:42.353 EAL: TSC is not invariant 00:09:42.353 [2024-05-15 02:10:30.058942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.353 [2024-05-15 02:10:30.155232] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:42.353 [2024-05-15 02:10:30.157921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 46013 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 46013 ']' 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 46013 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 46013 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # tail -1 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:09:47.630 killing process with pid 46013 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46013' 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 46013 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 46013 00:09:47.630 00:09:47.630 real 0m5.261s 00:09:47.630 user 0m4.804s 00:09:47.630 sys 0m0.480s 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:47.630 02:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.630 ************************************ 00:09:47.630 END TEST skip_rpc 00:09:47.630 ************************************ 00:09:47.630 02:10:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:47.630 02:10:34 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:47.630 02:10:34 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:47.630 02:10:34 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.630 ************************************ 00:09:47.630 START TEST skip_rpc_with_json 00:09:47.630 ************************************ 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=46058 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 46058 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 46058 ']' 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:47.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:47.630 02:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:47.630 [2024-05-15 02:10:34.919068] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:09:47.630 [2024-05-15 02:10:34.919369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:47.630 EAL: TSC is not safe to use in SMP mode 00:09:47.630 EAL: TSC is not invariant 00:09:47.630 [2024-05-15 02:10:35.470374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.630 [2024-05-15 02:10:35.564893] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:47.630 [2024-05-15 02:10:35.567608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.890 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:47.890 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:09:47.890 02:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:47.890 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.890 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:47.890 [2024-05-15 02:10:35.888864] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:48.148 request: 00:09:48.148 { 00:09:48.148 "trtype": "tcp", 00:09:48.148 "method": "nvmf_get_transports", 00:09:48.148 "req_id": 1 00:09:48.148 } 00:09:48.148 Got JSON-RPC error response 00:09:48.148 response: 00:09:48.148 { 00:09:48.148 "code": -19, 00:09:48.148 "message": "Operation not supported by device" 00:09:48.148 } 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:48.148 [2024-05-15 02:10:35.900874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.148 02:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:48.148 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.148 02:10:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:48.148 { 00:09:48.148 "subsystems": [ 00:09:48.148 { 00:09:48.148 "subsystem": "vmd", 00:09:48.148 "config": [] 00:09:48.148 }, 00:09:48.148 { 00:09:48.149 "subsystem": "iobuf", 00:09:48.149 "config": [ 00:09:48.149 { 00:09:48.149 "method": "iobuf_set_options", 00:09:48.149 "params": { 00:09:48.149 "small_pool_count": 8192, 00:09:48.149 "large_pool_count": 1024, 00:09:48.149 "small_bufsize": 8192, 00:09:48.149 "large_bufsize": 135168 00:09:48.149 } 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "scheduler", 00:09:48.149 "config": [ 00:09:48.149 { 00:09:48.149 "method": "framework_set_scheduler", 00:09:48.149 "params": { 00:09:48.149 "name": "static" 00:09:48.149 } 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "sock", 00:09:48.149 "config": [ 00:09:48.149 { 00:09:48.149 "method": "sock_impl_set_options", 00:09:48.149 "params": { 00:09:48.149 "impl_name": "posix", 00:09:48.149 "recv_buf_size": 2097152, 00:09:48.149 "send_buf_size": 2097152, 00:09:48.149 "enable_recv_pipe": true, 00:09:48.149 "enable_quickack": false, 00:09:48.149 "enable_placement_id": 0, 00:09:48.149 "enable_zerocopy_send_server": true, 00:09:48.149 "enable_zerocopy_send_client": false, 00:09:48.149 "zerocopy_threshold": 0, 00:09:48.149 "tls_version": 0, 
00:09:48.149 "enable_ktls": false 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "sock_impl_set_options", 00:09:48.149 "params": { 00:09:48.149 "impl_name": "ssl", 00:09:48.149 "recv_buf_size": 4096, 00:09:48.149 "send_buf_size": 4096, 00:09:48.149 "enable_recv_pipe": true, 00:09:48.149 "enable_quickack": false, 00:09:48.149 "enable_placement_id": 0, 00:09:48.149 "enable_zerocopy_send_server": true, 00:09:48.149 "enable_zerocopy_send_client": false, 00:09:48.149 "zerocopy_threshold": 0, 00:09:48.149 "tls_version": 0, 00:09:48.149 "enable_ktls": false 00:09:48.149 } 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "keyring", 00:09:48.149 "config": [] 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "accel", 00:09:48.149 "config": [ 00:09:48.149 { 00:09:48.149 "method": "accel_set_options", 00:09:48.149 "params": { 00:09:48.149 "small_cache_size": 128, 00:09:48.149 "large_cache_size": 16, 00:09:48.149 "task_count": 2048, 00:09:48.149 "sequence_count": 2048, 00:09:48.149 "buf_count": 2048 00:09:48.149 } 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "bdev", 00:09:48.149 "config": [ 00:09:48.149 { 00:09:48.149 "method": "bdev_set_options", 00:09:48.149 "params": { 00:09:48.149 "bdev_io_pool_size": 65535, 00:09:48.149 "bdev_io_cache_size": 256, 00:09:48.149 "bdev_auto_examine": true, 00:09:48.149 "iobuf_small_cache_size": 128, 00:09:48.149 "iobuf_large_cache_size": 16 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "bdev_raid_set_options", 00:09:48.149 "params": { 00:09:48.149 "process_window_size_kb": 1024 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "bdev_nvme_set_options", 00:09:48.149 "params": { 00:09:48.149 "action_on_timeout": "none", 00:09:48.149 "timeout_us": 0, 00:09:48.149 "timeout_admin_us": 0, 00:09:48.149 "keep_alive_timeout_ms": 10000, 00:09:48.149 "arbitration_burst": 0, 00:09:48.149 "low_priority_weight": 0, 00:09:48.149 "medium_priority_weight": 0, 00:09:48.149 "high_priority_weight": 0, 00:09:48.149 "nvme_adminq_poll_period_us": 10000, 00:09:48.149 "nvme_ioq_poll_period_us": 0, 00:09:48.149 "io_queue_requests": 0, 00:09:48.149 "delay_cmd_submit": true, 00:09:48.149 "transport_retry_count": 4, 00:09:48.149 "bdev_retry_count": 3, 00:09:48.149 "transport_ack_timeout": 0, 00:09:48.149 "ctrlr_loss_timeout_sec": 0, 00:09:48.149 "reconnect_delay_sec": 0, 00:09:48.149 "fast_io_fail_timeout_sec": 0, 00:09:48.149 "disable_auto_failback": false, 00:09:48.149 "generate_uuids": false, 00:09:48.149 "transport_tos": 0, 00:09:48.149 "nvme_error_stat": false, 00:09:48.149 "rdma_srq_size": 0, 00:09:48.149 "io_path_stat": false, 00:09:48.149 "allow_accel_sequence": false, 00:09:48.149 "rdma_max_cq_size": 0, 00:09:48.149 "rdma_cm_event_timeout_ms": 0, 00:09:48.149 "dhchap_digests": [ 00:09:48.149 "sha256", 00:09:48.149 "sha384", 00:09:48.149 "sha512" 00:09:48.149 ], 00:09:48.149 "dhchap_dhgroups": [ 00:09:48.149 "null", 00:09:48.149 "ffdhe2048", 00:09:48.149 "ffdhe3072", 00:09:48.149 "ffdhe4096", 00:09:48.149 "ffdhe6144", 00:09:48.149 "ffdhe8192" 00:09:48.149 ] 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "bdev_nvme_set_hotplug", 00:09:48.149 "params": { 00:09:48.149 "period_us": 100000, 00:09:48.149 "enable": false 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "bdev_wait_for_examine" 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "scsi", 00:09:48.149 
"config": null 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "nvmf", 00:09:48.149 "config": [ 00:09:48.149 { 00:09:48.149 "method": "nvmf_set_config", 00:09:48.149 "params": { 00:09:48.149 "discovery_filter": "match_any", 00:09:48.149 "admin_cmd_passthru": { 00:09:48.149 "identify_ctrlr": false 00:09:48.149 } 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "nvmf_set_max_subsystems", 00:09:48.149 "params": { 00:09:48.149 "max_subsystems": 1024 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "nvmf_set_crdt", 00:09:48.149 "params": { 00:09:48.149 "crdt1": 0, 00:09:48.149 "crdt2": 0, 00:09:48.149 "crdt3": 0 00:09:48.149 } 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "method": "nvmf_create_transport", 00:09:48.149 "params": { 00:09:48.149 "trtype": "TCP", 00:09:48.149 "max_queue_depth": 128, 00:09:48.149 "max_io_qpairs_per_ctrlr": 127, 00:09:48.149 "in_capsule_data_size": 4096, 00:09:48.149 "max_io_size": 131072, 00:09:48.149 "io_unit_size": 131072, 00:09:48.149 "max_aq_depth": 128, 00:09:48.149 "num_shared_buffers": 511, 00:09:48.149 "buf_cache_size": 4294967295, 00:09:48.149 "dif_insert_or_strip": false, 00:09:48.149 "zcopy": false, 00:09:48.149 "c2h_success": true, 00:09:48.149 "sock_priority": 0, 00:09:48.149 "abort_timeout_sec": 1, 00:09:48.149 "ack_timeout": 0, 00:09:48.149 "data_wr_pool_size": 0 00:09:48.149 } 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 }, 00:09:48.149 { 00:09:48.149 "subsystem": "iscsi", 00:09:48.149 "config": [ 00:09:48.149 { 00:09:48.149 "method": "iscsi_set_options", 00:09:48.149 "params": { 00:09:48.149 "node_base": "iqn.2016-06.io.spdk", 00:09:48.149 "max_sessions": 128, 00:09:48.149 "max_connections_per_session": 2, 00:09:48.149 "max_queue_depth": 64, 00:09:48.149 "default_time2wait": 2, 00:09:48.149 "default_time2retain": 20, 00:09:48.149 "first_burst_length": 8192, 00:09:48.149 "immediate_data": true, 00:09:48.149 "allow_duplicated_isid": false, 00:09:48.149 "error_recovery_level": 0, 00:09:48.149 "nop_timeout": 60, 00:09:48.149 "nop_in_interval": 30, 00:09:48.149 "disable_chap": false, 00:09:48.149 "require_chap": false, 00:09:48.149 "mutual_chap": false, 00:09:48.149 "chap_group": 0, 00:09:48.149 "max_large_datain_per_connection": 64, 00:09:48.149 "max_r2t_per_connection": 4, 00:09:48.149 "pdu_pool_size": 36864, 00:09:48.149 "immediate_data_pool_size": 16384, 00:09:48.149 "data_out_pool_size": 2048 00:09:48.149 } 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 } 00:09:48.149 ] 00:09:48.149 } 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 46058 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46058 ']' 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46058 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # tail -1 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps -c -o command 46058 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:09:48.149 killing process with pid 46058 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46058' 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46058 00:09:48.149 02:10:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46058 00:09:48.409 02:10:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=46072 00:09:48.409 02:10:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:48.409 02:10:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 46072 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46072 ']' 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46072 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps -c -o command 46072 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # tail -1 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:09:53.733 killing process with pid 46072 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46072' 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46072 00:09:53.733 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46072 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:53.992 00:09:53.992 real 0m6.890s 00:09:53.992 user 0m6.227s 00:09:53.992 sys 0m1.183s 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:53.992 ************************************ 00:09:53.992 END TEST skip_rpc_with_json 00:09:53.992 ************************************ 00:09:53.992 02:10:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:53.992 02:10:41 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:53.992 02:10:41 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.992 02:10:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.992 ************************************ 00:09:53.992 START TEST skip_rpc_with_delay 00:09:53.992 ************************************ 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.992 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:53.993 [2024-05-15 02:10:41.859626] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:53.993 [2024-05-15 02:10:41.859853] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:53.993 00:09:53.993 real 0m0.013s 00:09:53.993 user 0m0.002s 00:09:53.993 sys 0m0.015s 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.993 02:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 ************************************ 00:09:53.993 END TEST skip_rpc_with_delay 00:09:53.993 ************************************ 00:09:53.993 02:10:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:53.993 02:10:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:09:53.993 02:10:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:53.993 00:09:53.993 real 0m12.513s 00:09:53.993 user 0m11.166s 00:09:53.993 sys 0m1.958s 00:09:53.993 02:10:41 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.993 02:10:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 ************************************ 00:09:53.993 END TEST skip_rpc 00:09:53.993 ************************************ 00:09:53.993 02:10:41 -- spdk/autotest.sh@167 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:53.993 02:10:41 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:53.993 02:10:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.993 02:10:41 -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 ************************************ 00:09:53.993 START TEST rpc_client 00:09:53.993 ************************************ 00:09:53.993 02:10:41 rpc_client -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:54.251 * Looking for test storage... 00:09:54.251 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:54.251 02:10:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:54.251 OK 00:09:54.251 02:10:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:54.251 00:09:54.251 real 0m0.182s 00:09:54.251 user 0m0.138s 00:09:54.251 sys 0m0.125s 00:09:54.251 02:10:42 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:54.251 02:10:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:54.251 ************************************ 00:09:54.251 END TEST rpc_client 00:09:54.251 ************************************ 00:09:54.251 02:10:42 -- spdk/autotest.sh@168 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:54.251 02:10:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:54.251 02:10:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:54.251 02:10:42 -- common/autotest_common.sh@10 -- # set +x 00:09:54.251 ************************************ 00:09:54.251 START TEST json_config 00:09:54.251 ************************************ 00:09:54.251 02:10:42 json_config -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.510 02:10:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:54.510 02:10:42 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:09:54.510 02:10:42 json_config -- nvmf/common.sh@7 -- # return 0 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:54.510 02:10:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:54.511 02:10:42 json_config -- 
json_config/json_config.sh@34 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:54.511 INFO: JSON configuration test init 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 02:10:42 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:54.511 02:10:42 json_config -- json_config/common.sh@9 -- # local app=target 00:09:54.511 02:10:42 json_config -- json_config/common.sh@10 -- # shift 00:09:54.511 02:10:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:54.511 02:10:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:54.511 02:10:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:54.511 02:10:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:54.511 02:10:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:54.511 02:10:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46231 00:09:54.511 02:10:42 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:54.511 Waiting for target to run... 00:09:54.511 02:10:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:54.511 02:10:42 json_config -- json_config/common.sh@25 -- # waitforlisten 46231 /var/tmp/spdk_tgt.sock 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@827 -- # '[' -z 46231 ']' 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:54.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:54.511 02:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 [2024-05-15 02:10:42.336991] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:09:54.511 [2024-05-15 02:10:42.337207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:54.770 EAL: TSC is not safe to use in SMP mode 00:09:54.770 EAL: TSC is not invariant 00:09:54.770 [2024-05-15 02:10:42.564184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.770 [2024-05-15 02:10:42.643150] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:54.770 [2024-05-15 02:10:42.645349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.707 02:10:43 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:55.707 02:10:43 json_config -- common/autotest_common.sh@860 -- # return 0 00:09:55.707 00:09:55.707 02:10:43 json_config -- json_config/common.sh@26 -- # echo '' 00:09:55.707 02:10:43 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:55.707 02:10:43 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:55.707 02:10:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:55.707 02:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.707 02:10:43 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:55.707 02:10:43 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:55.707 02:10:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.707 02:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.707 02:10:43 json_config -- json_config/json_config.sh@273 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:55.707 02:10:43 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:55.707 02:10:43 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:55.965 [2024-05-15 02:10:43.805174] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:09:55.965 02:10:43 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:55.965 02:10:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:55.965 02:10:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:55.965 02:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.965 02:10:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:55.965 02:10:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:55.965 02:10:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:55.965 02:10:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:55.965 02:10:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:55.965 02:10:43 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:56.224 02:10:44 json_config -- 
json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:56.224 02:10:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.224 02:10:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:56.224 02:10:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:56.224 02:10:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:56.224 02:10:44 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:56.224 02:10:44 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:56.484 02:10:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:56.484 02:10:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.484 02:10:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.484 02:10:44 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:56.484 02:10:44 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:56.484 02:10:44 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:56.484 02:10:44 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:56.742 Nvme0n1p0 Nvme0n1p1 00:09:56.742 02:10:44 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:56.742 02:10:44 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:57.001 [2024-05-15 02:10:44.929068] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:57.001 [2024-05-15 02:10:44.929122] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:57.001 00:09:57.001 02:10:44 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:57.001 02:10:44 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:57.260 Malloc3 00:09:57.260 
02:10:45 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:57.260 02:10:45 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:57.518 [2024-05-15 02:10:45.517107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:57.518 [2024-05-15 02:10:45.517169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.518 [2024-05-15 02:10:45.517212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b56b180 00:09:57.518 [2024-05-15 02:10:45.517220] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.518 [2024-05-15 02:10:45.517724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.518 [2024-05-15 02:10:45.517753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:57.777 PTBdevFromMalloc3 00:09:57.777 02:10:45 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:57.777 02:10:45 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:58.036 Null0 00:09:58.036 02:10:45 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:58.036 02:10:45 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:58.294 Malloc0 00:09:58.294 02:10:46 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:58.294 02:10:46 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:58.551 Malloc1 00:09:58.551 02:10:46 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:58.551 02:10:46 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:58.847 102400+0 records in 00:09:58.847 102400+0 records out 00:09:58.847 104857600 bytes transferred in 0.341008 secs (307492701 bytes/sec) 00:09:58.847 02:10:46 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:58.847 02:10:46 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:59.106 aio_disk 00:09:59.106 02:10:46 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:59.106 02:10:46 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:59.106 02:10:46 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:59.365 57e83af6-1260-11ef-99fd-bfc7c66e2865 00:09:59.365 02:10:47 json_config -- 
json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:59.365 02:10:47 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:59.365 02:10:47 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:59.624 02:10:47 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:59.624 02:10:47 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:59.883 02:10:47 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:59.883 02:10:47 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:00.451 02:10:48 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:00.451 02:10:48 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:580eb00c-1260-11ef-99fd-bfc7c66e2865 bdev_register:583d13c4-1260-11ef-99fd-bfc7c66e2865 bdev_register:586a3f1a-1260-11ef-99fd-bfc7c66e2865 bdev_register:589bb0d9-1260-11ef-99fd-bfc7c66e2865 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:580eb00c-1260-11ef-99fd-bfc7c66e2865 bdev_register:583d13c4-1260-11ef-99fd-bfc7c66e2865 bdev_register:586a3f1a-1260-11ef-99fd-bfc7c66e2865 bdev_register:589bb0d9-1260-11ef-99fd-bfc7c66e2865 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@71 -- # sort 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@72 -- # sort 
00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:00.710 02:10:48 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:00.710 02:10:48 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:10:00.969 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:580eb00c-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:583d13c4-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:586a3f1a-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:589bb0d9-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:580eb00c-1260-11ef-99fd-bfc7c66e2865 bdev_register:583d13c4-1260-11ef-99fd-bfc7c66e2865 bdev_register:586a3f1a-1260-11ef-99fd-bfc7c66e2865 bdev_register:589bb0d9-1260-11ef-99fd-bfc7c66e2865 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\8\0\e\b\0\0\c\-\1\2\6\0\-\1\1\e\f\-\9\9\f\d\-\b\f\c\7\c\6\6\e\2\8\6\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\8\3\d\1\3\c\4\-\1\2\6\0\-\1\1\e\f\-\9\9\f\d\-\b\f\c\7\c\6\6\e\2\8\6\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\8\6\a\3\f\1\a\-\1\2\6\0\-\1\1\e\f\-\9\9\f\d\-\b\f\c\7\c\6\6\e\2\8\6\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\8\9\b\b\0\d\9\-\1\2\6\0\-\1\1\e\f\-\9\9\f\d\-\b\f\c\7\c\6\6\e\2\8\6\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@86 -- # cat 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:580eb00c-1260-11ef-99fd-bfc7c66e2865 
bdev_register:583d13c4-1260-11ef-99fd-bfc7c66e2865 bdev_register:586a3f1a-1260-11ef-99fd-bfc7c66e2865 bdev_register:589bb0d9-1260-11ef-99fd-bfc7c66e2865 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:10:00.970 Expected events matched: 00:10:00.970 bdev_register:580eb00c-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 bdev_register:583d13c4-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 bdev_register:586a3f1a-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 bdev_register:589bb0d9-1260-11ef-99fd-bfc7c66e2865 00:10:00.970 bdev_register:Malloc0 00:10:00.970 bdev_register:Malloc0p0 00:10:00.970 bdev_register:Malloc0p1 00:10:00.970 bdev_register:Malloc0p2 00:10:00.970 bdev_register:Malloc1 00:10:00.970 bdev_register:Malloc3 00:10:00.970 bdev_register:Null0 00:10:00.970 bdev_register:Nvme0n1 00:10:00.970 bdev_register:Nvme0n1p0 00:10:00.970 bdev_register:Nvme0n1p1 00:10:00.970 bdev_register:PTBdevFromMalloc3 00:10:00.970 bdev_register:aio_disk 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:10:00.970 02:10:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.970 02:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:00.970 02:10:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.970 02:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:00.970 02:10:48 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:00.970 02:10:48 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:01.228 MallocBdevForConfigChangeCheck 00:10:01.228 02:10:49 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:01.228 02:10:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.228 02:10:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.228 02:10:49 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:01.228 02:10:49 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:01.793 INFO: shutting down applications... 00:10:01.793 02:10:49 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
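The block above is the notification check: tgt_check_notifications collects every expected bdev_register event (the split, malloc, passthru, null and aio bdevs plus the four lvol UUIDs), sorts the list, and compares it against what the target actually recorded, ending in the "Expected events matched" summary. A minimal sketch of the same query done by hand over the socket used in this run, with the jq filter copied from the trace:

    # which notification types does the target record? (bdev_register / bdev_unregister here)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
    # dump every notification from id 0 and flatten it to "type:ctx:id", sorted,
    # so it can be compared line by line with the expected list above
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
        | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort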
00:10:01.793 02:10:49 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:01.793 02:10:49 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:01.793 02:10:49 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:01.793 02:10:49 json_config -- json_config/json_config.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:01.793 [2024-05-15 02:10:49.717415] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:02.050 Calling clear_iscsi_subsystem 00:10:02.050 Calling clear_nvmf_subsystem 00:10:02.050 Calling clear_bdev_subsystem 00:10:02.050 02:10:49 json_config -- json_config/json_config.sh@337 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:02.050 02:10:49 json_config -- json_config/json_config.sh@343 -- # count=100 00:10:02.050 02:10:49 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:02.050 02:10:49 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:02.050 02:10:49 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:02.050 02:10:49 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:02.616 02:10:50 json_config -- json_config/json_config.sh@345 -- # break 00:10:02.616 02:10:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:02.616 02:10:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:02.616 02:10:50 json_config -- json_config/common.sh@31 -- # local app=target 00:10:02.616 02:10:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:02.616 02:10:50 json_config -- json_config/common.sh@35 -- # [[ -n 46231 ]] 00:10:02.616 02:10:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 46231 00:10:02.616 02:10:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:02.616 02:10:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:02.616 02:10:50 json_config -- json_config/common.sh@41 -- # kill -0 46231 00:10:02.616 02:10:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:02.874 02:10:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:02.874 02:10:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:02.874 02:10:50 json_config -- json_config/common.sh@41 -- # kill -0 46231 00:10:02.874 02:10:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:02.874 02:10:50 json_config -- json_config/common.sh@43 -- # break 00:10:02.874 02:10:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:02.874 SPDK target shutdown done 00:10:02.874 02:10:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:02.874 INFO: relaunching applications... 00:10:02.874 02:10:50 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
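The lines above are the teardown half of the test: clear_config.py empties the running target, save_config piped through config_filter.py (delete_global_parameters, then check_empty) confirms nothing is left, and the target is stopped with SIGINT and polled with kill -0 until it exits, after which it is relaunched from the saved spdk_tgt_config.json. A compressed sketch of that cycle with the same socket and flags as in this run, where $tgt_pid stands for the target pid (46231 here) and paths are relative to the repository root:

    # wipe the live configuration, then prove the saved config is now empty
    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method delete_global_parameters \
        | test/json_config/config_filter.py -method check_empty
    # stop the target and wait for the pid to disappear before relaunching it
    kill -SIGINT "$tgt_pid"
    while kill -0 "$tgt_pid" 2>/dev/null; do sleep 0.5; done
    # relaunch from the configuration saved earlier in the run
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json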
00:10:02.874 02:10:50 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.874 02:10:50 json_config -- json_config/common.sh@9 -- # local app=target 00:10:02.874 02:10:50 json_config -- json_config/common.sh@10 -- # shift 00:10:02.874 02:10:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:02.874 02:10:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:02.874 02:10:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:02.874 02:10:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:02.874 02:10:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:02.874 02:10:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46421 00:10:02.874 02:10:50 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.874 02:10:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:02.874 Waiting for target to run... 00:10:02.874 02:10:50 json_config -- json_config/common.sh@25 -- # waitforlisten 46421 /var/tmp/spdk_tgt.sock 00:10:02.874 02:10:50 json_config -- common/autotest_common.sh@827 -- # '[' -z 46421 ']' 00:10:02.874 02:10:50 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:02.874 02:10:50 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:02.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:02.874 02:10:50 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:02.874 02:10:50 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:02.874 02:10:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.874 [2024-05-15 02:10:50.858796] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:02.874 [2024-05-15 02:10:50.859036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:03.133 EAL: TSC is not safe to use in SMP mode 00:10:03.133 EAL: TSC is not invariant 00:10:03.133 [2024-05-15 02:10:51.122367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.392 [2024-05-15 02:10:51.257658] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:03.392 [2024-05-15 02:10:51.262097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.650 [2024-05-15 02:10:51.399164] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:03.650 [2024-05-15 02:10:51.399226] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:03.650 [2024-05-15 02:10:51.407146] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:03.650 [2024-05-15 02:10:51.407176] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:03.650 [2024-05-15 02:10:51.415168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:03.650 [2024-05-15 02:10:51.415197] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:03.650 [2024-05-15 02:10:51.415207] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:03.650 [2024-05-15 02:10:51.423175] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:03.650 [2024-05-15 02:10:51.491180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:03.650 [2024-05-15 02:10:51.491249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.650 [2024-05-15 02:10:51.491272] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c67f780 00:10:03.650 [2024-05-15 02:10:51.491283] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.650 [2024-05-15 02:10:51.491359] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.650 [2024-05-15 02:10:51.491371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:04.217 02:10:51 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:04.217 02:10:51 json_config -- common/autotest_common.sh@860 -- # return 0 00:10:04.217 00:10:04.217 02:10:51 json_config -- json_config/common.sh@26 -- # echo '' 00:10:04.217 02:10:51 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:04.217 02:10:51 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:04.217 INFO: Checking if target configuration is the same... 00:10:04.217 02:10:51 json_config -- json_config/json_config.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.bU25SW /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:04.217 + '[' 2 -ne 2 ']' 00:10:04.217 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:04.217 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:04.217 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:10:04.217 +++ basename /tmp//sh-np.bU25SW 00:10:04.217 ++ mktemp /tmp/sh-np.bU25SW.XXX 00:10:04.217 + tmp_file_1=/tmp/sh-np.bU25SW.BoK 00:10:04.217 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:04.217 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:04.217 + tmp_file_2=/tmp/spdk_tgt_config.json.zYz 00:10:04.217 + ret=0 00:10:04.217 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:04.217 02:10:51 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:04.217 02:10:51 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:04.515 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:04.515 + diff -u /tmp/sh-np.bU25SW.BoK /tmp/spdk_tgt_config.json.zYz 00:10:04.515 + echo 'INFO: JSON config files are the same' 00:10:04.515 INFO: JSON config files are the same 00:10:04.515 + rm /tmp/sh-np.bU25SW.BoK /tmp/spdk_tgt_config.json.zYz 00:10:04.515 + exit 0 00:10:04.515 02:10:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:04.515 INFO: changing configuration and checking if this can be detected... 00:10:04.515 02:10:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:04.515 02:10:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:04.515 02:10:52 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:04.773 02:10:52 json_config -- json_config/json_config.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.s1aiYh /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:04.773 + '[' 2 -ne 2 ']' 00:10:04.773 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:04.773 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
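What json_diff.sh is doing in the trace above and below: dump the target's live configuration with save_config, normalize both that dump and the reference file by piping them through config_filter.py -method sort, and compare with diff -u; an empty diff means the running target still matches spdk_tgt_config.json, a non-empty one is reported as a detected configuration change. A condensed sketch of that comparison, assuming config_filter.py filters stdin to stdout as its pipeline-style invocation here suggests (temp-file suffix handling simplified):

# Sketch: compare the target's live config against the reference JSON, as json_diff.sh does.
SPDK=/usr/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock
REF=$SPDK/spdk_tgt_config.json
live=$(mktemp); ref_sorted=$(mktemp)
"$SPDK/scripts/rpc.py" -s "$SOCK" save_config \
    | "$SPDK/test/json_config/config_filter.py" -method sort > "$live"
"$SPDK/test/json_config/config_filter.py" -method sort < "$REF" > "$ref_sorted"
if diff -u "$ref_sorted" "$live"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$ref_sorted"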
00:10:04.773 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:10:04.773 +++ basename /tmp//sh-np.s1aiYh 00:10:04.773 ++ mktemp /tmp/sh-np.s1aiYh.XXX 00:10:04.773 + tmp_file_1=/tmp/sh-np.s1aiYh.l0n 00:10:04.773 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:04.773 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:04.773 + tmp_file_2=/tmp/spdk_tgt_config.json.Y7h 00:10:04.773 + ret=0 00:10:04.773 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:04.773 02:10:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:04.773 02:10:52 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:05.339 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:05.339 + diff -u /tmp/sh-np.s1aiYh.l0n /tmp/spdk_tgt_config.json.Y7h 00:10:05.339 + ret=1 00:10:05.339 + echo '=== Start of file: /tmp/sh-np.s1aiYh.l0n ===' 00:10:05.339 + cat /tmp/sh-np.s1aiYh.l0n 00:10:05.339 + echo '=== End of file: /tmp/sh-np.s1aiYh.l0n ===' 00:10:05.339 + echo '' 00:10:05.339 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Y7h ===' 00:10:05.339 + cat /tmp/spdk_tgt_config.json.Y7h 00:10:05.339 + echo '=== End of file: /tmp/spdk_tgt_config.json.Y7h ===' 00:10:05.339 + echo '' 00:10:05.339 + rm /tmp/sh-np.s1aiYh.l0n /tmp/spdk_tgt_config.json.Y7h 00:10:05.339 + exit 1 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:10:05.339 INFO: configuration change detected. 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:05.339 02:10:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:05.339 02:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@317 -- # [[ -n 46421 ]] 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:05.339 02:10:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:05.339 02:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:05.339 02:10:53 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:05.339 02:10:53 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:05.597 02:10:53 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:05.597 02:10:53 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:05.855 02:10:53 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:05.855 02:10:53 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_lvol_delete lvs_test/snapshot0 00:10:06.113 02:10:53 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:06.113 02:10:53 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:06.372 02:10:54 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:06.372 02:10:54 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:10:06.372 02:10:54 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:06.372 02:10:54 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:06.372 02:10:54 json_config -- json_config/json_config.sh@323 -- # killprocess 46421 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@946 -- # '[' -z 46421 ']' 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@950 -- # kill -0 46421 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@951 -- # uname 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@954 -- # ps -c -o command 46421 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@954 -- # tail -1 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:10:06.372 killing process with pid 46421 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46421' 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@965 -- # kill 46421 00:10:06.372 02:10:54 json_config -- common/autotest_common.sh@970 -- # wait 46421 00:10:06.631 02:10:54 json_config -- json_config/json_config.sh@326 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:06.631 02:10:54 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:06.631 02:10:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.631 02:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:06.631 02:10:54 json_config -- json_config/json_config.sh@328 -- # return 0 00:10:06.631 INFO: Success 00:10:06.631 02:10:54 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:06.631 00:10:06.631 real 0m12.327s 00:10:06.631 user 0m19.625s 00:10:06.631 sys 0m2.060s 00:10:06.631 02:10:54 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:06.631 ************************************ 00:10:06.631 END TEST json_config 00:10:06.631 ************************************ 00:10:06.631 02:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:06.631 02:10:54 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:06.631 02:10:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:06.631 02:10:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:06.631 02:10:54 -- common/autotest_common.sh@10 -- # set +x 00:10:06.631 ************************************ 00:10:06.631 START TEST json_config_extra_key 00:10:06.631 
************************************ 00:10:06.631 02:10:54 json_config_extra_key -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.890 02:10:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:06.890 02:10:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:10:06.890 02:10:54 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:06.890 INFO: launching applications... 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:06.890 02:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46554 00:10:06.890 Waiting for target to run... 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
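common.sh keeps per-application state in bash associative arrays keyed by the app name ('target' in this run), which is what lets the same start and shutdown helpers serve both the plain json_config suite and this extra_key variant. A stripped-down sketch of that bookkeeping using the values declared above; only the 'target' entry is shown and error handling is omitted.

# Sketch: per-app bookkeeping as associative arrays, mirroring the declares above.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

app=target
/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
    -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!
echo "Waiting for $app to run..."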
00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46554 /var/tmp/spdk_tgt.sock 00:10:06.890 02:10:54 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 46554 ']' 00:10:06.890 02:10:54 json_config_extra_key -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:06.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:06.890 02:10:54 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:06.890 02:10:54 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:06.890 02:10:54 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:06.890 02:10:54 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:06.890 02:10:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:06.890 [2024-05-15 02:10:54.697430] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:06.890 [2024-05-15 02:10:54.697666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:07.148 EAL: TSC is not safe to use in SMP mode 00:10:07.148 EAL: TSC is not invariant 00:10:07.148 [2024-05-15 02:10:54.927226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.148 [2024-05-15 02:10:55.006505] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:07.148 [2024-05-15 02:10:55.008627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.120 02:10:55 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:08.120 02:10:55 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:10:08.120 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:08.120 INFO: shutting down applications... 00:10:08.120 02:10:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
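The shutdown path that follows sends SIGINT to the recorded pid and then polls with kill -0 for up to 30 half-second intervals until the target is gone. A minimal sketch of that loop, with the iteration bound and sleep taken from the common.sh trace below and the pid hard-coded to this run's value:

# Sketch: graceful shutdown poll, mirroring json_config_test_shutdown_app below.
pid=46554          # pid recorded when the target was launched (this run's value)
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    # kill -0 only checks existence; it fails once the target has exited
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done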
00:10:08.120 02:10:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46554 ]] 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46554 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46554 00:10:08.120 02:10:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:08.379 02:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:08.379 02:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:08.379 02:10:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46554 00:10:08.379 02:10:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:08.379 02:10:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:08.379 02:10:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:08.379 SPDK target shutdown done 00:10:08.379 02:10:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:08.379 Success 00:10:08.379 02:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:08.379 00:10:08.379 real 0m1.804s 00:10:08.379 user 0m1.622s 00:10:08.379 sys 0m0.487s 00:10:08.379 02:10:56 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:08.379 02:10:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:08.379 ************************************ 00:10:08.379 END TEST json_config_extra_key 00:10:08.379 ************************************ 00:10:08.379 02:10:56 -- spdk/autotest.sh@170 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:08.379 02:10:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:08.379 02:10:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.379 02:10:56 -- common/autotest_common.sh@10 -- # set +x 00:10:08.379 ************************************ 00:10:08.379 START TEST alias_rpc 00:10:08.379 ************************************ 00:10:08.379 02:10:56 alias_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:08.638 * Looking for test storage... 
00:10:08.638 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:08.638 02:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:08.638 02:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46608 00:10:08.638 02:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46608 00:10:08.638 02:10:56 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 46608 ']' 00:10:08.638 02:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:08.638 02:10:56 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.638 02:10:56 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:08.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.638 02:10:56 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.638 02:10:56 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:08.638 02:10:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.638 [2024-05-15 02:10:56.552199] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:08.638 [2024-05-15 02:10:56.552473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:09.203 EAL: TSC is not safe to use in SMP mode 00:10:09.203 EAL: TSC is not invariant 00:10:09.203 [2024-05-15 02:10:57.050015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.203 [2024-05-15 02:10:57.128057] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
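alias_rpc.sh arms an ERR trap so any failing command tears the target down before the test exits, and killprocess itself is FreeBSD-aware: it uses ps -c -o command plus tail -1 to confirm the pid still names spdk_tgt (and is not a sudo wrapper) before signalling it, the same check that appears at the end of every suite in this log. A rough sketch of the pattern; only the FreeBSD branch is shown and the sudo handling is simplified.

# Sketch: ERR-trap cleanup plus the FreeBSD process-name check used by killprocess.
killprocess() {
    local pid=$1
    local name
    # ps -c prints just the executable name on FreeBSD; tail -1 skips the header line
    name=$(ps -c -o command "$pid" | tail -1)
    if [ "$name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
}

/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid; exit 1' ERR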
00:10:09.203 [2024-05-15 02:10:57.130072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.770 02:10:57 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:09.770 02:10:57 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:09.770 02:10:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:10.027 02:10:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46608 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 46608 ']' 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 46608 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 46608 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@954 -- # tail -1 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:10:10.027 killing process with pid 46608 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46608' 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@965 -- # kill 46608 00:10:10.027 02:10:57 alias_rpc -- common/autotest_common.sh@970 -- # wait 46608 00:10:10.285 00:10:10.285 real 0m1.715s 00:10:10.285 user 0m1.823s 00:10:10.285 sys 0m0.761s 00:10:10.285 ************************************ 00:10:10.285 END TEST alias_rpc 00:10:10.285 ************************************ 00:10:10.285 02:10:58 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:10.285 02:10:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.285 02:10:58 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:10:10.285 02:10:58 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:10.285 02:10:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:10.285 02:10:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:10.285 02:10:58 -- common/autotest_common.sh@10 -- # set +x 00:10:10.285 ************************************ 00:10:10.285 START TEST spdkcli_tcp 00:10:10.285 ************************************ 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:10.285 * Looking for test storage... 
00:10:10.285 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46673 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46673 00:10:10.285 02:10:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 46673 ']' 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:10.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:10.285 02:10:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:10.592 [2024-05-15 02:10:58.287825] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:10.592 [2024-05-15 02:10:58.288047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:10.850 EAL: TSC is not safe to use in SMP mode 00:10:10.850 EAL: TSC is not invariant 00:10:10.850 [2024-05-15 02:10:58.792866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:11.109 [2024-05-15 02:10:58.888737] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:11.109 [2024-05-15 02:10:58.888814] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
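The point of this suite is exercising rpc.py over TCP instead of the UNIX socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is then pointed at the TCP endpoint with connection-retry and timeout options. A sketch of that bridge using the exact addresses and flags from the run below; here socat is torn down after a single call rather than left running for the whole session.

# Sketch: expose the target's UNIX RPC socket over TCP and query it, as tcp.sh does below.
SPDK=/usr/home/vagrant/spdk_repo/spdk
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# -r retries the connection up to 100 times, -t sets a 2 second timeout per call
"$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"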
00:10:11.109 [2024-05-15 02:10:58.892139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.109 [2024-05-15 02:10:58.892132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.367 02:10:59 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:11.367 02:10:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:10:11.367 02:10:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46681 00:10:11.367 02:10:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:11.367 02:10:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:11.626 [ 00:10:11.626 "spdk_get_version", 00:10:11.626 "rpc_get_methods", 00:10:11.626 "env_dpdk_get_mem_stats", 00:10:11.626 "trace_get_info", 00:10:11.626 "trace_get_tpoint_group_mask", 00:10:11.626 "trace_disable_tpoint_group", 00:10:11.626 "trace_enable_tpoint_group", 00:10:11.626 "trace_clear_tpoint_mask", 00:10:11.626 "trace_set_tpoint_mask", 00:10:11.626 "notify_get_notifications", 00:10:11.626 "notify_get_types", 00:10:11.626 "accel_get_stats", 00:10:11.626 "accel_set_options", 00:10:11.626 "accel_set_driver", 00:10:11.626 "accel_crypto_key_destroy", 00:10:11.626 "accel_crypto_keys_get", 00:10:11.626 "accel_crypto_key_create", 00:10:11.626 "accel_assign_opc", 00:10:11.626 "accel_get_module_info", 00:10:11.626 "accel_get_opc_assignments", 00:10:11.626 "bdev_get_histogram", 00:10:11.626 "bdev_enable_histogram", 00:10:11.626 "bdev_set_qos_limit", 00:10:11.626 "bdev_set_qd_sampling_period", 00:10:11.626 "bdev_get_bdevs", 00:10:11.626 "bdev_reset_iostat", 00:10:11.626 "bdev_get_iostat", 00:10:11.626 "bdev_examine", 00:10:11.626 "bdev_wait_for_examine", 00:10:11.626 "bdev_set_options", 00:10:11.626 "keyring_get_keys", 00:10:11.626 "framework_get_pci_devices", 00:10:11.626 "framework_get_config", 00:10:11.626 "framework_get_subsystems", 00:10:11.626 "sock_get_default_impl", 00:10:11.626 "sock_set_default_impl", 00:10:11.626 "sock_impl_set_options", 00:10:11.626 "sock_impl_get_options", 00:10:11.626 "thread_set_cpumask", 00:10:11.626 "framework_get_scheduler", 00:10:11.626 "framework_set_scheduler", 00:10:11.626 "framework_get_reactors", 00:10:11.626 "thread_get_io_channels", 00:10:11.626 "thread_get_pollers", 00:10:11.626 "thread_get_stats", 00:10:11.626 "framework_monitor_context_switch", 00:10:11.626 "spdk_kill_instance", 00:10:11.626 "log_enable_timestamps", 00:10:11.626 "log_get_flags", 00:10:11.626 "log_clear_flag", 00:10:11.626 "log_set_flag", 00:10:11.626 "log_get_level", 00:10:11.626 "log_set_level", 00:10:11.626 "log_get_print_level", 00:10:11.626 "log_set_print_level", 00:10:11.626 "framework_enable_cpumask_locks", 00:10:11.626 "framework_disable_cpumask_locks", 00:10:11.626 "framework_wait_init", 00:10:11.626 "framework_start_init", 00:10:11.626 "iobuf_get_stats", 00:10:11.626 "iobuf_set_options", 00:10:11.626 "vmd_rescan", 00:10:11.626 "vmd_remove_device", 00:10:11.626 "vmd_enable", 00:10:11.626 "nvmf_stop_mdns_prr", 00:10:11.626 "nvmf_publish_mdns_prr", 00:10:11.626 "nvmf_subsystem_get_listeners", 00:10:11.626 "nvmf_subsystem_get_qpairs", 00:10:11.626 "nvmf_subsystem_get_controllers", 00:10:11.626 "nvmf_get_stats", 00:10:11.626 "nvmf_get_transports", 00:10:11.626 "nvmf_create_transport", 00:10:11.626 "nvmf_get_targets", 00:10:11.626 "nvmf_delete_target", 00:10:11.626 "nvmf_create_target", 00:10:11.626 "nvmf_subsystem_allow_any_host", 00:10:11.626 
"nvmf_subsystem_remove_host", 00:10:11.626 "nvmf_subsystem_add_host", 00:10:11.626 "nvmf_ns_remove_host", 00:10:11.626 "nvmf_ns_add_host", 00:10:11.626 "nvmf_subsystem_remove_ns", 00:10:11.626 "nvmf_subsystem_add_ns", 00:10:11.626 "nvmf_subsystem_listener_set_ana_state", 00:10:11.626 "nvmf_discovery_get_referrals", 00:10:11.626 "nvmf_discovery_remove_referral", 00:10:11.626 "nvmf_discovery_add_referral", 00:10:11.626 "nvmf_subsystem_remove_listener", 00:10:11.626 "nvmf_subsystem_add_listener", 00:10:11.626 "nvmf_delete_subsystem", 00:10:11.626 "nvmf_create_subsystem", 00:10:11.626 "nvmf_get_subsystems", 00:10:11.626 "nvmf_set_crdt", 00:10:11.626 "nvmf_set_config", 00:10:11.626 "nvmf_set_max_subsystems", 00:10:11.626 "scsi_get_devices", 00:10:11.626 "iscsi_get_histogram", 00:10:11.626 "iscsi_enable_histogram", 00:10:11.626 "iscsi_set_options", 00:10:11.626 "iscsi_get_auth_groups", 00:10:11.626 "iscsi_auth_group_remove_secret", 00:10:11.626 "iscsi_auth_group_add_secret", 00:10:11.626 "iscsi_delete_auth_group", 00:10:11.626 "iscsi_create_auth_group", 00:10:11.626 "iscsi_set_discovery_auth", 00:10:11.626 "iscsi_get_options", 00:10:11.626 "iscsi_target_node_request_logout", 00:10:11.626 "iscsi_target_node_set_redirect", 00:10:11.626 "iscsi_target_node_set_auth", 00:10:11.626 "iscsi_target_node_add_lun", 00:10:11.626 "iscsi_get_stats", 00:10:11.626 "iscsi_get_connections", 00:10:11.626 "iscsi_portal_group_set_auth", 00:10:11.626 "iscsi_start_portal_group", 00:10:11.626 "iscsi_delete_portal_group", 00:10:11.626 "iscsi_create_portal_group", 00:10:11.626 "iscsi_get_portal_groups", 00:10:11.626 "iscsi_delete_target_node", 00:10:11.626 "iscsi_target_node_remove_pg_ig_maps", 00:10:11.626 "iscsi_target_node_add_pg_ig_maps", 00:10:11.626 "iscsi_create_target_node", 00:10:11.626 "iscsi_get_target_nodes", 00:10:11.626 "iscsi_delete_initiator_group", 00:10:11.626 "iscsi_initiator_group_remove_initiators", 00:10:11.626 "iscsi_initiator_group_add_initiators", 00:10:11.626 "iscsi_create_initiator_group", 00:10:11.626 "iscsi_get_initiator_groups", 00:10:11.626 "keyring_file_remove_key", 00:10:11.626 "keyring_file_add_key", 00:10:11.626 "iaa_scan_accel_module", 00:10:11.626 "dsa_scan_accel_module", 00:10:11.626 "ioat_scan_accel_module", 00:10:11.626 "accel_error_inject_error", 00:10:11.626 "bdev_aio_delete", 00:10:11.626 "bdev_aio_rescan", 00:10:11.626 "bdev_aio_create", 00:10:11.626 "blobfs_create", 00:10:11.626 "blobfs_detect", 00:10:11.626 "blobfs_set_cache_size", 00:10:11.626 "bdev_zone_block_delete", 00:10:11.626 "bdev_zone_block_create", 00:10:11.626 "bdev_delay_delete", 00:10:11.626 "bdev_delay_create", 00:10:11.626 "bdev_delay_update_latency", 00:10:11.626 "bdev_split_delete", 00:10:11.626 "bdev_split_create", 00:10:11.626 "bdev_error_inject_error", 00:10:11.626 "bdev_error_delete", 00:10:11.626 "bdev_error_create", 00:10:11.626 "bdev_raid_set_options", 00:10:11.626 "bdev_raid_remove_base_bdev", 00:10:11.626 "bdev_raid_add_base_bdev", 00:10:11.626 "bdev_raid_delete", 00:10:11.626 "bdev_raid_create", 00:10:11.626 "bdev_raid_get_bdevs", 00:10:11.626 "bdev_lvol_check_shallow_copy", 00:10:11.626 "bdev_lvol_start_shallow_copy", 00:10:11.626 "bdev_lvol_grow_lvstore", 00:10:11.626 "bdev_lvol_get_lvols", 00:10:11.626 "bdev_lvol_get_lvstores", 00:10:11.626 "bdev_lvol_delete", 00:10:11.626 "bdev_lvol_set_read_only", 00:10:11.626 "bdev_lvol_resize", 00:10:11.626 "bdev_lvol_decouple_parent", 00:10:11.626 "bdev_lvol_inflate", 00:10:11.626 "bdev_lvol_rename", 00:10:11.626 "bdev_lvol_clone_bdev", 00:10:11.626 
"bdev_lvol_clone", 00:10:11.626 "bdev_lvol_snapshot", 00:10:11.626 "bdev_lvol_create", 00:10:11.626 "bdev_lvol_delete_lvstore", 00:10:11.626 "bdev_lvol_rename_lvstore", 00:10:11.626 "bdev_lvol_create_lvstore", 00:10:11.626 "bdev_passthru_delete", 00:10:11.626 "bdev_passthru_create", 00:10:11.626 "bdev_nvme_send_cmd", 00:10:11.626 "bdev_nvme_get_path_iostat", 00:10:11.626 "bdev_nvme_get_mdns_discovery_info", 00:10:11.626 "bdev_nvme_stop_mdns_discovery", 00:10:11.626 "bdev_nvme_start_mdns_discovery", 00:10:11.626 "bdev_nvme_set_multipath_policy", 00:10:11.626 "bdev_nvme_set_preferred_path", 00:10:11.626 "bdev_nvme_get_io_paths", 00:10:11.626 "bdev_nvme_remove_error_injection", 00:10:11.626 "bdev_nvme_add_error_injection", 00:10:11.626 "bdev_nvme_get_discovery_info", 00:10:11.626 "bdev_nvme_stop_discovery", 00:10:11.626 "bdev_nvme_start_discovery", 00:10:11.626 "bdev_nvme_get_controller_health_info", 00:10:11.626 "bdev_nvme_disable_controller", 00:10:11.626 "bdev_nvme_enable_controller", 00:10:11.626 "bdev_nvme_reset_controller", 00:10:11.626 "bdev_nvme_get_transport_statistics", 00:10:11.626 "bdev_nvme_apply_firmware", 00:10:11.626 "bdev_nvme_detach_controller", 00:10:11.626 "bdev_nvme_get_controllers", 00:10:11.626 "bdev_nvme_attach_controller", 00:10:11.626 "bdev_nvme_set_hotplug", 00:10:11.626 "bdev_nvme_set_options", 00:10:11.626 "bdev_null_resize", 00:10:11.626 "bdev_null_delete", 00:10:11.626 "bdev_null_create", 00:10:11.626 "bdev_malloc_delete", 00:10:11.626 "bdev_malloc_create" 00:10:11.626 ] 00:10:11.626 02:10:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.626 02:10:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:11.626 02:10:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46673 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 46673 ']' 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 46673 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps -c -o command 46673 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # tail -1 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:10:11.626 killing process with pid 46673 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46673' 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 46673 00:10:11.626 02:10:59 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 46673 00:10:11.885 00:10:11.885 real 0m1.661s 00:10:11.885 user 0m2.580s 00:10:11.885 sys 0m0.722s 00:10:11.885 02:10:59 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:11.885 02:10:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.885 ************************************ 00:10:11.885 END TEST spdkcli_tcp 00:10:11.885 ************************************ 00:10:11.885 02:10:59 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:11.885 02:10:59 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:11.885 02:10:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:11.885 02:10:59 -- common/autotest_common.sh@10 -- # set +x 00:10:11.885 ************************************ 00:10:11.885 START TEST dpdk_mem_utility 00:10:11.885 ************************************ 00:10:11.885 02:10:59 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:12.142 * Looking for test storage... 00:10:12.142 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:12.142 02:10:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:12.142 02:10:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46752 00:10:12.142 02:10:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46752 00:10:12.142 02:10:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:12.142 02:10:59 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 46752 ']' 00:10:12.142 02:10:59 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.142 02:10:59 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:12.142 02:10:59 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.142 02:10:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:12.142 02:10:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:12.142 [2024-05-15 02:10:59.968919] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:12.142 [2024-05-15 02:10:59.969114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:12.706 EAL: TSC is not safe to use in SMP mode 00:10:12.706 EAL: TSC is not invariant 00:10:12.706 [2024-05-15 02:11:00.416952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.706 [2024-05-15 02:11:00.500600] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
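The utility below works in two steps: the env_dpdk_get_mem_stats RPC asks the running target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then renders that dump, first as an overall heap/mempool/memzone summary and then element by element for one heap with -m 0. A sketch of the same sequence, assuming the target is reachable on the default /var/tmp/spdk.sock:

# Sketch: dump DPDK memory stats from the running target, then render them.
SPDK=/usr/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt
"$SPDK/scripts/dpdk_mem_info.py"                   # heap / mempool / memzone totals
"$SPDK/scripts/dpdk_mem_info.py" -m 0              # element-level view of heap id 0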
00:10:12.706 [2024-05-15 02:11:00.502807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.964 02:11:00 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:12.964 02:11:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:10:12.964 02:11:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:12.964 02:11:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:12.964 02:11:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.964 02:11:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:12.964 { 00:10:12.964 "filename": "/tmp/spdk_mem_dump.txt" 00:10:12.964 } 00:10:12.964 02:11:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.964 02:11:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:13.222 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:10:13.222 1 heaps totaling size 2048.000000 MiB 00:10:13.222 size: 2048.000000 MiB heap id: 0 00:10:13.222 end heaps---------- 00:10:13.222 8 mempools totaling size 592.563660 MiB 00:10:13.222 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:10:13.222 size: 153.489014 MiB name: PDU_data_out_Pool 00:10:13.222 size: 84.500549 MiB name: bdev_io_46752 00:10:13.222 size: 51.008362 MiB name: evtpool_46752 00:10:13.222 size: 50.000549 MiB name: msgpool_46752 00:10:13.222 size: 21.758911 MiB name: PDU_Pool 00:10:13.222 size: 19.508911 MiB name: SCSI_TASK_Pool 00:10:13.222 size: 0.026123 MiB name: Session_Pool 00:10:13.222 end mempools------- 00:10:13.222 6 memzones totaling size 4.142822 MiB 00:10:13.222 size: 1.000366 MiB name: RG_ring_0_46752 00:10:13.222 size: 1.000366 MiB name: RG_ring_1_46752 00:10:13.222 size: 1.000366 MiB name: RG_ring_4_46752 00:10:13.222 size: 1.000366 MiB name: RG_ring_5_46752 00:10:13.222 size: 0.125366 MiB name: RG_ring_2_46752 00:10:13.222 size: 0.015991 MiB name: RG_ring_3_46752 00:10:13.222 end memzones------- 00:10:13.222 02:11:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:13.222 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:10:13.222 list of free elements. size: 1254.071899 MiB 00:10:13.222 element at address: 0x1060000000 with size: 1254.001099 MiB 00:10:13.222 element at address: 0x10c8000000 with size: 0.070129 MiB 00:10:13.222 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:10:13.222 list of standard malloc elements. 
size: 197.217957 MiB 00:10:13.222 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:10:13.222 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:10:13.222 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:10:13.222 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:10:13.222 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:10:13.222 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:10:13.222 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:10:13.222 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:10:13.222 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:10:13.222 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:10:13.222 list of memzone associated elements. 
size: 596.710144 MiB 00:10:13.222 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:10:13.222 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:10:13.222 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:10:13.222 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:10:13.222 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:10:13.222 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46752_0 00:10:13.222 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:10:13.222 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46752_0 00:10:13.223 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:10:13.223 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46752_0 00:10:13.223 element at address: 0x10c683d780 with size: 20.250671 MiB 00:10:13.223 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:10:13.223 element at address: 0x10ae700680 with size: 18.000671 MiB 00:10:13.223 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:10:13.223 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:10:13.223 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46752 00:10:13.223 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:10:13.223 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46752 00:10:13.223 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:10:13.223 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46752 00:10:13.223 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:10:13.223 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:13.223 element at address: 0x10c673b640 with size: 1.008118 MiB 00:10:13.223 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:13.223 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:10:13.223 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:13.223 element at address: 0x10af980b40 with size: 1.008118 MiB 00:10:13.223 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:13.223 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:10:13.223 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46752 00:10:13.223 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:10:13.223 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46752 00:10:13.223 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:10:13.223 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46752 00:10:13.223 element at address: 0x10ae600480 with size: 1.000488 MiB 00:10:13.223 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46752 00:10:13.223 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:10:13.223 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46752 00:10:13.223 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:10:13.223 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:13.223 element at address: 0x10af900940 with size: 0.500488 MiB 00:10:13.223 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:13.223 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:10:13.223 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:13.223 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:10:13.223 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46752 00:10:13.223 
element at address: 0x10c8018a80 with size: 0.031738 MiB 00:10:13.223 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:13.223 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:10:13.223 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:13.223 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:10:13.223 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46752 00:10:13.223 element at address: 0x10c8018080 with size: 0.002441 MiB 00:10:13.223 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:13.223 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:10:13.223 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46752 00:10:13.223 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:10:13.223 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46752 00:10:13.223 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:10:13.223 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:13.223 02:11:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:13.223 02:11:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46752 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 46752 ']' 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 46752 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps -c -o command 46752 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # tail -1 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:10:13.223 killing process with pid 46752 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46752' 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 46752 00:10:13.223 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 46752 00:10:13.481 00:10:13.481 real 0m1.500s 00:10:13.481 user 0m1.474s 00:10:13.481 sys 0m0.688s 00:10:13.481 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.481 02:11:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:13.481 ************************************ 00:10:13.481 END TEST dpdk_mem_utility 00:10:13.481 ************************************ 00:10:13.481 02:11:01 -- spdk/autotest.sh@177 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:13.481 02:11:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:13.481 02:11:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.481 02:11:01 -- common/autotest_common.sh@10 -- # set +x 00:10:13.481 ************************************ 00:10:13.481 START TEST event 00:10:13.481 ************************************ 00:10:13.481 02:11:01 event -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:13.738 * Looking for test storage... 
00:10:13.738 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:10:13.738 02:11:01 event -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:13.738 02:11:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:13.738 02:11:01 event -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:13.738 02:11:01 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:13.739 02:11:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.739 02:11:01 event -- common/autotest_common.sh@10 -- # set +x 00:10:13.739 ************************************ 00:10:13.739 START TEST event_perf 00:10:13.739 ************************************ 00:10:13.739 02:11:01 event.event_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:13.739 Running I/O for 1 seconds...[2024-05-15 02:11:01.562410] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:13.739 [2024-05-15 02:11:01.562576] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:14.305 EAL: TSC is not safe to use in SMP mode 00:10:14.305 EAL: TSC is not invariant 00:10:14.305 [2024-05-15 02:11:02.069331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.305 [2024-05-15 02:11:02.169157] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:14.305 [2024-05-15 02:11:02.169255] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:10:14.305 [2024-05-15 02:11:02.169281] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:10:14.305 [2024-05-15 02:11:02.169306] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:10:14.306 [2024-05-15 02:11:02.174234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.306 [2024-05-15 02:11:02.174389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.306 [2024-05-15 02:11:02.174311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.306 [2024-05-15 02:11:02.174384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.239 Running I/O for 1 seconds... 00:10:15.239 lcore 0: 2119223 00:10:15.239 lcore 1: 2119221 00:10:15.239 lcore 2: 2119220 00:10:15.239 lcore 3: 2119222 00:10:15.497 done. 
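Each 'lcore N' line above is the number of events that reactor processed during the 1-second run, so throughput here is roughly 2.12 million events per second per core and essentially balanced across the four reactors. A quick, illustrative check of that reading from the printed counters:

# Sketch: back out per-core and aggregate event rates from the counters above.
duration=1                              # -t 1 second
counts="2119223 2119221 2119220 2119222"
total=0
for c in $counts; do
    total=$((total + c))
done
echo "per-core ~$((total / 4 / duration)) events/s, aggregate $((total / duration)) events/s"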
00:10:15.497 00:10:15.497 real 0m1.712s 00:10:15.497 user 0m4.155s 00:10:15.497 sys 0m0.553s 00:10:15.497 02:11:03 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:15.497 ************************************ 00:10:15.497 02:11:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:15.497 END TEST event_perf 00:10:15.497 ************************************ 00:10:15.497 02:11:03 event -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:15.497 02:11:03 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:15.497 02:11:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:15.497 02:11:03 event -- common/autotest_common.sh@10 -- # set +x 00:10:15.497 ************************************ 00:10:15.497 START TEST event_reactor 00:10:15.497 ************************************ 00:10:15.497 02:11:03 event.event_reactor -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:15.497 [2024-05-15 02:11:03.314025] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:15.497 [2024-05-15 02:11:03.314308] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:16.064 EAL: TSC is not safe to use in SMP mode 00:10:16.064 EAL: TSC is not invariant 00:10:16.064 [2024-05-15 02:11:03.767625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.064 [2024-05-15 02:11:03.849998] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:16.064 [2024-05-15 02:11:03.852127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.001 test_start 00:10:17.001 oneshot 00:10:17.001 tick 100 00:10:17.001 tick 100 00:10:17.001 tick 250 00:10:17.001 tick 100 00:10:17.001 tick 100 00:10:17.001 tick 100 00:10:17.001 tick 250 00:10:17.001 tick 500 00:10:17.001 tick 100 00:10:17.001 tick 100 00:10:17.001 tick 250 00:10:17.001 tick 100 00:10:17.001 tick 100 00:10:17.001 test_end 00:10:17.001 00:10:17.001 real 0m1.639s 00:10:17.001 user 0m1.158s 00:10:17.001 sys 0m0.482s 00:10:17.001 02:11:04 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:17.001 02:11:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:17.001 ************************************ 00:10:17.001 END TEST event_reactor 00:10:17.001 ************************************ 00:10:17.001 02:11:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:17.001 02:11:04 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:17.001 02:11:04 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.001 02:11:04 event -- common/autotest_common.sh@10 -- # set +x 00:10:17.001 ************************************ 00:10:17.001 START TEST event_reactor_perf 00:10:17.001 ************************************ 00:10:17.001 02:11:04 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:17.001 [2024-05-15 02:11:04.988731] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:10:17.001 [2024-05-15 02:11:04.988899] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:17.590 EAL: TSC is not safe to use in SMP mode 00:10:17.590 EAL: TSC is not invariant 00:10:17.590 [2024-05-15 02:11:05.466930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.590 [2024-05-15 02:11:05.546989] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:17.590 [2024-05-15 02:11:05.549093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.963 test_start 00:10:18.963 test_end 00:10:18.963 Performance: 3894979 events per second 00:10:18.963 00:10:18.963 real 0m1.658s 00:10:18.963 user 0m1.134s 00:10:18.963 sys 0m0.522s 00:10:18.963 02:11:06 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:18.963 02:11:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 ************************************ 00:10:18.963 END TEST event_reactor_perf 00:10:18.963 ************************************ 00:10:18.963 02:11:06 event -- event/event.sh@49 -- # uname -s 00:10:18.963 02:11:06 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:10:18.963 00:10:18.963 real 0m5.315s 00:10:18.963 user 0m6.624s 00:10:18.963 sys 0m1.749s 00:10:18.963 02:11:06 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:18.963 02:11:06 event -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 ************************************ 00:10:18.963 END TEST event 00:10:18.963 ************************************ 00:10:18.963 02:11:06 -- spdk/autotest.sh@178 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:18.963 02:11:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:18.963 02:11:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:18.963 02:11:06 -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 ************************************ 00:10:18.963 START TEST thread 00:10:18.963 ************************************ 00:10:18.963 02:11:06 thread -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:18.963 * Looking for test storage... 00:10:18.963 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:10:18.963 02:11:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:18.963 02:11:06 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:10:18.963 02:11:06 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:18.963 02:11:06 thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 ************************************ 00:10:18.963 START TEST thread_poller_perf 00:10:18.964 ************************************ 00:10:18.964 02:11:06 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:18.964 [2024-05-15 02:11:06.913423] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:10:18.964 [2024-05-15 02:11:06.913696] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:19.529 EAL: TSC is not safe to use in SMP mode 00:10:19.529 EAL: TSC is not invariant 00:10:19.529 [2024-05-15 02:11:07.440451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.792 [2024-05-15 02:11:07.538097] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:19.792 [2024-05-15 02:11:07.541402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.792 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:20.729 ====================================== 00:10:20.729 busy:2102749330 (cyc) 00:10:20.729 total_run_count: 6062000 00:10:20.729 tsc_hz: 2100005139 (cyc) 00:10:20.729 ====================================== 00:10:20.729 poller_cost: 346 (cyc), 164 (nsec) 00:10:20.729 00:10:20.729 real 0m1.726s 00:10:20.729 user 0m1.148s 00:10:20.729 sys 0m0.576s 00:10:20.729 02:11:08 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:20.729 02:11:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:20.729 ************************************ 00:10:20.729 END TEST thread_poller_perf 00:10:20.729 ************************************ 00:10:20.729 02:11:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:20.729 02:11:08 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:10:20.729 02:11:08 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:20.729 02:11:08 thread -- common/autotest_common.sh@10 -- # set +x 00:10:20.729 ************************************ 00:10:20.729 START TEST thread_poller_perf 00:10:20.729 ************************************ 00:10:20.729 02:11:08 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:20.729 [2024-05-15 02:11:08.680449] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:20.729 [2024-05-15 02:11:08.680752] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:21.297 EAL: TSC is not safe to use in SMP mode 00:10:21.297 EAL: TSC is not invariant 00:10:21.297 [2024-05-15 02:11:09.149627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.297 [2024-05-15 02:11:09.233143] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:21.297 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:21.297 [2024-05-15 02:11:09.235251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.673 ====================================== 00:10:22.673 busy:2100933708 (cyc) 00:10:22.673 total_run_count: 82453000 00:10:22.673 tsc_hz: 2100005139 (cyc) 00:10:22.673 ====================================== 00:10:22.673 poller_cost: 25 (cyc), 11 (nsec) 00:10:22.673 00:10:22.673 real 0m1.656s 00:10:22.673 user 0m1.139s 00:10:22.673 sys 0m0.515s 00:10:22.673 02:11:10 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:22.673 02:11:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:22.673 ************************************ 00:10:22.673 END TEST thread_poller_perf 00:10:22.673 ************************************ 00:10:22.673 02:11:10 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:22.673 02:11:10 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:22.673 02:11:10 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:22.673 02:11:10 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:22.673 02:11:10 thread -- common/autotest_common.sh@10 -- # set +x 00:10:22.673 ************************************ 00:10:22.673 START TEST thread_spdk_lock 00:10:22.673 ************************************ 00:10:22.673 02:11:10 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:22.673 [2024-05-15 02:11:10.376432] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:22.673 [2024-05-15 02:11:10.376677] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:22.931 EAL: TSC is not safe to use in SMP mode 00:10:22.931 EAL: TSC is not invariant 00:10:22.931 [2024-05-15 02:11:10.846289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:23.190 [2024-05-15 02:11:10.941935] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:23.190 [2024-05-15 02:11:10.942012] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
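The poller_cost figures printed by poller_perf above follow directly from its counters: busy cycles divided by total_run_count gives cycles per poll, and scaling by tsc_hz converts that to nanoseconds. A minimal sketch of the arithmetic in shell, using the counters from the two runs above (the integer truncation is an assumption, but it reproduces the reported 346 cyc / 164 nsec and 25 cyc / 11 nsec):

# 1 us-period run: 2102749330 busy cycles over 6062000 polls at 2100005139 Hz
awk 'BEGIN { c = int(2102749330/6062000); printf "poller_cost: %d (cyc), %d (nsec)\n", c, int(c*1e9/2100005139) }'
# 0 us-period run: 2100933708 busy cycles over 82453000 polls
awk 'BEGIN { c = int(2100933708/82453000); printf "poller_cost: %d (cyc), %d (nsec)\n", c, int(c*1e9/2100005139) }'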
00:10:23.190 [2024-05-15 02:11:10.945311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.190 [2024-05-15 02:11:10.945300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.509 [2024-05-15 02:11:11.388701] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.509 [2024-05-15 02:11:11.388775] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:23.509 [2024-05-15 02:11:11.388785] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x311b60 00:10:23.509 [2024-05-15 02:11:11.389157] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.509 [2024-05-15 02:11:11.389257] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.509 [2024-05-15 02:11:11.389271] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:23.784 Starting test contend 00:10:23.784 Worker Delay Wait us Hold us Total us 00:10:23.784 0 3 256576 165725 422302 00:10:23.784 1 5 167611 264504 432115 00:10:23.784 PASS test contend 00:10:23.784 Starting test hold_by_poller 00:10:23.784 PASS test hold_by_poller 00:10:23.784 Starting test hold_by_message 00:10:23.784 PASS test hold_by_message 00:10:23.784 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:23.784 100014 assertions passed 00:10:23.784 0 assertions failed 00:10:23.784 00:10:23.784 real 0m1.116s 00:10:23.784 user 0m1.043s 00:10:23.784 sys 0m0.514s 00:10:23.784 02:11:11 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.784 02:11:11 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:10:23.784 ************************************ 00:10:23.784 END TEST thread_spdk_lock 00:10:23.784 ************************************ 00:10:23.784 00:10:23.784 real 0m4.792s 00:10:23.784 user 0m3.450s 00:10:23.784 sys 0m1.845s 00:10:23.784 02:11:11 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.784 02:11:11 thread -- common/autotest_common.sh@10 -- # set +x 00:10:23.784 ************************************ 00:10:23.784 END TEST thread 00:10:23.784 ************************************ 00:10:23.784 02:11:11 -- spdk/autotest.sh@179 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:23.784 02:11:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:23.784 02:11:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.784 02:11:11 -- common/autotest_common.sh@10 -- # set +x 00:10:23.784 ************************************ 00:10:23.784 START TEST accel 00:10:23.784 ************************************ 00:10:23.784 02:11:11 accel -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:23.784 * Looking for test storage... 
00:10:23.784 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:10:23.784 02:11:11 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:10:23.784 02:11:11 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:10:23.784 02:11:11 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:23.784 02:11:11 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=47052 00:10:23.784 02:11:11 accel -- accel/accel.sh@63 -- # waitforlisten 47052 00:10:23.784 02:11:11 accel -- common/autotest_common.sh@827 -- # '[' -z 47052 ']' 00:10:23.784 02:11:11 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.784 02:11:11 accel -- accel/accel.sh@61 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.pt77Ot 00:10:23.784 02:11:11 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:23.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.784 02:11:11 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.784 02:11:11 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:23.784 02:11:11 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.784 [2024-05-15 02:11:11.699613] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:23.784 [2024-05-15 02:11:11.699790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:24.350 EAL: TSC is not safe to use in SMP mode 00:10:24.350 EAL: TSC is not invariant 00:10:24.350 [2024-05-15 02:11:12.185620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.350 [2024-05-15 02:11:12.292316] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:24.350 02:11:12 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:24.350 02:11:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:24.350 02:11:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:24.350 02:11:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.350 02:11:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.350 02:11:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:24.350 02:11:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:24.350 02:11:12 accel -- accel/accel.sh@41 -- # jq -r . 00:10:24.350 [2024-05-15 02:11:12.302662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@860 -- # return 0 00:10:24.916 02:11:12 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:24.916 02:11:12 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:24.916 02:11:12 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:24.916 02:11:12 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:10:24.916 02:11:12 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:24.916 02:11:12 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:24.916 02:11:12 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@10 -- # set +x 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 
00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.916 02:11:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.916 02:11:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.916 02:11:12 accel -- accel/accel.sh@75 -- # killprocess 47052 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@946 -- # '[' -z 47052 ']' 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@950 -- # kill -0 47052 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@951 -- # uname 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@954 -- # ps -c -o command 47052 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@954 -- # tail -1 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:10:24.916 killing process with pid 47052 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47052' 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@965 -- # kill 47052 00:10:24.916 02:11:12 accel -- common/autotest_common.sh@970 -- # wait 47052 00:10:25.174 02:11:13 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:25.174 02:11:13 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:10:25.174 02:11:13 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:25.174 02:11:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.174 02:11:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.174 02:11:13 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:10:25.174 02:11:13 accel.accel_help -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Qk0WIb -h 00:10:25.174 02:11:13 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:25.174 02:11:13 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:10:25.174 02:11:13 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:25.174 02:11:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:25.174 02:11:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.174 02:11:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.174 ************************************ 00:10:25.174 START TEST accel_missing_filename 00:10:25.174 ************************************ 00:10:25.174 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:10:25.174 
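The accel_get_opc_assignments | jq pipeline above flattens the RPC's JSON map of opcode-to-module assignments into key=value lines, which the IFS== read -r opc module step then splits to record that every opcode here is backed by the software module. The same flattening on a hypothetical two-opcode payload (the payload shape is assumed for illustration; the jq filter is the one used above):

echo '{"copy": "software", "crc32c": "software"}' | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# copy=software
# crc32c=software
# each emitted line is then split on '=' by the IFS== read -r opc module step in accel.sh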
02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:10:25.174 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:25.174 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:25.174 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:25.174 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:25.174 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:25.174 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:10:25.174 02:11:13 accel.accel_missing_filename -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.vgmWlC -t 1 -w compress 00:10:25.174 [2024-05-15 02:11:13.117344] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:25.174 [2024-05-15 02:11:13.117624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:25.740 EAL: TSC is not safe to use in SMP mode 00:10:25.740 EAL: TSC is not invariant 00:10:25.740 [2024-05-15 02:11:13.586479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.740 [2024-05-15 02:11:13.684471] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:10:25.740 02:11:13 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:10:25.740 [2024-05-15 02:11:13.695918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.740 [2024-05-15 02:11:13.698976] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:25.740 [2024-05-15 02:11:13.729479] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:10:25.997 A filename is required. 
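accel_missing_filename exercises the failure path seen above: a compress workload is started without -l, so accel_perf aborts application start with "A filename is required." and the NOT wrapper treats the non-zero exit as the expected outcome. For comparison, a sketch of the failing call next to one that supplies the input file (paths are the ones used in this run; whether the software module on this build can actually service compress is not shown by the log):

# fails: compress/decompress needs an uncompressed input file via -l
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
# supplies the input file, as the accel_compress_verify test below does
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib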
00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:25.997 00:10:25.997 real 0m0.733s 00:10:25.997 user 0m0.218s 00:10:25.997 sys 0m0.515s 00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:25.997 02:11:13 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:10:25.997 ************************************ 00:10:25.997 END TEST accel_missing_filename 00:10:25.997 ************************************ 00:10:25.997 02:11:13 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.997 02:11:13 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:10:25.997 02:11:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.997 02:11:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.997 ************************************ 00:10:25.997 START TEST accel_compress_verify 00:10:25.997 ************************************ 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:25.997 02:11:13 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.997 02:11:13 accel.accel_compress_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.tkce4S -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:25.997 [2024-05-15 02:11:13.887604] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:25.997 [2024-05-15 02:11:13.887804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:26.567 EAL: TSC is not safe to use in SMP mode 00:10:26.567 EAL: TSC is not invariant 00:10:26.567 [2024-05-15 02:11:14.380381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.567 [2024-05-15 02:11:14.475004] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:26.567 02:11:14 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:10:26.567 [2024-05-15 02:11:14.483371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.567 [2024-05-15 02:11:14.485782] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:26.567 [2024-05-15 02:11:14.514740] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:10:26.825 00:10:26.825 Compression does not support the verify option, aborting. 00:10:26.825 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:10:26.825 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:26.825 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:10:26.825 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:10:26.826 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:10:26.826 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:26.826 00:10:26.826 real 0m0.731s 00:10:26.826 user 0m0.193s 00:10:26.826 sys 0m0.538s 00:10:26.826 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.826 ************************************ 00:10:26.826 END TEST accel_compress_verify 00:10:26.826 ************************************ 00:10:26.826 02:11:14 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:26.826 02:11:14 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@10 -- # set +x 00:10:26.826 ************************************ 00:10:26.826 START TEST accel_wrong_workload 00:10:26.826 ************************************ 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:10:26.826 
02:11:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.vBNwh0 -t 1 -w foobar 00:10:26.826 Unsupported workload type: foobar 00:10:26.826 [2024-05-15 02:11:14.656762] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:26.826 accel_perf options: 00:10:26.826 [-h help message] 00:10:26.826 [-q queue depth per core] 00:10:26.826 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:26.826 [-T number of threads per core 00:10:26.826 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:26.826 [-t time in seconds] 00:10:26.826 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:26.826 [ dif_verify, , dif_generate, dif_generate_copy 00:10:26.826 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:26.826 [-l for compress/decompress workloads, name of uncompressed input file 00:10:26.826 [-S for crc32c workload, use this seed value (default 0) 00:10:26.826 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:26.826 [-f for fill workload, use this BYTE value (default 255) 00:10:26.826 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:26.826 [-y verify result if this switch is on] 00:10:26.826 [-a tasks to allocate per core (default: same value as -q)] 00:10:26.826 Can be used to spread operations across a wider range of memory. 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:26.826 00:10:26.826 real 0m0.008s 00:10:26.826 user 0m0.006s 00:10:26.826 sys 0m0.003s 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.826 ************************************ 00:10:26.826 END TEST accel_wrong_workload 00:10:26.826 ************************************ 00:10:26.826 02:11:14 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:26.826 02:11:14 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@10 -- # set +x 00:10:26.826 ************************************ 00:10:26.826 START TEST accel_negative_buffers 00:10:26.826 ************************************ 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:10:26.826 02:11:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.hKxRal -t 1 -w xor -y -x -1 00:10:26.826 -x option must be non-negative. 00:10:26.826 [2024-05-15 02:11:14.706136] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:26.826 accel_perf options: 00:10:26.826 [-h help message] 00:10:26.826 [-q queue depth per core] 00:10:26.826 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:26.826 [-T number of threads per core 00:10:26.826 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:26.826 [-t time in seconds] 00:10:26.826 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:26.826 [ dif_verify, , dif_generate, dif_generate_copy 00:10:26.826 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:26.826 [-l for compress/decompress workloads, name of uncompressed input file 00:10:26.826 [-S for crc32c workload, use this seed value (default 0) 00:10:26.826 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:26.826 [-f for fill workload, use this BYTE value (default 255) 00:10:26.826 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:26.826 [-y verify result if this switch is on] 00:10:26.826 [-a tasks to allocate per core (default: same value as -q)] 00:10:26.826 Can be used to spread operations across a wider range of memory. 
00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:26.826 00:10:26.826 real 0m0.011s 00:10:26.826 user 0m0.000s 00:10:26.826 sys 0m0.012s 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.826 02:11:14 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:26.826 ************************************ 00:10:26.826 END TEST accel_negative_buffers 00:10:26.826 ************************************ 00:10:26.826 02:11:14 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.826 02:11:14 accel -- common/autotest_common.sh@10 -- # set +x 00:10:26.826 ************************************ 00:10:26.826 START TEST accel_crc32c 00:10:26.826 ************************************ 00:10:26.826 02:11:14 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:26.826 02:11:14 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:26.826 02:11:14 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:26.826 02:11:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:26.826 02:11:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:26.826 02:11:14 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:26.826 02:11:14 accel.accel_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.CC5slW -t 1 -w crc32c -S 32 -y 00:10:26.827 [2024-05-15 02:11:14.754222] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:26.827 [2024-05-15 02:11:14.754453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:27.393 EAL: TSC is not safe to use in SMP mode 00:10:27.393 EAL: TSC is not invariant 00:10:27.393 [2024-05-15 02:11:15.222825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.394 [2024-05-15 02:11:15.309917] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:10:27.394 [2024-05-15 02:11:15.318283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.394 02:11:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:28.822 02:11:16 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:28.822 02:11:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.822 00:10:28.822 real 0m1.707s 00:10:28.822 user 0m1.206s 00:10:28.822 sys 0m0.517s 00:10:28.822 02:11:16 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:28.822 02:11:16 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:28.822 ************************************ 00:10:28.822 END TEST accel_crc32c 00:10:28.822 ************************************ 00:10:28.822 02:11:16 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:28.822 02:11:16 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:28.822 02:11:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:28.822 02:11:16 accel -- common/autotest_common.sh@10 -- # set +x 00:10:28.822 ************************************ 00:10:28.822 START TEST accel_crc32c_C2 00:10:28.822 ************************************ 00:10:28.822 02:11:16 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:28.822 02:11:16 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.822 02:11:16 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:28.822 02:11:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:28.822 02:11:16 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:28.822 02:11:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:28.822 02:11:16 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.737Vi1 -t 1 -w crc32c -y -C 2 00:10:28.822 [2024-05-15 02:11:16.497503] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:28.822 [2024-05-15 02:11:16.497750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:29.082 EAL: TSC is not safe to use in SMP mode 00:10:29.082 EAL: TSC is not invariant 00:10:29.082 [2024-05-15 02:11:17.002558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.340 [2024-05-15 02:11:17.092389] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:29.340 [2024-05-15 02:11:17.102282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.340 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.341 
02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.341 02:11:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- 
# read -r var val 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.277 00:10:30.277 real 0m1.750s 00:10:30.277 user 0m1.207s 00:10:30.277 sys 0m0.553s 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:30.277 ************************************ 00:10:30.277 END TEST accel_crc32c_C2 00:10:30.277 02:11:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:30.277 ************************************ 00:10:30.277 02:11:18 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:30.277 02:11:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:30.277 02:11:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:30.277 02:11:18 accel -- common/autotest_common.sh@10 -- # set +x 00:10:30.277 ************************************ 00:10:30.277 START TEST accel_copy 00:10:30.277 ************************************ 00:10:30.277 02:11:18 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:10:30.277 02:11:18 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:30.277 02:11:18 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:30.277 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.277 02:11:18 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:30.277 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.277 02:11:18 accel.accel_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ERKGLs -t 1 -w copy -y 00:10:30.277 [2024-05-15 02:11:18.277251] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
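The two crc32c cases above pass different knobs through accel_test: -S 32 seeds the CRC computation and -C 2 raises the I/O vector size to two buffers, both per the option summary printed earlier in this run. Equivalent direct invocations would look roughly like the following (the generated -c JSON config is omitted, which assumes no extra accel configuration is needed):

# crc32c with seed value 32, verifying results
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# crc32c over a two-buffer io vector, default seed, verifying results
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2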
00:10:30.277 [2024-05-15 02:11:18.277407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:30.844 EAL: TSC is not safe to use in SMP mode 00:10:30.844 EAL: TSC is not invariant 00:10:30.844 [2024-05-15 02:11:18.759387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.844 [2024-05-15 02:11:18.843204] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:30.844 02:11:18 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:31.103 [2024-05-15 02:11:18.853870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.103 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:31.104 02:11:18 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:31.104 02:11:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 
-- # IFS=: 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:32.039 ************************************ 00:10:32.039 END TEST accel_copy 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:32.039 02:11:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.039 00:10:32.039 real 0m1.717s 00:10:32.039 user 0m1.199s 00:10:32.039 sys 0m0.520s 00:10:32.039 02:11:19 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:32.039 02:11:19 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:32.039 ************************************ 00:10:32.039 02:11:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.039 02:11:20 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:32.039 02:11:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:32.039 02:11:20 accel -- common/autotest_common.sh@10 -- # set +x 00:10:32.039 ************************************ 00:10:32.039 START TEST accel_fill 00:10:32.039 ************************************ 00:10:32.039 02:11:20 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.039 02:11:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:32.039 02:11:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:32.039 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.039 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.039 02:11:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.039 02:11:20 accel.accel_fill -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fN9KA9 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.039 [2024-05-15 02:11:20.031507] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:32.039 [2024-05-15 02:11:20.031733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:32.606 EAL: TSC is not safe to use in SMP mode 00:10:32.606 EAL: TSC is not invariant 00:10:32.606 [2024-05-15 02:11:20.517865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.606 [2024-05-15 02:11:20.603007] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:32.606 02:11:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:32.865 [2024-05-15 02:11:20.612322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:32.865 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:32.866 02:11:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.801 02:11:21 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:33.801 02:11:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:33.801 00:10:33.801 real 0m1.720s 00:10:33.801 user 0m1.203s 00:10:33.801 sys 0m0.527s 00:10:33.801 ************************************ 00:10:33.801 END TEST accel_fill 00:10:33.801 ************************************ 00:10:33.801 02:11:21 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:33.801 02:11:21 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:10:33.801 02:11:21 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:33.801 02:11:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:33.801 02:11:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:33.801 02:11:21 accel -- common/autotest_common.sh@10 -- # set +x 00:10:33.801 ************************************ 00:10:33.801 START TEST accel_copy_crc32c 00:10:33.801 ************************************ 00:10:33.801 02:11:21 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:10:33.801 02:11:21 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:33.801 02:11:21 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:33.801 02:11:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:33.801 02:11:21 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:33.801 02:11:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:33.801 02:11:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.MmKWJv -t 1 -w copy_crc32c -y 00:10:33.801 [2024-05-15 02:11:21.787238] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:33.801 [2024-05-15 02:11:21.787463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:34.367 EAL: TSC is not safe to use in SMP mode 00:10:34.367 EAL: TSC is not invariant 00:10:34.367 [2024-05-15 02:11:22.256252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.367 [2024-05-15 02:11:22.359671] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:34.367 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:34.641 [2024-05-15 02:11:22.369627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:34.641 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:34.642 02:11:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.574 02:11:23 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.574 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.575 00:10:35.575 real 0m1.723s 00:10:35.575 user 0m1.201s 00:10:35.575 sys 0m0.521s 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:35.575 02:11:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:35.575 ************************************ 00:10:35.575 END TEST accel_copy_crc32c 00:10:35.575 ************************************ 00:10:35.575 02:11:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:35.575 02:11:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:35.575 02:11:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:35.575 02:11:23 accel -- common/autotest_common.sh@10 -- # set +x 00:10:35.575 ************************************ 00:10:35.575 START TEST accel_copy_crc32c_C2 00:10:35.575 ************************************ 00:10:35.575 02:11:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:35.575 02:11:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.575 02:11:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:35.575 02:11:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:35.575 02:11:23 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:10:35.575 02:11:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:35.575 02:11:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zVTIXq -t 1 -w copy_crc32c -y -C 2 00:10:35.575 [2024-05-15 02:11:23.544666] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:35.575 [2024-05-15 02:11:23.544897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:36.140 EAL: TSC is not safe to use in SMP mode 00:10:36.140 EAL: TSC is not invariant 00:10:36.140 [2024-05-15 02:11:24.021220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.140 [2024-05-15 02:11:24.119004] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:36.140 [2024-05-15 02:11:24.127254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val=copy_crc32c 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:36.140 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.141 02:11:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.514 00:10:37.514 real 0m1.729s 00:10:37.514 user 0m1.220s 00:10:37.514 sys 0m0.519s 00:10:37.514 
02:11:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:37.514 02:11:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:37.514 ************************************ 00:10:37.514 END TEST accel_copy_crc32c_C2 00:10:37.514 ************************************ 00:10:37.514 02:11:25 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:37.515 02:11:25 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:37.515 02:11:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:37.515 02:11:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:37.515 ************************************ 00:10:37.515 START TEST accel_dualcast 00:10:37.515 ************************************ 00:10:37.515 02:11:25 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:10:37.515 02:11:25 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:37.515 02:11:25 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:37.515 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:37.515 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:37.515 02:11:25 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:37.515 02:11:25 accel.accel_dualcast -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.46BDS8 -t 1 -w dualcast -y 00:10:37.515 [2024-05-15 02:11:25.309036] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:37.515 [2024-05-15 02:11:25.309236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:37.772 EAL: TSC is not safe to use in SMP mode 00:10:37.772 EAL: TSC is not invariant 00:10:38.031 [2024-05-15 02:11:25.776794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.031 [2024-05-15 02:11:25.877383] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 
00:10:38.031 [2024-05-15 02:11:25.887881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.031 02:11:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 
00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:39.453 02:11:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.453 00:10:39.453 real 0m1.719s 00:10:39.453 user 0m1.219s 00:10:39.453 sys 0m0.519s 00:10:39.453 02:11:27 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:39.453 02:11:27 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 ************************************ 00:10:39.453 END TEST accel_dualcast 00:10:39.453 ************************************ 00:10:39.453 02:11:27 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:39.453 02:11:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:39.453 02:11:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:39.453 02:11:27 accel -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 ************************************ 00:10:39.453 START TEST accel_compare 00:10:39.453 ************************************ 00:10:39.453 02:11:27 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:10:39.453 02:11:27 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:39.453 02:11:27 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:39.453 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.453 02:11:27 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:39.453 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.454 02:11:27 accel.accel_compare -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yboorQ -t 1 -w compare -y 00:10:39.454 [2024-05-15 02:11:27.063815] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:39.454 [2024-05-15 02:11:27.064080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:39.712 EAL: TSC is not safe to use in SMP mode 00:10:39.712 EAL: TSC is not invariant 00:10:39.712 [2024-05-15 02:11:27.536992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.713 [2024-05-15 02:11:27.638116] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 
00:10:39.713 [2024-05-15 02:11:27.652784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:39.713 02:11:27 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.713 02:11:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.094 02:11:28 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:41.094 02:11:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.094 00:10:41.094 real 0m1.730s 00:10:41.094 user 0m1.205s 00:10:41.094 sys 0m0.536s 00:10:41.094 02:11:28 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:41.094 02:11:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:41.094 ************************************ 00:10:41.094 END TEST accel_compare 00:10:41.094 ************************************ 00:10:41.094 02:11:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:41.094 02:11:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:41.094 02:11:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:41.094 02:11:28 accel -- common/autotest_common.sh@10 -- # set +x 00:10:41.094 ************************************ 00:10:41.094 START TEST accel_xor 00:10:41.094 ************************************ 00:10:41.094 02:11:28 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:10:41.094 02:11:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:41.094 02:11:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:41.094 02:11:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.094 02:11:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.094 02:11:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:41.094 02:11:28 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PFo877 -t 1 -w xor -y 00:10:41.094 [2024-05-15 02:11:28.829795] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:41.094 [2024-05-15 02:11:28.829990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:41.384 EAL: TSC is not safe to use in SMP mode 00:10:41.384 EAL: TSC is not invariant 00:10:41.384 [2024-05-15 02:11:29.298653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.642 [2024-05-15 02:11:29.399828] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
00:10:41.642 [2024-05-15 02:11:29.409135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.642 02:11:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.643 02:11:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.578 00:10:42.578 real 0m1.722s 00:10:42.578 user 0m1.216s 00:10:42.578 sys 0m0.526s 00:10:42.578 02:11:30 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.578 02:11:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:42.578 ************************************ 00:10:42.578 END TEST accel_xor 00:10:42.578 ************************************ 00:10:42.578 02:11:30 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:42.578 02:11:30 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:42.578 02:11:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:42.578 02:11:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:42.578 ************************************ 00:10:42.578 START TEST accel_xor 00:10:42.578 ************************************ 00:10:42.578 02:11:30 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:42.578 02:11:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:42.837 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:42.837 02:11:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:42.837 02:11:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:42.837 02:11:30 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zfi0pr -t 1 -w xor -y -x 3 00:10:42.837 [2024-05-15 02:11:30.586369] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:42.837 [2024-05-15 02:11:30.586603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:43.097 EAL: TSC is not safe to use in SMP mode 00:10:43.097 EAL: TSC is not invariant 00:10:43.097 [2024-05-15 02:11:31.068145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.356 [2024-05-15 02:11:31.170765] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
00:10:43.356 [2024-05-15 02:11:31.182970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.356 02:11:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.729 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:44.730 02:11:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.730 00:10:44.730 real 0m1.764s 00:10:44.730 user 0m1.229s 00:10:44.730 sys 0m0.544s 00:10:44.730 02:11:32 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:44.730 02:11:32 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:44.730 ************************************ 00:10:44.730 END TEST accel_xor 00:10:44.730 ************************************ 00:10:44.730 02:11:32 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:44.730 02:11:32 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:44.730 02:11:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:44.730 02:11:32 accel -- common/autotest_common.sh@10 -- # set +x 00:10:44.730 ************************************ 00:10:44.730 START TEST accel_dif_verify 00:10:44.730 ************************************ 00:10:44.730 02:11:32 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:10:44.730 02:11:32 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:44.730 02:11:32 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:44.730 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.730 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.730 02:11:32 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:44.730 02:11:32 accel.accel_dif_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.2ijKsa -t 1 -w dif_verify 00:10:44.730 [2024-05-15 02:11:32.388529] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:44.730 [2024-05-15 02:11:32.388807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:44.987 EAL: TSC is not safe to use in SMP mode 00:10:44.987 EAL: TSC is not invariant 00:10:44.987 [2024-05-15 02:11:32.868827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.987 [2024-05-15 02:11:32.975056] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:44.987 02:11:32 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:44.988 02:11:32 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:44.988 02:11:32 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:44.988 02:11:32 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.988 02:11:32 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.988 02:11:32 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:44.988 02:11:32 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:44.988 02:11:32 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 
00:10:44.988 [2024-05-15 02:11:32.986777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.244 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:45.245 02:11:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.176 02:11:34 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:46.176 02:11:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.176 00:10:46.176 real 0m1.740s 00:10:46.176 user 0m1.215s 00:10:46.176 sys 0m0.528s 00:10:46.177 02:11:34 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:46.177 02:11:34 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:46.177 ************************************ 00:10:46.177 END TEST accel_dif_verify 00:10:46.177 ************************************ 00:10:46.177 02:11:34 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:46.177 02:11:34 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:46.177 02:11:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:46.177 02:11:34 accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.177 ************************************ 00:10:46.177 START TEST accel_dif_generate 00:10:46.177 ************************************ 00:10:46.177 02:11:34 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:10:46.177 02:11:34 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:46.177 02:11:34 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:46.177 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.177 02:11:34 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:46.177 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.177 02:11:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.LSmhmT -t 1 -w dif_generate 00:10:46.177 [2024-05-15 02:11:34.162373] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:10:46.177 [2024-05-15 02:11:34.162619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:46.742 EAL: TSC is not safe to use in SMP mode 00:10:46.742 EAL: TSC is not invariant 00:10:46.742 [2024-05-15 02:11:34.685407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.001 [2024-05-15 02:11:34.773505] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:47.001 [2024-05-15 02:11:34.782622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:47.001 02:11:34 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case 
"$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.001 02:11:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:47.962 02:11:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.962 00:10:47.962 real 0m1.762s 00:10:47.962 user 0m1.187s 00:10:47.962 sys 0m0.586s 00:10:47.962 02:11:35 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:47.962 02:11:35 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:47.962 ************************************ 00:10:47.962 END TEST accel_dif_generate 00:10:47.962 
************************************ 00:10:47.962 02:11:35 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:47.962 02:11:35 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:47.962 02:11:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:47.962 02:11:35 accel -- common/autotest_common.sh@10 -- # set +x 00:10:47.962 ************************************ 00:10:47.962 START TEST accel_dif_generate_copy 00:10:47.962 ************************************ 00:10:47.962 02:11:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:10:47.962 02:11:35 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:47.962 02:11:35 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:47.962 02:11:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.962 02:11:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.962 02:11:35 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:47.962 02:11:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.oFwprr -t 1 -w dif_generate_copy 00:10:47.962 [2024-05-15 02:11:35.958170] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:47.962 [2024-05-15 02:11:35.958392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:48.528 EAL: TSC is not safe to use in SMP mode 00:10:48.528 EAL: TSC is not invariant 00:10:48.528 [2024-05-15 02:11:36.459740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.786 [2024-05-15 02:11:36.551435] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 
00:10:48.786 [2024-05-15 02:11:36.561860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:48.786 02:11:36 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:48.786 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:48.787 02:11:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.723 00:10:49.723 real 0m1.744s 00:10:49.723 user 0m1.204s 00:10:49.723 sys 0m0.552s 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:49.723 02:11:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:49.723 ************************************ 00:10:49.723 END TEST accel_dif_generate_copy 00:10:49.723 ************************************ 00:10:49.723 02:11:37 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:49.723 02:11:37 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.723 02:11:37 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:10:49.723 02:11:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:49.723 02:11:37 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 ************************************ 00:10:49.982 START TEST accel_comp 00:10:49.982 ************************************ 00:10:49.982 02:11:37 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.982 02:11:37 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:49.982 02:11:37 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:49.982 02:11:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.982 02:11:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.982 02:11:37 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.982 02:11:37 accel.accel_comp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.9mnpy7 -t 1 -w compress -l 
/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.982 [2024-05-15 02:11:37.738151] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:49.982 [2024-05-15 02:11:37.738380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:50.240 EAL: TSC is not safe to use in SMP mode 00:10:50.240 EAL: TSC is not invariant 00:10:50.240 [2024-05-15 02:11:38.229155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.499 [2024-05-15 02:11:38.321290] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:50.499 [2024-05-15 02:11:38.330217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 
02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.499 02:11:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:51.874 02:11:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.874 00:10:51.874 real 0m1.736s 00:10:51.874 user 0m1.200s 00:10:51.874 sys 0m0.544s 00:10:51.874 02:11:39 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:51.874 02:11:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:51.874 ************************************ 00:10:51.874 END TEST accel_comp 00:10:51.874 ************************************ 00:10:51.874 02:11:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.874 02:11:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:51.874 02:11:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:51.874 02:11:39 accel -- common/autotest_common.sh@10 -- # set +x 00:10:51.874 ************************************ 00:10:51.874 START TEST accel_decomp 00:10:51.874 ************************************ 00:10:51.874 02:11:39 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.874 02:11:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:51.874 02:11:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:51.874 02:11:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:51.874 02:11:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:51.874 02:11:39 accel.accel_decomp 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.874 02:11:39 accel.accel_decomp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.7LTQNh -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.874 [2024-05-15 02:11:39.510356] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:51.874 [2024-05-15 02:11:39.510578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:52.133 EAL: TSC is not safe to use in SMP mode 00:10:52.133 EAL: TSC is not invariant 00:10:52.133 [2024-05-15 02:11:39.993902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.133 [2024-05-15 02:11:40.081965] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:52.133 [2024-05-15 02:11:40.094569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 
-- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.133 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.134 02:11:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:53.509 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:53.510 02:11:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:53.510 00:10:53.510 real 0m1.728s 00:10:53.510 user 0m1.202s 00:10:53.510 sys 0m0.534s 00:10:53.510 02:11:41 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:53.510 02:11:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:53.510 ************************************ 00:10:53.510 END TEST accel_decomp 00:10:53.510 ************************************ 00:10:53.510 02:11:41 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:53.510 02:11:41 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:53.510 02:11:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:53.510 02:11:41 accel -- common/autotest_common.sh@10 -- # set +x 00:10:53.510 ************************************ 00:10:53.510 START TEST accel_decmop_full 00:10:53.510 
************************************ 00:10:53.510 02:11:41 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:53.510 02:11:41 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:10:53.510 02:11:41 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:10:53.510 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:53.510 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.510 02:11:41 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:53.510 02:11:41 accel.accel_decmop_full -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.5xdLF6 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:53.510 [2024-05-15 02:11:41.278652] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:53.510 [2024-05-15 02:11:41.278820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:53.768 EAL: TSC is not safe to use in SMP mode 00:10:53.768 EAL: TSC is not invariant 00:10:53.768 [2024-05-15 02:11:41.760448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.026 [2024-05-15 02:11:41.858205] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 
00:10:54.026 [2024-05-15 02:11:41.870693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.026 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 
02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:54.027 02:11:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.401 02:11:43 
accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:55.401 02:11:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:55.402 02:11:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:55.402 00:10:55.402 real 0m1.751s 00:10:55.402 user 0m1.227s 00:10:55.402 sys 0m0.535s 00:10:55.402 02:11:43 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:55.402 02:11:43 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:10:55.402 ************************************ 00:10:55.402 END TEST accel_decmop_full 00:10:55.402 ************************************ 00:10:55.402 02:11:43 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:55.402 02:11:43 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:55.402 02:11:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:55.402 02:11:43 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.402 ************************************ 00:10:55.402 START TEST accel_decomp_mcore 00:10:55.402 ************************************ 00:10:55.402 02:11:43 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:55.402 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:55.402 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:55.402 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.402 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.402 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:55.402 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ry8ame -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:55.402 [2024-05-15 02:11:43.067027] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:10:55.402 [2024-05-15 02:11:43.067191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:55.660 EAL: TSC is not safe to use in SMP mode 00:10:55.660 EAL: TSC is not invariant 00:10:55.660 [2024-05-15 02:11:43.566283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.918 [2024-05-15 02:11:43.681904] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:55.918 [2024-05-15 02:11:43.682017] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:10:55.918 [2024-05-15 02:11:43.682039] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:10:55.918 [2024-05-15 02:11:43.682060] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:55.918 [2024-05-15 02:11:43.696221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.918 [2024-05-15 02:11:43.696094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.918 [2024-05-15 02:11:43.696168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.918 [2024-05-15 02:11:43.696210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:10:55.918 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.919 02:11:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.854 00:10:56.854 real 0m1.787s 00:10:56.854 user 0m4.343s 00:10:56.854 sys 0m0.566s 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:56.854 ************************************ 00:10:56.854 02:11:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:56.854 END TEST accel_decomp_mcore 00:10:56.854 ************************************ 00:10:57.112 02:11:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:57.113 02:11:44 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:57.113 02:11:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:57.113 02:11:44 accel -- common/autotest_common.sh@10 -- # set +x 00:10:57.113 ************************************ 00:10:57.113 START TEST accel_decomp_full_mcore 00:10:57.113 ************************************ 00:10:57.113 02:11:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:57.113 02:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:57.113 02:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:57.113 02:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.113 02:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.113 02:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:57.113 02:11:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Jvm2am -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:57.113 [2024-05-15 02:11:44.893068] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:57.113 [2024-05-15 02:11:44.893293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:57.725 EAL: TSC is not safe to use in SMP mode 00:10:57.725 EAL: TSC is not invariant 00:10:57.726 [2024-05-15 02:11:45.384295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.726 [2024-05-15 02:11:45.476145] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:57.726 [2024-05-15 02:11:45.476224] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:10:57.726 [2024-05-15 02:11:45.476244] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:10:57.726 [2024-05-15 02:11:45.476264] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:57.726 [2024-05-15 02:11:45.487834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.726 [2024-05-15 02:11:45.487988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.726 [2024-05-15 02:11:45.487921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.726 [2024-05-15 02:11:45.487976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:57.726 02:11:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read 
-r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.682 00:10:58.682 real 0m1.758s 00:10:58.682 user 0m4.370s 00:10:58.682 sys 0m0.554s 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.682 02:11:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:58.682 ************************************ 00:10:58.682 END TEST accel_decomp_full_mcore 00:10:58.682 ************************************ 00:10:58.682 02:11:46 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.682 02:11:46 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:58.682 02:11:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:58.682 02:11:46 accel -- common/autotest_common.sh@10 -- # set +x 00:10:58.682 ************************************ 00:10:58.682 START TEST accel_decomp_mthread 00:10:58.682 ************************************ 00:10:58.682 02:11:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.682 02:11:46 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:58.682 02:11:46 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:58.682 02:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.940 02:11:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.940 02:11:46 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.940 02:11:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.rzbyh9 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.940 [2024-05-15 02:11:46.690795] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
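For reference, the accel_decomp_mthread case that starts here is a single accel_perf invocation; the full command line is visible in the trace above. A minimal sketch for re-running it by hand follows, assuming the same FreeBSD workspace layout as this run (/usr/home/vagrant/spdk_repo/spdk) and dropping the -c argument, which in the log points at a throwaway per-run JSON accel config:

    # Sketch only; paths match this run's workspace.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/accel_perf \
        -t 1 -w decompress \
        -l $SPDK/test/accel/bib \
        -y -T 2
    # -t 1: run for 1 second ('1 seconds' in the trace)   -w decompress: workload under test
    # -l:   compressed input file                          -y: verify the decompressed output
    # -T 2: two threads, matching the val=2 read by accel.sh (assumed meaning, based on the
    #       "mthread" test name)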
00:10:58.940 [2024-05-15 02:11:46.690995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:59.198 EAL: TSC is not safe to use in SMP mode 00:10:59.198 EAL: TSC is not invariant 00:10:59.198 [2024-05-15 02:11:47.152094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.457 [2024-05-15 02:11:47.266430] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:59.457 [2024-05-15 02:11:47.276638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 
02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.457 02:11:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.834 00:11:00.834 real 0m1.734s 00:11:00.834 user 0m1.246s 00:11:00.834 sys 0m0.498s 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:11:00.834 02:11:48 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:00.834 ************************************ 00:11:00.834 END TEST accel_decomp_mthread 00:11:00.834 ************************************ 00:11:00.834 02:11:48 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:00.834 02:11:48 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:00.834 02:11:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:00.834 02:11:48 accel -- common/autotest_common.sh@10 -- # set +x 00:11:00.834 ************************************ 00:11:00.834 START TEST accel_decomp_full_mthread 00:11:00.834 ************************************ 00:11:00.834 02:11:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:00.834 02:11:48 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:00.834 02:11:48 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:00.834 02:11:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:00.834 02:11:48 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:00.834 02:11:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:00.834 02:11:48 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zgjJgx -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:00.834 [2024-05-15 02:11:48.462310] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:00.834 [2024-05-15 02:11:48.462515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:01.093 EAL: TSC is not safe to use in SMP mode 00:11:01.093 EAL: TSC is not invariant 00:11:01.093 [2024-05-15 02:11:48.955599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.093 [2024-05-15 02:11:49.049364] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
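accel_decomp_full_mthread, starting here, differs from the previous case only by -o 0, which accel.sh resolves to the full 111250-byte input (see the val='111250 bytes' line below) instead of the 4096-byte buffers used above. A sketch of the equivalent direct invocation, assuming accel_perf receives the resolved size through -o:

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 111250 -T 2
    # -o 111250: one full-size buffer per operation instead of the default 4096-byte blocks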
00:11:01.093 [2024-05-15 02:11:49.060516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.093 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:01.094 02:11:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.468 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.469 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.469 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.469 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:02.469 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:02.469 02:11:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:02.469 00:11:02.469 real 0m1.777s 00:11:02.469 user 0m1.240s 00:11:02.469 sys 0m0.550s 00:11:02.469 02:11:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:02.469 02:11:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:02.469 ************************************ 00:11:02.469 END TEST accel_decomp_full_mthread 00:11:02.469 ************************************ 00:11:02.469 02:11:50 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:11:02.469 02:11:50 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.VbRJIl 00:11:02.469 02:11:50 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:02.469 02:11:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.469 02:11:50 accel -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.469 ************************************ 00:11:02.469 START TEST accel_dif_functional_tests 00:11:02.469 ************************************ 00:11:02.469 02:11:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.VbRJIl 00:11:02.469 [2024-05-15 02:11:50.279604] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:02.469 [2024-05-15 02:11:50.279872] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:03.036 EAL: TSC is not safe to use in SMP mode 00:11:03.036 EAL: TSC is not invariant 00:11:03.036 [2024-05-15 02:11:50.747253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.036 [2024-05-15 02:11:50.845479] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:03.036 [2024-05-15 02:11:50.845559] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:11:03.036 [2024-05-15 02:11:50.845572] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:11:03.036 02:11:50 accel -- accel/accel.sh@137 -- # build_accel_config 00:11:03.036 02:11:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:03.036 02:11:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:03.036 02:11:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.036 02:11:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.036 02:11:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:03.036 02:11:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:03.036 02:11:50 accel -- accel/accel.sh@41 -- # jq -r . 00:11:03.036 [2024-05-15 02:11:50.856498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.036 [2024-05-15 02:11:50.856820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.036 [2024-05-15 02:11:50.856762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.036 00:11:03.036 00:11:03.036 CUnit - A unit testing framework for C - Version 2.1-3 00:11:03.036 http://cunit.sourceforge.net/ 00:11:03.036 00:11:03.036 00:11:03.036 Suite: accel_dif 00:11:03.036 Test: verify: DIF generated, GUARD check ...passed 00:11:03.036 Test: verify: DIF generated, APPTAG check ...passed 00:11:03.036 Test: verify: DIF generated, REFTAG check ...passed 00:11:03.036 Test: verify: DIF not generated, GUARD check ...[2024-05-15 02:11:50.874239] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:03.036 [2024-05-15 02:11:50.874315] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:03.036 passed 00:11:03.036 Test: verify: DIF not generated, APPTAG check ...passed 00:11:03.036 Test: verify: DIF not generated, REFTAG check ...passed 00:11:03.036 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:03.036 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:11:03.036 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-05-15 02:11:50.874362] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:03.036 [2024-05-15 02:11:50.874393] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:03.037 [2024-05-15 02:11:50.874413] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:03.037 
[2024-05-15 02:11:50.874447] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:03.037 [2024-05-15 02:11:50.874485] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:03.037 passed 00:11:03.037 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:03.037 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:03.037 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:11:03.037 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 02:11:50.874580] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:03.037 passed 00:11:03.037 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:03.037 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:03.037 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:03.037 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:03.037 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:03.037 Test: generate copy: iovecs-len validate ...passed 00:11:03.037 Test: generate copy: buffer alignment validate ...[2024-05-15 02:11:50.874752] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:11:03.037 passed 00:11:03.037 00:11:03.037 Run Summary: Type Total Ran Passed Failed Inactive 00:11:03.037 suites 1 1 n/a 0 0 00:11:03.037 tests 20 20 20 0 0 00:11:03.037 asserts 204 204 204 0 n/a 00:11:03.037 00:11:03.037 Elapsed time = 0.000 seconds 00:11:03.296 00:11:03.296 real 0m0.771s 00:11:03.296 user 0m0.357s 00:11:03.296 sys 0m0.552s 00:11:03.296 02:11:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:03.296 02:11:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:11:03.296 ************************************ 00:11:03.296 END TEST accel_dif_functional_tests 00:11:03.296 ************************************ 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:11:03.296 00:11:03.296 real 0m39.523s 00:11:03.296 user 0m33.124s 00:11:03.296 sys 0m13.560s 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel -- common/autotest_common.sh@10 -- # set +x 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.296 ************************************ 00:11:03.296 END TEST 
accel 00:11:03.296 ************************************ 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:11:03.296 02:11:51 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:11:03.296 02:11:51 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:11:03.296 02:11:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:11:03.296 02:11:51 -- spdk/autotest.sh@180 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:03.296 02:11:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:03.296 02:11:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:03.296 02:11:51 -- common/autotest_common.sh@10 -- # set +x 00:11:03.296 ************************************ 00:11:03.296 START TEST accel_rpc 00:11:03.296 ************************************ 00:11:03.296 02:11:51 accel_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:03.296 * Looking for test storage... 00:11:03.296 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:11:03.296 02:11:51 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:03.296 02:11:51 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47802 00:11:03.296 02:11:51 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47802 00:11:03.296 02:11:51 accel_rpc -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:03.296 02:11:51 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 47802 ']' 00:11:03.296 02:11:51 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.296 02:11:51 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:03.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.296 02:11:51 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.296 02:11:51 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:03.296 02:11:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.296 [2024-05-15 02:11:51.277699] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:03.296 [2024-05-15 02:11:51.277924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:03.862 EAL: TSC is not safe to use in SMP mode 00:11:03.862 EAL: TSC is not invariant 00:11:03.862 [2024-05-15 02:11:51.793748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.121 [2024-05-15 02:11:51.881780] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
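A note on the accel_dif_functional_tests block above: the dif.c *ERROR* lines are expected, since the negative-path cases deliberately feed corrupted Guard, App Tag and Ref Tag values and assert that verification rejects them; the CUnit summary still reports 20/20 tests and 204/204 asserts passed. The suite is the stand-alone dif binary from the repo; a sketch for re-running it, assuming the per-run -c config seen in the log can be omitted (this run happened to use a 3-core mask):

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/test/accel/dif/dif    # the harness added '-c /tmp//sh-np.<random>', a throwaway config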
00:11:04.121 [2024-05-15 02:11:51.884069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:04.380 02:11:52 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:04.380 02:11:52 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:04.380 02:11:52 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:04.380 02:11:52 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:04.380 02:11:52 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.380 ************************************ 00:11:04.380 START TEST accel_assign_opcode 00:11:04.380 ************************************ 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:04.380 [2024-05-15 02:11:52.280387] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:04.380 [2024-05-15 02:11:52.288378] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.380 software 00:11:04.380 00:11:04.380 real 0m0.065s 00:11:04.380 user 0m0.006s 00:11:04.380 sys 0m0.011s 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.380 02:11:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:04.380 ************************************ 00:11:04.380 END TEST accel_assign_opcode 00:11:04.380 ************************************ 00:11:04.380 02:11:52 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47802 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 47802 ']' 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 47802 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 47802 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@954 -- # tail -1 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:11:04.380 killing process with pid 47802 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47802' 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@965 -- # kill 47802 00:11:04.380 02:11:52 accel_rpc -- common/autotest_common.sh@970 -- # wait 47802 00:11:04.692 00:11:04.692 real 0m1.485s 00:11:04.692 user 0m1.348s 00:11:04.692 sys 0m0.766s 00:11:04.692 02:11:52 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.692 02:11:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.692 ************************************ 00:11:04.692 END TEST accel_rpc 00:11:04.692 ************************************ 00:11:04.692 02:11:52 -- spdk/autotest.sh@181 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:04.692 02:11:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:04.692 02:11:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.692 02:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:04.692 ************************************ 00:11:04.692 START TEST app_cmdline 00:11:04.692 ************************************ 00:11:04.692 02:11:52 app_cmdline -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:04.950 * Looking for test storage... 00:11:04.950 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:11:04.950 02:11:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:04.950 02:11:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47884 00:11:04.950 02:11:52 app_cmdline -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:04.950 02:11:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47884 00:11:04.950 02:11:52 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 47884 ']' 00:11:04.950 02:11:52 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.950 02:11:52 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:04.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.950 02:11:52 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
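The accel_rpc / accel_assign_opcode section that just finished is a short RPC conversation with an spdk_tgt started under --wait-for-rpc. A hand-run sketch, using scripts/rpc.py on the default /var/tmp/spdk.sock as a stand-in for the harness's rpc_cmd wrapper:

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt --wait-for-rpc &                    # wait for the socket before issuing RPCs
    $SPDK/scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted before framework init
    $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software    # last assignment wins
    $SPDK/scripts/rpc.py framework_start_init
    $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints: software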
00:11:04.950 02:11:52 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:04.950 02:11:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:04.950 [2024-05-15 02:11:52.802356] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:04.950 [2024-05-15 02:11:52.802600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:05.514 EAL: TSC is not safe to use in SMP mode 00:11:05.514 EAL: TSC is not invariant 00:11:05.514 [2024-05-15 02:11:53.329223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.514 [2024-05-15 02:11:53.426061] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:05.514 [2024-05-15 02:11:53.428304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.078 02:11:53 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:06.078 02:11:53 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:11:06.078 02:11:53 app_cmdline -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:06.336 { 00:11:06.336 "version": "SPDK v24.05-pre git sha1 2dc74a001", 00:11:06.336 "fields": { 00:11:06.336 "major": 24, 00:11:06.336 "minor": 5, 00:11:06.336 "patch": 0, 00:11:06.336 "suffix": "-pre", 00:11:06.336 "commit": "2dc74a001" 00:11:06.336 } 00:11:06.336 } 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:06.336 02:11:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.336 02:11:54 
app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:06.336 02:11:54 app_cmdline -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:06.593 request: 00:11:06.593 { 00:11:06.593 "method": "env_dpdk_get_mem_stats", 00:11:06.593 "req_id": 1 00:11:06.593 } 00:11:06.593 Got JSON-RPC error response 00:11:06.593 response: 00:11:06.594 { 00:11:06.594 "code": -32601, 00:11:06.594 "message": "Method not found" 00:11:06.594 } 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.594 02:11:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47884 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 47884 ']' 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 47884 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@954 -- # ps -c -o command 47884 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@954 -- # tail -1 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:11:06.594 killing process with pid 47884 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47884' 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@965 -- # kill 47884 00:11:06.594 02:11:54 app_cmdline -- common/autotest_common.sh@970 -- # wait 47884 00:11:06.852 00:11:06.852 real 0m2.123s 00:11:06.852 user 0m2.609s 00:11:06.852 sys 0m0.802s 00:11:06.852 ************************************ 00:11:06.852 END TEST app_cmdline 00:11:06.852 ************************************ 00:11:06.852 02:11:54 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:06.852 02:11:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:06.852 02:11:54 -- spdk/autotest.sh@182 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:06.852 02:11:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:06.852 02:11:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.852 02:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:06.852 ************************************ 00:11:06.852 START TEST version 00:11:06.852 ************************************ 00:11:06.852 02:11:54 version -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:07.110 * Looking for test storage... 
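The app_cmdline test above exercises the --rpcs-allowed filter: spdk_tgt is started so that only spdk_get_version and rpc_get_methods are callable, and anything else (env_dpdk_get_mem_stats in this run) is rejected with JSON-RPC error -32601, Method not found. A sketch of the same sequence by hand, same paths as this run:

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    $SPDK/scripts/rpc.py spdk_get_version                       # {"version": "SPDK v24.05-pre ...", ...}
    $SPDK/scripts/rpc.py rpc_get_methods | jq -r ".[]" | sort   # rpc_get_methods, spdk_get_version
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats                 # fails: -32601 "Method not found"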
00:11:07.110 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:11:07.110 02:11:54 version -- app/version.sh@17 -- # get_header_version major 00:11:07.110 02:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # cut -f2 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:07.110 02:11:54 version -- app/version.sh@17 -- # major=24 00:11:07.110 02:11:54 version -- app/version.sh@18 -- # get_header_version minor 00:11:07.110 02:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # cut -f2 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:07.110 02:11:54 version -- app/version.sh@18 -- # minor=5 00:11:07.110 02:11:54 version -- app/version.sh@19 -- # get_header_version patch 00:11:07.110 02:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # cut -f2 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:07.110 02:11:54 version -- app/version.sh@19 -- # patch=0 00:11:07.110 02:11:54 version -- app/version.sh@20 -- # get_header_version suffix 00:11:07.110 02:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:07.110 02:11:54 version -- app/version.sh@14 -- # cut -f2 00:11:07.110 02:11:54 version -- app/version.sh@20 -- # suffix=-pre 00:11:07.110 02:11:54 version -- app/version.sh@22 -- # version=24.5 00:11:07.110 02:11:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:07.110 02:11:54 version -- app/version.sh@28 -- # version=24.5rc0 00:11:07.110 02:11:54 version -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:11:07.110 02:11:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:07.110 02:11:54 version -- app/version.sh@30 -- # py_version=24.5rc0 00:11:07.110 02:11:54 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:11:07.110 00:11:07.110 real 0m0.200s 00:11:07.110 user 0m0.151s 00:11:07.110 sys 0m0.150s 00:11:07.110 02:11:54 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.110 ************************************ 00:11:07.110 END TEST version 00:11:07.110 ************************************ 00:11:07.110 02:11:54 version -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 02:11:55 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:11:07.110 02:11:55 -- spdk/autotest.sh@185 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:07.110 02:11:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:07.110 02:11:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.110 02:11:55 -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 ************************************ 00:11:07.110 START TEST blockdev_general 00:11:07.110 ************************************ 
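The version test above only scrapes include/spdk/version.h and compares the assembled string against the installed python package. The core of it, lifted from the trace (MAJOR shown; MINOR, PATCH and SUFFIX follow the same pattern):

    H=/usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    echo "$major.$minor"    # 24.5 in this run; with the -pre suffix version.sh reports 24.5rc0,
                            # matching python3 -c 'import spdk; print(spdk.__version__)'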
00:11:07.110 02:11:55 blockdev_general -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:07.413 * Looking for test storage... 00:11:07.413 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:07.413 02:11:55 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=48019 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:07.413 02:11:55 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 48019 00:11:07.413 02:11:55 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 48019 ']' 00:11:07.413 02:11:55 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.413 02:11:55 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:07.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.413 02:11:55 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
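start_spdk_tgt above launches the target with --wait-for-rpc and then sits in waitforlisten until the UNIX-domain RPC socket is usable, giving up after max_retries attempts. A simplified stand-in for that wait loop, assuming the default /var/tmp/spdk.sock and polling only for the socket file (the real helper additionally probes the socket with an RPC call before returning):

# Simplified waitforlisten: bail out if the target dies, otherwise poll until
# its RPC socket shows up (default address assumed, as in the run above).
spdk_tgt_pid=$1
rpc_addr=${2:-/var/tmp/spdk.sock}
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    [[ -S $rpc_addr ]] && break
    sleep 0.5
done
(( i < max_retries )) || { echo "timed out waiting for $rpc_addr" >&2; exit 1; }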
00:11:07.413 02:11:55 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:07.413 02:11:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:07.413 [2024-05-15 02:11:55.193768] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:07.413 [2024-05-15 02:11:55.194022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:07.978 EAL: TSC is not safe to use in SMP mode 00:11:07.978 EAL: TSC is not invariant 00:11:07.978 [2024-05-15 02:11:55.707268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.978 [2024-05-15 02:11:55.795922] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:07.978 [2024-05-15 02:11:55.798262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.544 02:11:56 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:08.544 02:11:56 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:11:08.544 02:11:56 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:11:08.544 02:11:56 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:11:08.544 02:11:56 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:08.544 02:11:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.544 02:11:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:08.544 [2024-05-15 02:11:56.505148] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:08.544 [2024-05-15 02:11:56.505217] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:08.544 00:11:08.544 [2024-05-15 02:11:56.513130] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:08.544 [2024-05-15 02:11:56.513163] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:08.544 00:11:08.544 Malloc0 00:11:08.544 Malloc1 00:11:08.544 Malloc2 00:11:08.803 Malloc3 00:11:08.803 Malloc4 00:11:08.803 Malloc5 00:11:08.803 Malloc6 00:11:08.803 Malloc7 00:11:08.803 Malloc8 00:11:08.803 Malloc9 00:11:08.803 [2024-05-15 02:11:56.601146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:08.803 [2024-05-15 02:11:56.601198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.803 [2024-05-15 02:11:56.601223] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a4ff980 00:11:08.803 [2024-05-15 02:11:56.601232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.803 [2024-05-15 02:11:56.601633] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.803 [2024-05-15 02:11:56.601672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:08.803 TestPT 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:08.803 5000+0 records in 00:11:08.803 5000+0 records out 00:11:08.803 10240000 bytes transferred in 0.027040 secs (378698799 bytes/sec) 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:08.803 
02:11:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:08.803 AIO0 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:11:08.803 02:11:56 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.803 02:11:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.062 02:11:56 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:11:09.062 02:11:56 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:11:09.062 02:11:56 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "812e038a-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "812e038a-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' 
"95487175-8528-8859-9553-75e3bf0f6578"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "95487175-8528-8859-9553-75e3bf0f6578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "373a8640-ad6e-935a-851d-34c8e851fb0c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "373a8640-ad6e-935a-851d-34c8e851fb0c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f60ac666-0faf-f354-8aec-a46f5f5ef7a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f60ac666-0faf-f354-8aec-a46f5f5ef7a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a2470461-d595-9e58-bc67-b55c6e8adba6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a2470461-d595-9e58-bc67-b55c6e8adba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c46d1222-54b0-d656-a930-33f9b8ebdef5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c46d1222-54b0-d656-a930-33f9b8ebdef5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6bc5cadb-5180-ce57-a0ef-571431da6231"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6bc5cadb-5180-ce57-a0ef-571431da6231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "56982d3a-ad05-d353-b837-7de6a0c7e06e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56982d3a-ad05-d353-b837-7de6a0c7e06e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "77fcb62d-abed-885a-a717-12956b28e5f5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "77fcb62d-abed-885a-a717-12956b28e5f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "35663645-dbfb-b85a-83c9-3ce0f53f34e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "35663645-dbfb-b85a-83c9-3ce0f53f34e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "47743731-fc72-4c59-881e-b800987f0def"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "47743731-fc72-4c59-881e-b800987f0def",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b92509d8-6d68-bf52-9c07-676b80612ba9"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b92509d8-6d68-bf52-9c07-676b80612ba9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "813b7fe3-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "813b7fe3-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813b7fe3-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8132e535-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "81341dd3-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "813caa24-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "813caa24-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813caa24-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "81355647-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "81368ed4-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "813de21b-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "813de21b-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813de21b-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "8137c747-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8138ffe7-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "81466eb7-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "81466eb7-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:09.062 
02:11:56 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:11:09.062 02:11:56 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:11:09.062 02:11:56 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:11:09.062 02:11:56 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 48019 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@946 -- # '[' -z 48019 ']' 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 48019 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@954 -- # ps -c -o command 48019 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@954 -- # tail -1 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:11:09.062 killing process with pid 48019 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48019' 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@965 -- # kill 48019 00:11:09.062 02:11:56 blockdev_general -- common/autotest_common.sh@970 -- # wait 48019 00:11:09.320 02:11:57 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:09.320 02:11:57 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:09.320 02:11:57 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:09.320 02:11:57 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:09.320 02:11:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:09.320 ************************************ 00:11:09.320 START TEST bdev_hello_world 00:11:09.320 ************************************ 00:11:09.320 02:11:57 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:09.320 [2024-05-15 02:11:57.282273] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:09.320 [2024-05-15 02:11:57.282563] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:09.958 EAL: TSC is not safe to use in SMP mode 00:11:09.959 EAL: TSC is not invariant 00:11:09.959 [2024-05-15 02:11:57.755496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.959 [2024-05-15 02:11:57.841407] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:09.959 [2024-05-15 02:11:57.843682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.959 [2024-05-15 02:11:57.901642] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:09.959 [2024-05-15 02:11:57.901712] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:09.959 [2024-05-15 02:11:57.909617] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:09.959 [2024-05-15 02:11:57.909657] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:09.959 [2024-05-15 02:11:57.917636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:09.959 [2024-05-15 02:11:57.917693] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:09.959 [2024-05-15 02:11:57.917703] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:10.217 [2024-05-15 02:11:57.965631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:10.217 [2024-05-15 02:11:57.965696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.217 [2024-05-15 02:11:57.965711] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cb77800 00:11:10.217 [2024-05-15 02:11:57.965719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.217 [2024-05-15 02:11:57.966155] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.217 [2024-05-15 02:11:57.966186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:10.217 [2024-05-15 02:11:58.065749] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:10.217 [2024-05-15 02:11:58.065805] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:10.217 [2024-05-15 02:11:58.065820] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:10.217 [2024-05-15 02:11:58.065836] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:10.217 [2024-05-15 02:11:58.065853] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:10.217 [2024-05-15 02:11:58.065861] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:10.217 [2024-05-15 02:11:58.065873] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
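The bdev_hello_world test above simply runs the hello_bdev example against the Malloc0 bdev described in bdev.json and lets the example's exit status decide the result; the NOTICE lines show it writing and then reading back "Hello World!". A sketch of the same invocation with an explicit check on the read-back string (paths and flags copied from the run above):

# Run hello_bdev against Malloc0 and verify the read-back line; the test above
# relies on the exit status alone, the grep here is an extra sanity check.
SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
out=$("$SPDK_DIR/build/examples/hello_bdev" \
      --json "$SPDK_DIR/test/bdev/bdev.json" -b Malloc0 2>&1)
echo "$out" | grep -q 'Read string from bdev : Hello World!'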
00:11:10.217 00:11:10.217 [2024-05-15 02:11:58.065898] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:10.475 00:11:10.475 real 0m0.987s 00:11:10.475 user 0m0.475s 00:11:10.475 sys 0m0.510s 00:11:10.475 02:11:58 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:10.475 02:11:58 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:10.475 ************************************ 00:11:10.475 END TEST bdev_hello_world 00:11:10.475 ************************************ 00:11:10.475 02:11:58 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:11:10.475 02:11:58 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:10.475 02:11:58 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:10.475 02:11:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:10.475 ************************************ 00:11:10.475 START TEST bdev_bounds 00:11:10.475 ************************************ 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=48071 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 48071' 00:11:10.475 Process bdevio pid: 48071 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 48071 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 48071 ']' 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:10.475 02:11:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:10.475 [2024-05-15 02:11:58.314562] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:10.475 [2024-05-15 02:11:58.314797] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:11.042 EAL: TSC is not safe to use in SMP mode 00:11:11.042 EAL: TSC is not invariant 00:11:11.042 [2024-05-15 02:11:58.800253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.042 [2024-05-15 02:11:58.907213] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:11.042 [2024-05-15 02:11:58.907304] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
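bdev_bounds above starts the bdevio application with -w so the suites are kicked off over RPC rather than immediately, an -s 2048 memory reservation, and the same bdev.json, then waits for the socket and drives all of the suites listed below through tests.py perform_tests. A condensed sketch of that sequence, again assuming the default RPC socket:

# Condensed bdev_bounds flow: bdevio is started in wait mode, the socket is
# awaited (stand-in for waitforlisten), then tests.py drives every suite.
SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 2048 --json "$SPDK_DIR/test/bdev/bdev.json" &
bdevio_pid=$!
while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.5; done
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"   # the real script tears this down via the killprocess helper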
00:11:11.042 [2024-05-15 02:11:58.907321] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:11:11.042 [2024-05-15 02:11:58.911567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.042 [2024-05-15 02:11:58.911475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.042 [2024-05-15 02:11:58.911556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.042 [2024-05-15 02:11:58.970428] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.042 [2024-05-15 02:11:58.970502] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.042 [2024-05-15 02:11:58.978409] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.042 [2024-05-15 02:11:58.978451] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.042 [2024-05-15 02:11:58.986427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:11.042 [2024-05-15 02:11:58.986469] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:11.042 [2024-05-15 02:11:58.986477] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:11.042 [2024-05-15 02:11:59.034437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:11.042 [2024-05-15 02:11:59.034504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.042 [2024-05-15 02:11:59.034523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c98a800 00:11:11.042 [2024-05-15 02:11:59.034540] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.042 [2024-05-15 02:11:59.034966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.042 [2024-05-15 02:11:59.034989] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:11.609 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:11.609 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:11:11.609 02:11:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:11.869 I/O targets: 00:11:11.869 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:11.869 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:11.869 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:11.869 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:11.869 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:11.869 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:11.869 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:11.869 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:11.869 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:11.869 00:11:11.869 00:11:11.869 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.869 http://cunit.sourceforge.net/ 00:11:11.869 00:11:11.869 00:11:11.869 Suite: bdevio tests on: 
AIO0 00:11:11.869 Test: blockdev write read block ...passed 00:11:11.869 Test: blockdev write zeroes read block ...passed 00:11:11.869 Test: blockdev write zeroes read no split ...passed 00:11:11.869 Test: blockdev write zeroes read split ...passed 00:11:11.869 Test: blockdev write zeroes read split partial ...passed 00:11:11.869 Test: blockdev reset ...passed 00:11:11.869 Test: blockdev write read 8 blocks ...passed 00:11:11.869 Test: blockdev write read size > 128k ...passed 00:11:11.869 Test: blockdev write read invalid size ...passed 00:11:11.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.869 Test: blockdev write read max offset ...passed 00:11:11.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.869 Test: blockdev writev readv 8 blocks ...passed 00:11:11.869 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.869 Test: blockdev writev readv block ...passed 00:11:11.869 Test: blockdev writev readv size > 128k ...passed 00:11:11.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.869 Test: blockdev comparev and writev ...passed 00:11:11.869 Test: blockdev nvme passthru rw ...passed 00:11:11.869 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.869 Test: blockdev nvme admin passthru ...passed 00:11:11.869 Test: blockdev copy ...passed 00:11:11.869 Suite: bdevio tests on: raid1 00:11:11.869 Test: blockdev write read block ...passed 00:11:11.869 Test: blockdev write zeroes read block ...passed 00:11:11.869 Test: blockdev write zeroes read no split ...passed 00:11:11.869 Test: blockdev write zeroes read split ...passed 00:11:11.869 Test: blockdev write zeroes read split partial ...passed 00:11:11.869 Test: blockdev reset ...passed 00:11:11.869 Test: blockdev write read 8 blocks ...passed 00:11:11.869 Test: blockdev write read size > 128k ...passed 00:11:11.869 Test: blockdev write read invalid size ...passed 00:11:11.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.869 Test: blockdev write read max offset ...passed 00:11:11.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.869 Test: blockdev writev readv 8 blocks ...passed 00:11:11.869 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.869 Test: blockdev writev readv block ...passed 00:11:11.869 Test: blockdev writev readv size > 128k ...passed 00:11:11.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.869 Test: blockdev comparev and writev ...passed 00:11:11.869 Test: blockdev nvme passthru rw ...passed 00:11:11.869 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.869 Test: blockdev nvme admin passthru ...passed 00:11:11.869 Test: blockdev copy ...passed 00:11:11.869 Suite: bdevio tests on: concat0 00:11:11.869 Test: blockdev write read block ...passed 00:11:11.869 Test: blockdev write zeroes read block ...passed 00:11:11.869 Test: blockdev write zeroes read no split ...passed 00:11:11.869 Test: blockdev write zeroes read split ...passed 00:11:11.869 Test: blockdev write zeroes read split partial ...passed 00:11:11.869 Test: blockdev reset ...passed 00:11:11.869 Test: blockdev write read 8 blocks ...passed 00:11:11.869 Test: blockdev write read size > 128k ...passed 00:11:11.869 Test: blockdev write read invalid size ...passed 00:11:11.869 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.869 Test: blockdev write read max offset ...passed 00:11:11.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.869 Test: blockdev writev readv 8 blocks ...passed 00:11:11.869 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.869 Test: blockdev writev readv block ...passed 00:11:11.869 Test: blockdev writev readv size > 128k ...passed 00:11:11.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.869 Test: blockdev comparev and writev ...passed 00:11:11.869 Test: blockdev nvme passthru rw ...passed 00:11:11.869 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.869 Test: blockdev nvme admin passthru ...passed 00:11:11.869 Test: blockdev copy ...passed 00:11:11.869 Suite: bdevio tests on: raid0 00:11:11.869 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.870 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.870 Test: blockdev writev readv block ...passed 00:11:11.870 Test: blockdev writev readv size > 128k ...passed 00:11:11.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.870 Test: blockdev comparev and writev ...passed 00:11:11.870 Test: blockdev nvme passthru rw ...passed 00:11:11.870 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.870 Test: blockdev nvme admin passthru ...passed 00:11:11.870 Test: blockdev copy ...passed 00:11:11.870 Suite: bdevio tests on: TestPT 00:11:11.870 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.870 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.870 Test: blockdev writev readv block ...passed 00:11:11.870 Test: blockdev writev readv size > 128k ...passed 
00:11:11.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.870 Test: blockdev comparev and writev ...passed 00:11:11.870 Test: blockdev nvme passthru rw ...passed 00:11:11.870 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.870 Test: blockdev nvme admin passthru ...passed 00:11:11.870 Test: blockdev copy ...passed 00:11:11.870 Suite: bdevio tests on: Malloc2p7 00:11:11.870 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.870 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.870 Test: blockdev writev readv block ...passed 00:11:11.870 Test: blockdev writev readv size > 128k ...passed 00:11:11.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.870 Test: blockdev comparev and writev ...passed 00:11:11.870 Test: blockdev nvme passthru rw ...passed 00:11:11.870 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.870 Test: blockdev nvme admin passthru ...passed 00:11:11.870 Test: blockdev copy ...passed 00:11:11.870 Suite: bdevio tests on: Malloc2p6 00:11:11.870 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.870 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.870 Test: blockdev writev readv block ...passed 00:11:11.870 Test: blockdev writev readv size > 128k ...passed 00:11:11.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.870 Test: blockdev comparev and writev ...passed 00:11:11.870 Test: blockdev nvme passthru rw ...passed 00:11:11.870 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.870 Test: blockdev nvme admin passthru ...passed 00:11:11.870 Test: blockdev copy ...passed 00:11:11.870 Suite: bdevio tests on: Malloc2p5 00:11:11.870 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev 
write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.870 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.870 Test: blockdev writev readv block ...passed 00:11:11.870 Test: blockdev writev readv size > 128k ...passed 00:11:11.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.870 Test: blockdev comparev and writev ...passed 00:11:11.870 Test: blockdev nvme passthru rw ...passed 00:11:11.870 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.870 Test: blockdev nvme admin passthru ...passed 00:11:11.870 Test: blockdev copy ...passed 00:11:11.870 Suite: bdevio tests on: Malloc2p4 00:11:11.870 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.870 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.870 Test: blockdev writev readv block ...passed 00:11:11.870 Test: blockdev writev readv size > 128k ...passed 00:11:11.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.870 Test: blockdev comparev and writev ...passed 00:11:11.870 Test: blockdev nvme passthru rw ...passed 00:11:11.870 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.870 Test: blockdev nvme admin passthru ...passed 00:11:11.870 Test: blockdev copy ...passed 00:11:11.870 Suite: bdevio tests on: Malloc2p3 00:11:11.870 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.870 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.870 Test: blockdev writev readv block ...passed 00:11:11.870 Test: blockdev writev readv size > 128k ...passed 00:11:11.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.870 Test: blockdev comparev and writev ...passed 00:11:11.870 Test: blockdev nvme passthru rw ...passed 00:11:11.870 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.870 Test: blockdev nvme admin passthru ...passed 00:11:11.870 Test: blockdev copy ...passed 00:11:11.870 Suite: bdevio tests on: Malloc2p2 00:11:11.870 Test: blockdev write read block ...passed 00:11:11.870 Test: blockdev write zeroes read block ...passed 00:11:11.870 Test: blockdev write zeroes read no split ...passed 00:11:11.870 Test: blockdev write zeroes read split ...passed 00:11:11.870 Test: blockdev write zeroes read split partial ...passed 00:11:11.870 Test: blockdev reset ...passed 00:11:11.870 Test: blockdev write read 8 blocks ...passed 00:11:11.870 Test: blockdev write read size > 128k ...passed 00:11:11.870 Test: blockdev write read invalid size ...passed 00:11:11.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.870 Test: blockdev write read max offset ...passed 00:11:11.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.870 Test: blockdev writev readv 8 blocks ...passed 00:11:11.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.871 Test: blockdev writev readv block ...passed 00:11:11.871 Test: blockdev writev readv size > 128k ...passed 00:11:11.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.871 Test: blockdev comparev and writev ...passed 00:11:11.871 Test: blockdev nvme passthru rw ...passed 00:11:11.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.871 Test: blockdev nvme admin passthru ...passed 00:11:11.871 Test: blockdev copy ...passed 00:11:11.871 Suite: bdevio tests on: Malloc2p1 00:11:11.871 Test: blockdev write read block ...passed 00:11:11.871 Test: blockdev write zeroes read block ...passed 00:11:11.871 Test: blockdev write zeroes read no split ...passed 00:11:11.871 Test: blockdev write zeroes read split ...passed 00:11:11.871 Test: blockdev write zeroes read split partial ...passed 00:11:11.871 Test: blockdev reset ...passed 00:11:11.871 Test: blockdev write read 8 blocks ...passed 00:11:11.871 Test: blockdev write read size > 128k ...passed 00:11:11.871 Test: blockdev write read invalid size ...passed 00:11:11.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.871 Test: blockdev write read max offset ...passed 00:11:11.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.871 Test: blockdev writev readv 8 blocks ...passed 00:11:11.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.871 Test: blockdev writev readv block ...passed 00:11:11.871 Test: blockdev writev readv size > 128k ...passed 00:11:11.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.871 Test: blockdev comparev and writev ...passed 
00:11:11.871 Test: blockdev nvme passthru rw ...passed 00:11:11.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.871 Test: blockdev nvme admin passthru ...passed 00:11:11.871 Test: blockdev copy ...passed 00:11:11.871 Suite: bdevio tests on: Malloc2p0 00:11:11.871 Test: blockdev write read block ...passed 00:11:11.871 Test: blockdev write zeroes read block ...passed 00:11:11.871 Test: blockdev write zeroes read no split ...passed 00:11:11.871 Test: blockdev write zeroes read split ...passed 00:11:11.871 Test: blockdev write zeroes read split partial ...passed 00:11:11.871 Test: blockdev reset ...passed 00:11:11.871 Test: blockdev write read 8 blocks ...passed 00:11:11.871 Test: blockdev write read size > 128k ...passed 00:11:11.871 Test: blockdev write read invalid size ...passed 00:11:11.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.871 Test: blockdev write read max offset ...passed 00:11:11.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.871 Test: blockdev writev readv 8 blocks ...passed 00:11:11.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.871 Test: blockdev writev readv block ...passed 00:11:11.871 Test: blockdev writev readv size > 128k ...passed 00:11:11.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.871 Test: blockdev comparev and writev ...passed 00:11:11.871 Test: blockdev nvme passthru rw ...passed 00:11:11.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.871 Test: blockdev nvme admin passthru ...passed 00:11:11.871 Test: blockdev copy ...passed 00:11:11.871 Suite: bdevio tests on: Malloc1p1 00:11:11.871 Test: blockdev write read block ...passed 00:11:11.871 Test: blockdev write zeroes read block ...passed 00:11:11.871 Test: blockdev write zeroes read no split ...passed 00:11:11.871 Test: blockdev write zeroes read split ...passed 00:11:11.871 Test: blockdev write zeroes read split partial ...passed 00:11:11.871 Test: blockdev reset ...passed 00:11:11.871 Test: blockdev write read 8 blocks ...passed 00:11:11.871 Test: blockdev write read size > 128k ...passed 00:11:11.871 Test: blockdev write read invalid size ...passed 00:11:11.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.871 Test: blockdev write read max offset ...passed 00:11:11.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.871 Test: blockdev writev readv 8 blocks ...passed 00:11:11.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.871 Test: blockdev writev readv block ...passed 00:11:11.871 Test: blockdev writev readv size > 128k ...passed 00:11:11.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.871 Test: blockdev comparev and writev ...passed 00:11:11.871 Test: blockdev nvme passthru rw ...passed 00:11:11.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.871 Test: blockdev nvme admin passthru ...passed 00:11:11.871 Test: blockdev copy ...passed 00:11:11.871 Suite: bdevio tests on: Malloc1p0 00:11:11.871 Test: blockdev write read block ...passed 00:11:11.871 Test: blockdev write zeroes read block ...passed 00:11:11.871 Test: blockdev write zeroes read no split ...passed 00:11:11.871 Test: blockdev write zeroes read split ...passed 00:11:11.871 Test: blockdev write 
zeroes read split partial ...passed 00:11:11.871 Test: blockdev reset ...passed 00:11:11.871 Test: blockdev write read 8 blocks ...passed 00:11:11.871 Test: blockdev write read size > 128k ...passed 00:11:11.871 Test: blockdev write read invalid size ...passed 00:11:11.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.871 Test: blockdev write read max offset ...passed 00:11:11.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.871 Test: blockdev writev readv 8 blocks ...passed 00:11:11.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.871 Test: blockdev writev readv block ...passed 00:11:11.871 Test: blockdev writev readv size > 128k ...passed 00:11:11.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.871 Test: blockdev comparev and writev ...passed 00:11:11.871 Test: blockdev nvme passthru rw ...passed 00:11:11.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.871 Test: blockdev nvme admin passthru ...passed 00:11:11.871 Test: blockdev copy ...passed 00:11:11.871 Suite: bdevio tests on: Malloc0 00:11:11.871 Test: blockdev write read block ...passed 00:11:11.871 Test: blockdev write zeroes read block ...passed 00:11:11.871 Test: blockdev write zeroes read no split ...passed 00:11:11.871 Test: blockdev write zeroes read split ...passed 00:11:11.871 Test: blockdev write zeroes read split partial ...passed 00:11:11.871 Test: blockdev reset ...passed 00:11:11.871 Test: blockdev write read 8 blocks ...passed 00:11:11.871 Test: blockdev write read size > 128k ...passed 00:11:11.871 Test: blockdev write read invalid size ...passed 00:11:11.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.871 Test: blockdev write read max offset ...passed 00:11:11.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.871 Test: blockdev writev readv 8 blocks ...passed 00:11:11.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.871 Test: blockdev writev readv block ...passed 00:11:11.871 Test: blockdev writev readv size > 128k ...passed 00:11:11.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.871 Test: blockdev comparev and writev ...passed 00:11:11.871 Test: blockdev nvme passthru rw ...passed 00:11:11.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.871 Test: blockdev nvme admin passthru ...passed 00:11:11.871 Test: blockdev copy ...passed 00:11:11.871 00:11:11.871 Run Summary: Type Total Ran Passed Failed Inactive 00:11:11.871 suites 16 16 n/a 0 0 00:11:11.871 tests 368 368 368 0 0 00:11:11.871 asserts 2224 2224 2224 0 n/a 00:11:11.871 00:11:11.871 Elapsed time = 0.523 seconds 00:11:12.129 0 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 48071 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 48071 ']' 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 48071 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps -c -o 
command 48071 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # tail -1 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=bdevio 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' bdevio = sudo ']' 00:11:12.129 killing process with pid 48071 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48071' 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 48071 00:11:12.129 02:11:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 48071 00:11:12.129 02:12:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:11:12.129 00:11:12.129 real 0m1.823s 00:11:12.129 user 0m3.994s 00:11:12.129 sys 0m0.674s 00:11:12.129 ************************************ 00:11:12.129 END TEST bdev_bounds 00:11:12.129 ************************************ 00:11:12.129 02:12:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.129 02:12:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:12.386 02:12:00 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:12.386 02:12:00 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:12.386 02:12:00 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.386 02:12:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:12.386 ************************************ 00:11:12.386 START TEST bdev_nbd 00:11:12.386 ************************************ 00:11:12.387 02:12:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:12.387 02:12:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:11:12.387 02:12:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:11:12.387 02:12:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:11:12.387 00:11:12.387 real 0m0.004s 00:11:12.387 user 0m0.003s 00:11:12.387 sys 0m0.007s 00:11:12.387 02:12:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.387 ************************************ 00:11:12.387 END TEST bdev_nbd 00:11:12.387 ************************************ 00:11:12.387 02:12:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:12.387 02:12:00 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:11:12.387 02:12:00 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:11:12.387 02:12:00 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:11:12.387 02:12:00 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:11:12.387 02:12:00 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:12.387 02:12:00 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.387 02:12:00 blockdev_general -- common/autotest_common.sh@10 -- 
# set +x 00:11:12.387 ************************************ 00:11:12.387 START TEST bdev_fio 00:11:12.387 ************************************ 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:11:12.387 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:11:12.387 02:12:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc1p0 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:11:13.320 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # 
echo '[job_concat0]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:13.321 02:12:01 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:13.321 ************************************ 00:11:13.321 START TEST bdev_fio_rw_verify 00:11:13.321 ************************************ 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:13.321 02:12:01 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:13.321 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_TestPT: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:13.321 fio-3.35 00:11:13.321 Starting 16 threads 00:11:13.886 EAL: TSC is not safe to use in SMP mode 00:11:13.886 EAL: TSC is not invariant 00:11:26.073 00:11:26.073 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102689: Wed May 15 02:12:12 2024 00:11:26.074 read: IOPS=167k, BW=651MiB/s (683MB/s)(6516MiB/10002msec) 00:11:26.074 slat (nsec): min=282, max=903911k, avg=4960.48, stdev=795977.91 00:11:26.074 clat (nsec): min=754, max=906021k, avg=65739.75, stdev=2921378.56 00:11:26.074 lat (nsec): min=1954, max=906029k, avg=70700.23, stdev=3028334.77 00:11:26.074 clat percentiles (usec): 00:11:26.074 | 50.000th=[ 12], 99.000th=[ 685], 99.900th=[ 1647], 00:11:26.074 | 99.990th=[ 94897], 99.999th=[497026] 00:11:26.074 write: IOPS=276k, BW=1079MiB/s (1131MB/s)(10.4GiB/9880msec); 0 zone resets 00:11:26.074 slat (nsec): min=663, max=406195k, avg=30536.39, stdev=1079431.90 00:11:26.074 clat (nsec): min=710, max=622556k, avg=146974.74, stdev=2544207.68 00:11:26.074 lat (usec): min=12, max=622580, avg=177.51, stdev=2763.68 00:11:26.074 clat percentiles (usec): 00:11:26.074 | 50.000th=[ 64], 99.000th=[ 676], 99.900th=[ 7504], 00:11:26.074 | 99.990th=[ 95945], 99.999th=[212861] 00:11:26.074 bw ( MiB/s): min= 355, max= 1961, per=98.88%, avg=1066.76, stdev=31.81, samples=296 00:11:26.074 iops : min=91109, max=502229, avg=273085.19, stdev=8144.63, samples=296 00:11:26.074 lat (nsec) : 750=0.01%, 1000=0.01% 00:11:26.074 lat (usec) : 2=0.04%, 4=5.66%, 10=16.38%, 20=19.28%, 50=19.04% 00:11:26.074 lat (usec) : 100=21.65%, 250=15.46%, 500=0.77%, 750=1.08%, 1000=0.40% 00:11:26.074 lat (msec) : 2=0.08%, 4=0.05%, 10=0.02%, 20=0.02%, 50=0.02% 00:11:26.074 lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:26.074 cpu : usr=55.60%, sys=3.14%, ctx=521110, majf=0, minf=564 00:11:26.074 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.074 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.074 issued rwts: total=1668136,2728728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.074 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:26.074 00:11:26.074 Run status group 0 (all jobs): 00:11:26.074 READ: bw=651MiB/s (683MB/s), 651MiB/s-651MiB/s (683MB/s-683MB/s), io=6516MiB (6833MB), run=10002-10002msec 00:11:26.074 WRITE: bw=1079MiB/s (1131MB/s), 1079MiB/s-1079MiB/s (1131MB/s-1131MB/s), io=10.4GiB (11.2GB), run=9880-9880msec 00:11:26.074 00:11:26.074 real 0m12.383s 00:11:26.074 user 1m33.540s 00:11:26.074 sys 0m7.534s 00:11:26.074 02:12:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.074 02:12:13 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:11:26.074 ************************************ 00:11:26.074 END TEST 
bdev_fio_rw_verify 00:11:26.074 ************************************ 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:11:26.074 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:26.075 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "812e038a-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "812e038a-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "95487175-8528-8859-9553-75e3bf0f6578"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "95487175-8528-8859-9553-75e3bf0f6578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": 
true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "373a8640-ad6e-935a-851d-34c8e851fb0c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "373a8640-ad6e-935a-851d-34c8e851fb0c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f60ac666-0faf-f354-8aec-a46f5f5ef7a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f60ac666-0faf-f354-8aec-a46f5f5ef7a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a2470461-d595-9e58-bc67-b55c6e8adba6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a2470461-d595-9e58-bc67-b55c6e8adba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c46d1222-54b0-d656-a930-33f9b8ebdef5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c46d1222-54b0-d656-a930-33f9b8ebdef5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6bc5cadb-5180-ce57-a0ef-571431da6231"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6bc5cadb-5180-ce57-a0ef-571431da6231",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "56982d3a-ad05-d353-b837-7de6a0c7e06e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56982d3a-ad05-d353-b837-7de6a0c7e06e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "77fcb62d-abed-885a-a717-12956b28e5f5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "77fcb62d-abed-885a-a717-12956b28e5f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "35663645-dbfb-b85a-83c9-3ce0f53f34e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "35663645-dbfb-b85a-83c9-3ce0f53f34e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "47743731-fc72-4c59-881e-b800987f0def"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "47743731-fc72-4c59-881e-b800987f0def",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' 
' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b92509d8-6d68-bf52-9c07-676b80612ba9"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b92509d8-6d68-bf52-9c07-676b80612ba9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "813b7fe3-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "813b7fe3-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813b7fe3-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8132e535-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "81341dd3-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "813caa24-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "813caa24-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": 
{' ' "raid": {' ' "uuid": "813caa24-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "81355647-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "81368ed4-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "813de21b-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "813de21b-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813de21b-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "8137c747-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8138ffe7-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "81466eb7-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "81466eb7-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:26.075 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:11:26.075 Malloc1p0 00:11:26.075 Malloc1p1 00:11:26.075 Malloc2p0 00:11:26.075 Malloc2p1 00:11:26.075 Malloc2p2 00:11:26.075 Malloc2p3 00:11:26.075 Malloc2p4 00:11:26.075 Malloc2p5 00:11:26.075 Malloc2p6 00:11:26.075 Malloc2p7 00:11:26.075 TestPT 00:11:26.075 raid0 00:11:26.075 concat0 ]] 00:11:26.075 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 
'select(.supported_io_types.unmap == true) | .name' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "812e038a-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "812e038a-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "95487175-8528-8859-9553-75e3bf0f6578"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "95487175-8528-8859-9553-75e3bf0f6578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "373a8640-ad6e-935a-851d-34c8e851fb0c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "373a8640-ad6e-935a-851d-34c8e851fb0c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f60ac666-0faf-f354-8aec-a46f5f5ef7a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f60ac666-0faf-f354-8aec-a46f5f5ef7a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a2470461-d595-9e58-bc67-b55c6e8adba6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a2470461-d595-9e58-bc67-b55c6e8adba6",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c46d1222-54b0-d656-a930-33f9b8ebdef5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c46d1222-54b0-d656-a930-33f9b8ebdef5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6bc5cadb-5180-ce57-a0ef-571431da6231"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6bc5cadb-5180-ce57-a0ef-571431da6231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "56982d3a-ad05-d353-b837-7de6a0c7e06e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56982d3a-ad05-d353-b837-7de6a0c7e06e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "77fcb62d-abed-885a-a717-12956b28e5f5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "77fcb62d-abed-885a-a717-12956b28e5f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' 
}' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "35663645-dbfb-b85a-83c9-3ce0f53f34e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "35663645-dbfb-b85a-83c9-3ce0f53f34e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "47743731-fc72-4c59-881e-b800987f0def"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "47743731-fc72-4c59-881e-b800987f0def",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b92509d8-6d68-bf52-9c07-676b80612ba9"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b92509d8-6d68-bf52-9c07-676b80612ba9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "813b7fe3-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "813b7fe3-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813b7fe3-1260-11ef-99fd-bfc7c66e2865",' ' 
"strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8132e535-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "81341dd3-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "813caa24-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "813caa24-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813caa24-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "81355647-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "81368ed4-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "813de21b-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "813de21b-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "813de21b-1260-11ef-99fd-bfc7c66e2865",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "8137c747-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8138ffe7-1260-11ef-99fd-bfc7c66e2865",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "81466eb7-1260-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "81466eb7-1260-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:11:26.076 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.077 02:12:13 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:26.077 ************************************ 00:11:26.077 START TEST bdev_fio_trim 
00:11:26.077 ************************************ 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:26.077 02:12:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:11:26.077 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:26.077 fio-3.35 00:11:26.077 Starting 14 threads 00:11:26.336 EAL: TSC is not safe to use in SMP mode 00:11:26.336 EAL: TSC is not invariant 00:11:38.549 00:11:38.549 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102708: Wed May 15 02:12:24 2024 00:11:38.549 write: IOPS=1825k, BW=7130MiB/s (7477MB/s)(69.6GiB/10001msec); 0 zone resets 00:11:38.549 slat (nsec): min=254, max=2362.2M, avg=2057.64, stdev=902824.29 00:11:38.549 clat (nsec): min=1342, max=2362.2M, avg=20153.07, stdev=1210236.42 00:11:38.549 lat (nsec): min=1877, max=2362.2M, avg=22210.71, stdev=1509888.82 00:11:38.549 clat percentiles (usec): 00:11:38.549 | 50.000th=[ 8], 99.000th=[ 27], 99.900th=[ 955], 99.990th=[ 7177], 00:11:38.549 | 99.999th=[94897] 00:11:38.549 bw ( MiB/s): min= 2213, max=12823, per=100.00%, avg=7551.03, stdev=245.83, samples=254 00:11:38.549 iops : min=566574, max=3282812, avg=1933060.14, stdev=62932.86, samples=254 00:11:38.549 trim: IOPS=1825k, BW=7130MiB/s (7477MB/s)(69.6GiB/10001msec); 0 zone resets 00:11:38.549 slat (nsec): min=620, max=314518k, avg=2044.16, stdev=224492.12 00:11:38.549 clat (nsec): min=377, max=2362.2M, avg=14593.62, stdev=1449623.65 00:11:38.549 lat (nsec): min=1611, max=2362.2M, avg=16637.78, stdev=1466906.86 00:11:38.549 clat percentiles (usec): 00:11:38.549 | 50.000th=[ 9], 99.000th=[ 26], 99.900th=[ 37], 99.990th=[ 88], 00:11:38.549 | 99.999th=[94897] 00:11:38.549 bw ( MiB/s): min= 2213, max=12823, per=100.00%, avg=7551.04, stdev=245.83, 
samples=254 00:11:38.549 iops : min=566574, max=3282812, avg=1933061.66, stdev=62932.90, samples=254 00:11:38.549 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:38.549 lat (usec) : 2=0.11%, 4=14.67%, 10=49.73%, 20=31.73%, 50=3.40% 00:11:38.549 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.30% 00:11:38.549 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:11:38.549 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:38.549 lat (msec) : 2000=0.01%, >=2000=0.01% 00:11:38.549 cpu : usr=62.79%, sys=4.69%, ctx=912024, majf=0, minf=0 00:11:38.549 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.549 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.549 issued rwts: total=0,18255329,18255337,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.549 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:38.549 00:11:38.549 Run status group 0 (all jobs): 00:11:38.549 WRITE: bw=7130MiB/s (7477MB/s), 7130MiB/s-7130MiB/s (7477MB/s-7477MB/s), io=69.6GiB (74.8GB), run=10001-10001msec 00:11:38.549 TRIM: bw=7130MiB/s (7477MB/s), 7130MiB/s-7130MiB/s (7477MB/s-7477MB/s), io=69.6GiB (74.8GB), run=10001-10001msec 00:11:38.549 00:11:38.549 real 0m12.231s 00:11:38.549 user 1m33.681s 00:11:38.549 sys 0m9.451s 00:11:38.549 ************************************ 00:11:38.549 END TEST bdev_fio_trim 00:11:38.549 02:12:25 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.550 02:12:25 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:11:38.550 ************************************ 00:11:38.550 02:12:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:11:38.550 02:12:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:38.550 02:12:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:11:38.550 /usr/home/vagrant/spdk_repo/spdk 00:11:38.550 02:12:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:11:38.550 00:11:38.550 real 0m25.640s 00:11:38.550 user 3m7.585s 00:11:38.550 sys 0m17.619s 00:11:38.550 02:12:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.550 02:12:25 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:38.550 ************************************ 00:11:38.550 END TEST bdev_fio 00:11:38.550 ************************************ 00:11:38.550 02:12:25 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:38.550 02:12:25 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:38.550 02:12:25 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:11:38.550 02:12:25 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.550 02:12:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:38.550 ************************************ 00:11:38.550 START TEST bdev_verify 00:11:38.550 ************************************ 00:11:38.550 02:12:25 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:38.550 [2024-05-15 02:12:25.896379] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:38.550 [2024-05-15 02:12:25.896646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:38.550 EAL: TSC is not safe to use in SMP mode 00:11:38.550 EAL: TSC is not invariant 00:11:38.550 [2024-05-15 02:12:26.378745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.550 [2024-05-15 02:12:26.485552] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:38.550 [2024-05-15 02:12:26.485639] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:11:38.550 [2024-05-15 02:12:26.489211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.550 [2024-05-15 02:12:26.489199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.550 [2024-05-15 02:12:26.550150] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:38.550 [2024-05-15 02:12:26.550231] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:38.808 [2024-05-15 02:12:26.558093] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:38.808 [2024-05-15 02:12:26.558155] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:38.808 [2024-05-15 02:12:26.566111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:38.808 [2024-05-15 02:12:26.566173] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:38.808 [2024-05-15 02:12:26.566186] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:38.808 [2024-05-15 02:12:26.614105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:38.808 [2024-05-15 02:12:26.614186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.808 [2024-05-15 02:12:26.614205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cbc4800 00:11:38.808 [2024-05-15 02:12:26.614216] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.808 [2024-05-15 02:12:26.614769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.808 [2024-05-15 02:12:26.614798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:38.808 Running I/O for 5 seconds... 
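For reference, the verify pass started above can be reproduced outside the run_test wrapper with a direct bdevperf invocation. The paths and flags below are copied from the command traced in this log; treat it as a minimal stand-alone sketch, not the harness's exact entry point.

  # Minimal sketch: stand-alone bdevperf verify run against the generated bdev.json.
  # All arguments mirror the traced command above; nothing here is harness-specific.
  BDEVPERF=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The -m 0x3 core mask matches the two reactors started on cores 0 and 1 in the log above.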
00:11:44.080 00:11:44.080 Latency(us) 00:11:44.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.080 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x1000 00:11:44.080 Malloc0 : 5.03 6530.91 25.51 0.00 0.00 19590.44 63.39 51180.37 00:11:44.080 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x1000 length 0x1000 00:11:44.080 Malloc0 : 5.03 396.04 1.55 0.00 0.00 322761.25 171.64 786929.40 00:11:44.080 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x800 00:11:44.080 Malloc1p0 : 5.03 4855.84 18.97 0.00 0.00 26342.80 269.17 25964.68 00:11:44.080 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x800 length 0x800 00:11:44.080 Malloc1p0 : 5.03 5421.79 21.18 0.00 0.00 23592.69 267.21 26588.83 00:11:44.080 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x800 00:11:44.080 Malloc1p1 : 5.04 4855.38 18.97 0.00 0.00 26339.99 271.12 26588.83 00:11:44.080 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x800 length 0x800 00:11:44.080 Malloc1p1 : 5.03 5421.42 21.18 0.00 0.00 23590.23 269.17 25465.35 00:11:44.080 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p0 : 5.04 4854.93 18.96 0.00 0.00 26338.01 276.97 26214.34 00:11:44.080 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p0 : 5.03 5421.08 21.18 0.00 0.00 23587.35 280.87 24466.71 00:11:44.080 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p1 : 5.04 4854.54 18.96 0.00 0.00 26335.64 261.36 25590.18 00:11:44.080 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p1 : 5.03 5420.74 21.17 0.00 0.00 23584.19 269.17 23842.56 00:11:44.080 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p2 : 5.04 4854.19 18.96 0.00 0.00 26332.54 263.31 25090.86 00:11:44.080 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p2 : 5.03 5420.33 21.17 0.00 0.00 23581.63 278.92 23343.24 00:11:44.080 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p3 : 5.04 4853.86 18.96 0.00 0.00 26329.56 265.26 24466.71 00:11:44.080 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p3 : 5.03 5419.96 21.17 0.00 0.00 23578.59 253.56 22469.43 00:11:44.080 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p4 : 5.04 4853.54 18.96 0.00 0.00 26326.35 267.21 23967.39 
00:11:44.080 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p4 : 5.03 5419.54 21.17 0.00 0.00 23576.57 267.21 21595.62 00:11:44.080 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p5 : 5.04 4853.23 18.96 0.00 0.00 26323.12 255.51 23468.07 00:11:44.080 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p5 : 5.03 5419.21 21.17 0.00 0.00 23573.07 255.51 22219.77 00:11:44.080 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p6 : 5.04 4852.89 18.96 0.00 0.00 26320.13 263.31 21595.62 00:11:44.080 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p6 : 5.03 5418.74 21.17 0.00 0.00 23571.08 263.31 22719.09 00:11:44.080 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x200 00:11:44.080 Malloc2p7 : 5.04 4852.55 18.96 0.00 0.00 26317.13 267.21 21970.11 00:11:44.080 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x200 length 0x200 00:11:44.080 Malloc2p7 : 5.03 5418.32 21.17 0.00 0.00 23568.15 265.26 23343.24 00:11:44.080 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x1000 00:11:44.080 TestPT : 5.04 4826.87 18.85 0.00 0.00 26446.94 1341.92 23343.24 00:11:44.080 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x1000 length 0x1000 00:11:44.080 TestPT : 5.03 4756.67 18.58 0.00 0.00 26836.38 1224.90 89877.72 00:11:44.080 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.080 Verification LBA range: start 0x0 length 0x2000 00:11:44.081 raid0 : 5.04 4851.99 18.95 0.00 0.00 26306.62 288.67 23093.58 00:11:44.081 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.081 Verification LBA range: start 0x2000 length 0x2000 00:11:44.081 raid0 : 5.03 5417.61 21.16 0.00 0.00 23559.56 276.97 23093.58 00:11:44.081 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.081 Verification LBA range: start 0x0 length 0x2000 00:11:44.081 concat0 : 5.04 4851.66 18.95 0.00 0.00 26303.74 278.92 23967.39 00:11:44.081 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.081 Verification LBA range: start 0x2000 length 0x2000 00:11:44.081 concat0 : 5.03 5417.28 21.16 0.00 0.00 23557.16 278.92 24966.03 00:11:44.081 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.081 Verification LBA range: start 0x0 length 0x1000 00:11:44.081 raid1 : 5.04 4851.31 18.95 0.00 0.00 26299.24 329.63 25590.18 00:11:44.081 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.081 Verification LBA range: start 0x1000 length 0x1000 00:11:44.081 raid1 : 5.03 5416.75 21.16 0.00 0.00 23554.11 314.03 26339.17 00:11:44.081 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:44.081 Verification LBA range: start 0x0 length 0x4e2 00:11:44.081 
AIO0 : 5.11 913.11 3.57 0.00 0.00 139201.34 13481.66 184748.65 00:11:44.081 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:44.081 Verification LBA range: start 0x4e2 length 0x4e2 00:11:44.081 AIO0 : 5.11 950.61 3.71 0.00 0.00 133898.29 1123.47 169769.03 00:11:44.081 =================================================================================================================== 00:11:44.081 Total : 151922.88 593.45 0.00 0.00 26926.14 63.39 786929.40 00:11:44.339 00:11:44.339 real 0m6.216s 00:11:44.339 user 0m9.996s 00:11:44.339 sys 0m0.596s 00:11:44.339 02:12:32 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:44.339 02:12:32 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:44.339 ************************************ 00:11:44.339 END TEST bdev_verify 00:11:44.339 ************************************ 00:11:44.339 02:12:32 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:44.339 02:12:32 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:11:44.339 02:12:32 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:44.339 02:12:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:44.339 ************************************ 00:11:44.339 START TEST bdev_verify_big_io 00:11:44.339 ************************************ 00:11:44.339 02:12:32 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:44.339 [2024-05-15 02:12:32.150092] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:44.339 [2024-05-15 02:12:32.150332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:44.904 EAL: TSC is not safe to use in SMP mode 00:11:44.904 EAL: TSC is not invariant 00:11:44.904 [2024-05-15 02:12:32.624277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:44.904 [2024-05-15 02:12:32.734463] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:44.904 [2024-05-15 02:12:32.734556] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:11:44.904 [2024-05-15 02:12:32.737464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.904 [2024-05-15 02:12:32.737464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.904 [2024-05-15 02:12:32.795449] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.904 [2024-05-15 02:12:32.795516] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.905 [2024-05-15 02:12:32.803439] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.905 [2024-05-15 02:12:32.803487] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.905 [2024-05-15 02:12:32.811460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.905 [2024-05-15 02:12:32.811511] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:44.905 [2024-05-15 02:12:32.811520] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:44.905 [2024-05-15 02:12:32.859460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.905 [2024-05-15 02:12:32.859531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.905 [2024-05-15 02:12:32.859547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcb7800 00:11:44.905 [2024-05-15 02:12:32.859555] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.905 [2024-05-15 02:12:32.860051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.905 [2024-05-15 02:12:32.860083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:45.163 [2024-05-15 02:12:32.960359] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:45.163 [2024-05-15 02:12:32.960510] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:45.163 [2024-05-15 02:12:32.960586] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:11:45.163 [2024-05-15 02:12:32.960663] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:11:45.163 [2024-05-15 02:12:32.960759] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:45.163 [2024-05-15 02:12:32.960839] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:11:45.163 [2024-05-15 02:12:32.960919] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961003] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961099] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961177] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961264] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961353] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961428] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961514] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961619] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.961714] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:45.164 [2024-05-15 02:12:32.962796] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:45.164 [2024-05-15 02:12:32.962928] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:45.164 Running I/O for 5 seconds... 
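The warnings above are expected for the big-I/O variant: a verify job limits how many requests it can keep outstanding against each bdev, so the requested queue depth of 128 is clamped per bdev (32 for the Malloc2pX splits, 78 for AIO0), whereas the earlier 4 KiB verify pass produced no such warnings. The only change from that pass is the 64 KiB I/O size; a sketch of the equivalent direct invocation, reusing the BDEVPERF and CONF variables from the earlier sketch and taking the flags from the traced run_test command:

  # Minimal sketch: the big-I/O verify pass; identical to the 4 KiB run
  # except for the 64 KiB request size (-o 65536).
  "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3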
00:11:50.452 00:11:50.452 Latency(us) 00:11:50.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.452 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x100 00:11:50.452 Malloc0 : 5.06 3772.72 235.79 0.00 0.00 33829.37 86.31 108851.91 00:11:50.452 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x100 length 0x100 00:11:50.452 Malloc0 : 5.04 3656.59 228.54 0.00 0.00 34905.21 86.80 111847.83 00:11:50.452 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x80 00:11:50.452 Malloc1p0 : 5.08 963.14 60.20 0.00 0.00 132210.87 709.97 172764.96 00:11:50.452 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x80 length 0x80 00:11:50.452 Malloc1p0 : 5.08 1273.50 79.59 0.00 0.00 99891.72 885.51 147798.92 00:11:50.452 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x80 00:11:50.452 Malloc1p1 : 5.10 492.80 30.80 0.00 0.00 257862.62 413.50 293600.56 00:11:50.452 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x80 length 0x80 00:11:50.452 Malloc1p1 : 5.10 477.19 29.82 0.00 0.00 266143.04 394.00 325557.09 00:11:50.452 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p0 : 5.07 476.58 29.79 0.00 0.00 66650.98 276.97 97866.85 00:11:50.452 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p0 : 5.08 463.29 28.96 0.00 0.00 68552.63 275.02 103858.70 00:11:50.452 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p1 : 5.07 476.54 29.78 0.00 0.00 66625.03 273.07 96868.21 00:11:50.452 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p1 : 5.08 463.25 28.95 0.00 0.00 68531.89 269.17 102860.06 00:11:50.452 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p2 : 5.07 476.50 29.78 0.00 0.00 66600.77 292.57 95869.57 00:11:50.452 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p2 : 5.08 463.21 28.95 0.00 0.00 68499.54 292.57 102360.74 00:11:50.452 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p3 : 5.07 476.46 29.78 0.00 0.00 66577.86 298.42 94870.93 00:11:50.452 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p3 : 5.08 463.17 28.95 0.00 0.00 68480.64 288.67 101362.10 00:11:50.452 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p4 : 5.07 476.42 29.78 0.00 0.00 66537.25 275.02 93872.29 00:11:50.452 Job: 
Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p4 : 5.08 463.14 28.95 0.00 0.00 68454.84 286.72 100862.78 00:11:50.452 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p5 : 5.07 476.38 29.77 0.00 0.00 66519.65 294.52 92873.65 00:11:50.452 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p5 : 5.08 463.10 28.94 0.00 0.00 68436.23 290.62 99864.14 00:11:50.452 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p6 : 5.07 476.35 29.77 0.00 0.00 66490.46 263.31 91875.01 00:11:50.452 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p6 : 5.08 463.06 28.94 0.00 0.00 68411.55 259.41 99364.82 00:11:50.452 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x20 00:11:50.452 Malloc2p7 : 5.07 476.31 29.77 0.00 0.00 66458.66 286.72 90876.36 00:11:50.452 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x20 length 0x20 00:11:50.452 Malloc2p7 : 5.08 463.02 28.94 0.00 0.00 68380.57 292.57 98366.17 00:11:50.452 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x100 00:11:50.452 TestPT : 5.13 487.01 30.44 0.00 0.00 258550.00 7240.15 261644.04 00:11:50.452 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x100 length 0x100 00:11:50.452 TestPT : 5.20 311.79 19.49 0.00 0.00 403201.61 7240.15 463369.59 00:11:50.452 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x200 00:11:50.452 raid0 : 5.10 495.79 30.99 0.00 0.00 254793.15 429.10 271630.45 00:11:50.452 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x200 length 0x200 00:11:50.452 raid0 : 5.10 480.17 30.01 0.00 0.00 263074.17 436.91 303586.98 00:11:50.452 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x200 00:11:50.452 concat0 : 5.10 495.76 30.99 0.00 0.00 254375.58 415.45 265638.60 00:11:50.452 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x200 length 0x200 00:11:50.452 concat0 : 5.10 483.56 30.22 0.00 0.00 260922.77 403.75 297595.13 00:11:50.452 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x100 00:11:50.452 raid1 : 5.10 499.13 31.20 0.00 0.00 252273.00 526.63 261644.04 00:11:50.452 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x100 length 0x100 00:11:50.452 raid1 : 5.10 483.25 30.20 0.00 0.00 260654.03 538.33 287608.71 00:11:50.452 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x0 length 0x4e 00:11:50.452 AIO0 : 5.09 492.67 30.79 0.00 
0.00 155597.45 550.03 155788.05 00:11:50.452 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:11:50.452 Verification LBA range: start 0x4e length 0x4e 00:11:50.452 AIO0 : 5.09 477.06 29.82 0.00 0.00 160659.74 542.23 174762.24 00:11:50.452 =================================================================================================================== 00:11:50.452 Total : 22858.92 1428.68 0.00 0.00 106820.90 86.31 463369.59 00:11:50.452 00:11:50.452 real 0m6.276s 00:11:50.452 user 0m11.242s 00:11:50.452 sys 0m0.606s 00:11:50.452 02:12:38 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:50.452 ************************************ 00:11:50.452 END TEST bdev_verify_big_io 00:11:50.452 ************************************ 00:11:50.452 02:12:38 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.452 02:12:38 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:50.452 02:12:38 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:50.452 02:12:38 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.452 02:12:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.710 ************************************ 00:11:50.710 START TEST bdev_write_zeroes 00:11:50.710 ************************************ 00:11:50.710 02:12:38 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:50.710 [2024-05-15 02:12:38.464206] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:50.710 [2024-05-15 02:12:38.464426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:51.277 EAL: TSC is not safe to use in SMP mode 00:11:51.277 EAL: TSC is not invariant 00:11:51.277 [2024-05-15 02:12:38.992638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.277 [2024-05-15 02:12:39.102567] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:51.277 [2024-05-15 02:12:39.105562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.277 [2024-05-15 02:12:39.168117] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.277 [2024-05-15 02:12:39.168211] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.277 [2024-05-15 02:12:39.176094] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.277 [2024-05-15 02:12:39.176177] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.277 [2024-05-15 02:12:39.184126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.277 [2024-05-15 02:12:39.184229] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:51.277 [2024-05-15 02:12:39.184249] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:51.277 [2024-05-15 02:12:39.232103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.277 [2024-05-15 02:12:39.232181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.277 [2024-05-15 02:12:39.232206] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829ea6800 00:11:51.277 [2024-05-15 02:12:39.232214] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.277 [2024-05-15 02:12:39.232728] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.277 [2024-05-15 02:12:39.232753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:51.535 Running I/O for 1 seconds... 
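Every bdev dumped earlier in this log advertises "write_zeroes": true in supported_io_types, so the whole set, including AIO0, takes part in the write_zeroes pass started above. As a hedged sketch, the same capability check can be run by hand, assuming an SPDK application is running with this configuration and rpc.py can reach its default socket; the jq filter mirrors the one the fio-trim step uses for unmap support.

  # Sketch (assumption: a running SPDK target with this configuration loaded).
  # Lists the bdevs that report write_zeroes support.
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'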
00:11:52.469 00:11:52.469 Latency(us) 00:11:52.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.469 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc0 : 1.01 17884.43 69.86 0.00 0.00 7155.88 152.14 9924.00 00:11:52.469 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc1p0 : 1.01 17879.22 69.84 0.00 0.00 7155.22 172.62 10111.24 00:11:52.469 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc1p1 : 1.01 17874.76 69.82 0.00 0.00 7154.04 170.67 10111.24 00:11:52.469 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p0 : 1.01 17869.78 69.80 0.00 0.00 7153.48 172.62 10111.24 00:11:52.469 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p1 : 1.01 17863.81 69.78 0.00 0.00 7152.51 171.64 10111.24 00:11:52.469 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p2 : 1.01 17860.02 69.77 0.00 0.00 7152.30 174.57 10173.66 00:11:52.469 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p3 : 1.01 17855.69 69.75 0.00 0.00 7151.00 170.67 10111.24 00:11:52.469 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p4 : 1.01 17851.70 69.73 0.00 0.00 7151.10 172.62 10111.24 00:11:52.469 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p5 : 1.01 17847.29 69.72 0.00 0.00 7150.42 169.69 10111.24 00:11:52.469 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p6 : 1.01 17842.08 69.70 0.00 0.00 7149.09 170.67 10236.07 00:11:52.469 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 Malloc2p7 : 1.01 17838.33 69.68 0.00 0.00 7149.02 168.72 10298.49 00:11:52.469 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 TestPT : 1.01 17833.62 69.66 0.00 0.00 7148.34 175.54 10298.49 00:11:52.469 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 raid0 : 1.01 17826.68 69.64 0.00 0.00 7146.83 232.11 10360.90 00:11:52.469 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 concat0 : 1.01 17819.66 69.61 0.00 0.00 7145.99 246.73 10548.15 00:11:52.469 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 raid1 : 1.01 17811.80 69.58 0.00 0.00 7144.32 423.25 10485.73 00:11:52.469 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:52.469 AIO0 : 1.06 2792.88 10.91 0.00 0.00 44539.68 561.74 143804.36 00:11:52.469 =================================================================================================================== 00:11:52.469 Total : 270551.76 1056.84 0.00 0.00 7554.07 152.14 143804.36 00:11:52.727 00:11:52.727 real 0m2.185s 00:11:52.727 user 0m1.442s 00:11:52.727 sys 0m0.597s 00:11:52.727 ************************************ 00:11:52.727 END TEST bdev_write_zeroes 00:11:52.727 ************************************ 00:11:52.727 02:12:40 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:52.727 02:12:40 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:52.727 02:12:40 blockdev_general 
-- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:52.727 02:12:40 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:52.727 02:12:40 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:52.727 02:12:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:52.727 ************************************ 00:11:52.727 START TEST bdev_json_nonenclosed 00:11:52.727 ************************************ 00:11:52.727 02:12:40 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:52.727 [2024-05-15 02:12:40.691867] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:52.727 [2024-05-15 02:12:40.692134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:53.291 EAL: TSC is not safe to use in SMP mode 00:11:53.291 EAL: TSC is not invariant 00:11:53.291 [2024-05-15 02:12:41.223743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.549 [2024-05-15 02:12:41.358473] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:53.549 [2024-05-15 02:12:41.361617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.549 [2024-05-15 02:12:41.361672] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:53.549 [2024-05-15 02:12:41.361683] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:53.549 [2024-05-15 02:12:41.361693] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:53.549 00:11:53.549 real 0m0.836s 00:11:53.549 user 0m0.233s 00:11:53.549 sys 0m0.598s 00:11:53.549 ************************************ 00:11:53.549 END TEST bdev_json_nonenclosed 00:11:53.549 ************************************ 00:11:53.549 02:12:41 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:53.549 02:12:41 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:53.807 02:12:41 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:53.807 02:12:41 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:53.807 02:12:41 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.807 02:12:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:53.807 ************************************ 00:11:53.807 START TEST bdev_json_nonarray 00:11:53.807 ************************************ 00:11:53.807 02:12:41 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:53.807 [2024-05-15 02:12:41.570055] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:11:53.807 [2024-05-15 02:12:41.570315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:54.373 EAL: TSC is not safe to use in SMP mode 00:11:54.373 EAL: TSC is not invariant 00:11:54.373 [2024-05-15 02:12:42.082297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.373 [2024-05-15 02:12:42.221112] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:54.373 [2024-05-15 02:12:42.224093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.373 [2024-05-15 02:12:42.224144] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:11:54.373 [2024-05-15 02:12:42.224154] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:54.373 [2024-05-15 02:12:42.224162] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:54.373 00:11:54.373 real 0m0.810s 00:11:54.373 user 0m0.243s 00:11:54.373 sys 0m0.566s 00:11:54.373 02:12:42 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.373 02:12:42 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:54.373 ************************************ 00:11:54.373 END TEST bdev_json_nonarray 00:11:54.373 ************************************ 00:11:54.631 02:12:42 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:11:54.631 02:12:42 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:11:54.631 02:12:42 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:54.631 02:12:42 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.631 02:12:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:54.631 ************************************ 00:11:54.631 START TEST bdev_qos 00:11:54.631 ************************************ 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- common/autotest_common.sh@1121 -- # qos_test_suite '' 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48483 00:11:54.631 Process qos testing pid: 48483 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48483' 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48483 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 48483 ']' 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:11:54.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:54.631 02:12:42 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:54.631 02:12:42 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:54.631 [2024-05-15 02:12:42.429407] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:54.631 [2024-05-15 02:12:42.429587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:55.197 EAL: TSC is not safe to use in SMP mode 00:11:55.197 EAL: TSC is not invariant 00:11:55.197 [2024-05-15 02:12:42.898942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.197 [2024-05-15 02:12:42.983030] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:11:55.197 [2024-05-15 02:12:42.985207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:55.763 Malloc_0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:55.763 [ 00:11:55.763 { 00:11:55.763 "name": "Malloc_0", 00:11:55.763 "aliases": [ 00:11:55.763 "9d32dfbe-1260-11ef-99fd-bfc7c66e2865" 00:11:55.763 ], 00:11:55.763 "product_name": "Malloc disk", 00:11:55.763 "block_size": 512, 00:11:55.763 "num_blocks": 262144, 00:11:55.763 "uuid": "9d32dfbe-1260-11ef-99fd-bfc7c66e2865", 00:11:55.763 "assigned_rate_limits": { 00:11:55.763 "rw_ios_per_sec": 0, 00:11:55.763 "rw_mbytes_per_sec": 0, 00:11:55.763 "r_mbytes_per_sec": 0, 00:11:55.763 "w_mbytes_per_sec": 0 00:11:55.763 }, 00:11:55.763 "claimed": false, 00:11:55.763 "zoned": false, 00:11:55.763 "supported_io_types": { 00:11:55.763 "read": true, 
00:11:55.763 "write": true, 00:11:55.763 "unmap": true, 00:11:55.763 "write_zeroes": true, 00:11:55.763 "flush": true, 00:11:55.763 "reset": true, 00:11:55.763 "compare": false, 00:11:55.763 "compare_and_write": false, 00:11:55.763 "abort": true, 00:11:55.763 "nvme_admin": false, 00:11:55.763 "nvme_io": false 00:11:55.763 }, 00:11:55.763 "memory_domains": [ 00:11:55.763 { 00:11:55.763 "dma_device_id": "system", 00:11:55.763 "dma_device_type": 1 00:11:55.763 }, 00:11:55.763 { 00:11:55.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.763 "dma_device_type": 2 00:11:55.763 } 00:11:55.763 ], 00:11:55.763 "driver_specific": {} 00:11:55.763 } 00:11:55.763 ] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:55.763 Null_1 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:55.763 [ 00:11:55.763 { 00:11:55.763 "name": "Null_1", 00:11:55.763 "aliases": [ 00:11:55.763 "9d385d14-1260-11ef-99fd-bfc7c66e2865" 00:11:55.763 ], 00:11:55.763 "product_name": "Null disk", 00:11:55.763 "block_size": 512, 00:11:55.763 "num_blocks": 262144, 00:11:55.763 "uuid": "9d385d14-1260-11ef-99fd-bfc7c66e2865", 00:11:55.763 "assigned_rate_limits": { 00:11:55.763 "rw_ios_per_sec": 0, 00:11:55.763 "rw_mbytes_per_sec": 0, 00:11:55.763 "r_mbytes_per_sec": 0, 00:11:55.763 "w_mbytes_per_sec": 0 00:11:55.763 }, 00:11:55.763 "claimed": false, 00:11:55.763 "zoned": false, 00:11:55.763 "supported_io_types": { 00:11:55.763 "read": true, 00:11:55.763 "write": true, 00:11:55.763 "unmap": false, 00:11:55.763 "write_zeroes": true, 00:11:55.763 "flush": false, 00:11:55.763 "reset": true, 00:11:55.763 "compare": false, 00:11:55.763 "compare_and_write": false, 00:11:55.763 "abort": true, 00:11:55.763 "nvme_admin": false, 
00:11:55.763 "nvme_io": false 00:11:55.763 }, 00:11:55.763 "driver_specific": {} 00:11:55.763 } 00:11:55.763 ] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:55.763 02:12:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:11:55.763 Running I/O for 60 seconds... 
00:12:01.035 02:12:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 589255.07 2357020.28 0.00 0.00 2482176.00 0.00 0.00 ' 00:12:01.035 02:12:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:12:01.035 02:12:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=589255.07 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 589255 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=589255 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=147000 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 147000 -gt 1000 ']' 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 147000 Malloc_0 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 147000 IOPS Malloc_0 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:01.035 02:12:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:01.035 ************************************ 00:12:01.035 START TEST bdev_qos_iops 00:12:01.035 ************************************ 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 147000 IOPS Malloc_0 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=147000 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:12:01.035 02:12:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 146928.32 587713.28 0.00 0.00 628572.00 0.00 0.00 ' 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=146928.32 00:12:07.684 02:12:54 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 146928 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=146928 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=132300 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=161700 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 146928 -lt 132300 ']' 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 146928 -gt 161700 ']' 00:12:07.684 00:12:07.684 real 0m5.449s 00:12:07.684 user 0m0.155s 00:12:07.684 sys 0m0.018s 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:07.684 02:12:54 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:12:07.684 ************************************ 00:12:07.684 END TEST bdev_qos_iops 00:12:07.684 ************************************ 00:12:07.684 02:12:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:12:07.684 02:12:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:12:07.684 02:12:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:12:07.684 02:12:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:07.684 02:12:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:07.684 02:12:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:12:07.684 02:12:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 360136.10 1440544.40 0.00 0.00 1552384.00 0.00 0.00 ' 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=1552384.00 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 1552384 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=1552384 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=151 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 151 -lt 2 ']' 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 151 Null_1 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 151 BANDWIDTH Null_1 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:12.999 02:13:00 
blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:12.999 02:13:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:12.999 ************************************ 00:12:12.999 START TEST bdev_qos_bw 00:12:12.999 ************************************ 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 151 BANDWIDTH Null_1 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=151 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:12:12.999 02:13:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 38647.97 154591.90 0.00 0.00 159104.00 0.00 0.00 ' 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=159104.00 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 159104 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=159104 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=154624 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=139161 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=170086 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 159104 -lt 139161 ']' 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 159104 -gt 170086 ']' 00:12:18.275 00:12:18.275 real 0m5.419s 00:12:18.275 user 0m0.132s 00:12:18.275 sys 0m0.041s 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:12:18.275 ************************************ 00:12:18.275 END TEST bdev_qos_bw 00:12:18.275 ************************************ 00:12:18.275 02:13:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:18.275 
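The pass/fail check traced above accepts a measured result within roughly plus or minus 10% of the configured limit. Re-deriving the numbers from the bandwidth subtest (a sketch only; the exact arithmetic lives in run_qos_test in the blockdev.sh test script):

    # Sketch: the +/-10% acceptance window for the 151 MiB/s cap seen above
    qos_limit_kib=$((151 * 1024))            # 154624 KiB/s
    lower_limit=$((qos_limit_kib * 9 / 10))  # 139161 -> measured result must not fall below this
    upper_limit=$((qos_limit_kib * 11 / 10)) # 170086 -> and must not exceed this
    measured=159104                          # KiB/s reported by iostat.py for Null_1 in this run
    [ "$measured" -ge "$lower_limit" ] && [ "$measured" -le "$upper_limit" ] && echo "bandwidth within +/-10% of limit"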
02:13:05 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.275 02:13:05 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:18.275 02:13:05 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.275 02:13:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:18.275 02:13:05 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:18.275 02:13:05 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.275 02:13:05 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:18.275 ************************************ 00:12:18.275 START TEST bdev_qos_ro_bw 00:12:18.275 ************************************ 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:18.275 02:13:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:12:23.615 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.85 2051.41 0.00 0.00 2216.00 0.00 0.00 ' 00:12:23.615 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:23.615 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:23.615 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:23.615 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2216.00 00:12:23.615 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2216 00:12:23.615 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2216 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2216 -lt 1843 ']' 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2216 -gt 2252 ']' 
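The get_io_result helper traced above samples the bdevs with scripts/iostat.py for five one-second intervals, keeps the last line matching the device, and extracts one column with awk. A hand-run equivalent (sketch; the column meanings are inferred from how the trace uses them, $2 for the IOPS checks and $6 for the bandwidth checks):

    # Sketch: sample one bdev the way get_io_result does
    iostat_result=$(./scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
    iops=$(echo "$iostat_result" | awk '{print $2}')   # column 2: IOPS, used for the IOPS limit checks
    kib_s=$(echo "$iostat_result" | awk '{print $6}')  # column 6: read bandwidth in KiB/s, used for the bandwidth checks
    echo "Malloc_0: ${iops} IOPS, ${kib_s} KiB/s over the sampling window"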
00:12:23.616 00:12:23.616 real 0m5.523s 00:12:23.616 user 0m0.120s 00:12:23.616 sys 0m0.033s 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.616 02:13:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:12:23.616 ************************************ 00:12:23.616 END TEST bdev_qos_ro_bw 00:12:23.616 ************************************ 00:12:23.616 02:13:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:12:23.616 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.616 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:23.875 00:12:23.875 Latency(us) 00:12:23.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.875 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:23.875 Malloc_0 : 27.91 203134.82 793.50 0.00 0.00 1248.76 335.48 501317.97 00:12:23.875 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:23.875 Null_1 : 27.93 289362.07 1130.32 0.00 0.00 884.31 69.73 25715.02 00:12:23.875 =================================================================================================================== 00:12:23.875 Total : 492496.89 1923.82 0.00 0.00 1034.54 69.73 501317.97 00:12:23.875 0 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48483 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 48483 ']' 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 48483 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps -c -o command 48483 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # tail -1 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:12:23.875 killing process with pid 48483 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48483' 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 48483 00:12:23.875 Received shutdown signal, test time was about 27.951991 seconds 00:12:23.875 00:12:23.875 Latency(us) 00:12:23.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.875 =================================================================================================================== 00:12:23.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:23.875 02:13:11 blockdev_general.bdev_qos 
-- common/autotest_common.sh@970 -- # wait 48483 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:12:23.875 00:12:23.875 real 0m29.379s 00:12:23.875 user 0m30.181s 00:12:23.875 sys 0m0.888s 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.875 ************************************ 00:12:23.875 END TEST bdev_qos 00:12:23.875 ************************************ 00:12:23.875 02:13:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:23.875 02:13:11 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:12:23.875 02:13:11 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:23.875 02:13:11 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.875 02:13:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:23.875 ************************************ 00:12:23.875 START TEST bdev_qd_sampling 00:12:23.875 ************************************ 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1121 -- # qd_sampling_test_suite '' 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48705 00:12:23.875 Process bdev QD sampling period testing pid: 48705 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48705' 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48705 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 48705 ']' 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:23.875 02:13:11 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:23.875 [2024-05-15 02:13:11.854891] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:23.875 [2024-05-15 02:13:11.855125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:24.443 EAL: TSC is not safe to use in SMP mode 00:12:24.443 EAL: TSC is not invariant 00:12:24.443 [2024-05-15 02:13:12.326743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:24.702 [2024-05-15 02:13:12.445547] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:12:24.702 [2024-05-15 02:13:12.445633] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:12:24.702 [2024-05-15 02:13:12.449432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.702 [2024-05-15 02:13:12.449425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:24.960 Malloc_QD 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:24.960 [ 00:12:24.960 { 00:12:24.960 "name": "Malloc_QD", 00:12:24.960 "aliases": [ 00:12:24.960 "aeb89999-1260-11ef-99fd-bfc7c66e2865" 00:12:24.960 ], 00:12:24.960 "product_name": "Malloc disk", 00:12:24.960 "block_size": 512, 00:12:24.960 "num_blocks": 262144, 00:12:24.960 "uuid": "aeb89999-1260-11ef-99fd-bfc7c66e2865", 00:12:24.960 "assigned_rate_limits": { 00:12:24.960 "rw_ios_per_sec": 0, 00:12:24.960 "rw_mbytes_per_sec": 0, 00:12:24.960 "r_mbytes_per_sec": 0, 00:12:24.960 "w_mbytes_per_sec": 0 00:12:24.960 }, 00:12:24.960 "claimed": false, 00:12:24.960 "zoned": false, 00:12:24.960 "supported_io_types": { 00:12:24.960 "read": true, 00:12:24.960 "write": true, 00:12:24.960 "unmap": true, 00:12:24.960 "write_zeroes": true, 00:12:24.960 "flush": true, 00:12:24.960 "reset": true, 00:12:24.960 "compare": false, 00:12:24.960 "compare_and_write": false, 00:12:24.960 "abort": true, 00:12:24.960 "nvme_admin": false, 00:12:24.960 "nvme_io": false 00:12:24.960 }, 00:12:24.960 "memory_domains": [ 00:12:24.960 { 00:12:24.960 "dma_device_id": "system", 00:12:24.960 
"dma_device_type": 1 00:12:24.960 }, 00:12:24.960 { 00:12:24.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.960 "dma_device_type": 2 00:12:24.960 } 00:12:24.960 ], 00:12:24.960 "driver_specific": {} 00:12:24.960 } 00:12:24.960 ] 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:12:24.960 02:13:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:12:25.218 02:13:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:25.218 Running I/O for 5 seconds... 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:12:27.193 "tick_rate": 2100005139, 00:12:27.193 "ticks": 665180053972, 00:12:27.193 "bdevs": [ 00:12:27.193 { 00:12:27.193 "name": "Malloc_QD", 00:12:27.193 "bytes_read": 12717167104, 00:12:27.193 "num_read_ops": 3104771, 00:12:27.193 "bytes_written": 0, 00:12:27.193 "num_write_ops": 0, 00:12:27.193 "bytes_unmapped": 0, 00:12:27.193 "num_unmap_ops": 0, 00:12:27.193 "bytes_copied": 0, 00:12:27.193 "num_copy_ops": 0, 00:12:27.193 "read_latency_ticks": 2066072011740, 00:12:27.193 "max_read_latency_ticks": 2281942, 00:12:27.193 "min_read_latency_ticks": 66732, 00:12:27.193 "write_latency_ticks": 0, 00:12:27.193 "max_write_latency_ticks": 0, 00:12:27.193 "min_write_latency_ticks": 0, 00:12:27.193 "unmap_latency_ticks": 0, 00:12:27.193 "max_unmap_latency_ticks": 0, 00:12:27.193 "min_unmap_latency_ticks": 0, 00:12:27.193 "copy_latency_ticks": 0, 00:12:27.193 "max_copy_latency_ticks": 0, 00:12:27.193 "min_copy_latency_ticks": 0, 00:12:27.193 "io_error": {}, 00:12:27.193 "queue_depth_polling_period": 10, 00:12:27.193 "queue_depth": 512, 00:12:27.193 "io_time": 390, 00:12:27.193 "weighted_io_time": 199680 00:12:27.193 } 00:12:27.193 ] 00:12:27.193 }' 00:12:27.193 02:13:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # 
qd_sampling_period=10 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:27.194 00:12:27.194 Latency(us) 00:12:27.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.194 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:27.194 Malloc_QD : 1.95 799480.70 3122.97 0.00 0.00 319.88 66.32 1092.26 00:12:27.194 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:27.194 Malloc_QD : 1.95 814677.11 3182.33 0.00 0.00 313.92 65.34 589.04 00:12:27.194 =================================================================================================================== 00:12:27.194 Total : 1614157.81 6305.30 0.00 0.00 316.87 65.34 1092.26 00:12:27.194 0 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48705 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 48705 ']' 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 48705 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # tail -1 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps -c -o command 48705 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:12:27.194 killing process with pid 48705 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48705' 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 48705 00:12:27.194 Received shutdown signal, test time was about 1.981505 seconds 00:12:27.194 00:12:27.194 Latency(us) 00:12:27.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.194 =================================================================================================================== 00:12:27.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 48705 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:12:27.194 00:12:27.194 real 0m3.343s 00:12:27.194 user 0m6.021s 00:12:27.194 sys 0m0.653s 00:12:27.194 ************************************ 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.194 02:13:15 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.194 END TEST bdev_qd_sampling 00:12:27.194 ************************************ 00:12:27.452 02:13:15 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:12:27.452 02:13:15 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:27.452 02:13:15 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:27.452 02:13:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.452 ************************************ 00:12:27.452 START TEST bdev_error 00:12:27.452 ************************************ 00:12:27.452 02:13:15 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:12:27.452 02:13:15 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:12:27.452 02:13:15 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:12:27.452 02:13:15 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:12:27.452 02:13:15 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48748 00:12:27.452 Process error testing pid: 48748 00:12:27.452 02:13:15 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48748' 00:12:27.452 02:13:15 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48748 00:12:27.453 02:13:15 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:12:27.453 02:13:15 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 48748 ']' 00:12:27.453 02:13:15 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.453 02:13:15 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:27.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.453 02:13:15 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.453 02:13:15 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:27.453 02:13:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:27.453 [2024-05-15 02:13:15.247865] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:27.453 [2024-05-15 02:13:15.248109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:27.711 EAL: TSC is not safe to use in SMP mode 00:12:27.711 EAL: TSC is not invariant 00:12:27.711 [2024-05-15 02:13:15.710430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.969 [2024-05-15 02:13:15.812555] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:12:27.969 [2024-05-15 02:13:15.815277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:12:28.535 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.535 Dev_1 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.535 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.535 [ 00:12:28.535 { 00:12:28.535 "name": "Dev_1", 00:12:28.535 "aliases": [ 00:12:28.535 "b0b6d6c0-1260-11ef-99fd-bfc7c66e2865" 00:12:28.535 ], 00:12:28.535 "product_name": "Malloc disk", 00:12:28.535 "block_size": 512, 00:12:28.535 "num_blocks": 262144, 00:12:28.535 "uuid": "b0b6d6c0-1260-11ef-99fd-bfc7c66e2865", 00:12:28.535 "assigned_rate_limits": { 00:12:28.535 "rw_ios_per_sec": 0, 00:12:28.535 "rw_mbytes_per_sec": 0, 00:12:28.535 "r_mbytes_per_sec": 0, 00:12:28.535 "w_mbytes_per_sec": 0 00:12:28.535 }, 00:12:28.535 "claimed": false, 00:12:28.535 "zoned": false, 00:12:28.535 "supported_io_types": { 00:12:28.535 "read": true, 00:12:28.535 "write": true, 00:12:28.535 "unmap": true, 00:12:28.535 "write_zeroes": true, 00:12:28.535 "flush": true, 00:12:28.535 "reset": true, 00:12:28.535 "compare": false, 00:12:28.535 "compare_and_write": false, 00:12:28.535 "abort": true, 00:12:28.535 "nvme_admin": false, 00:12:28.535 "nvme_io": false 00:12:28.535 }, 00:12:28.535 "memory_domains": [ 00:12:28.535 { 00:12:28.535 "dma_device_id": "system", 00:12:28.535 "dma_device_type": 1 00:12:28.535 }, 00:12:28.535 { 00:12:28.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.535 "dma_device_type": 2 00:12:28.535 } 00:12:28.535 ], 00:12:28.535 "driver_specific": {} 00:12:28.535 } 00:12:28.535 ] 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.535 02:13:16 
blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:12:28.535 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.535 true 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.535 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.535 Dev_2 00:12:28.535 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.535 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.536 [ 00:12:28.536 { 00:12:28.536 "name": "Dev_2", 00:12:28.536 "aliases": [ 00:12:28.536 "b0bec556-1260-11ef-99fd-bfc7c66e2865" 00:12:28.536 ], 00:12:28.536 "product_name": "Malloc disk", 00:12:28.536 "block_size": 512, 00:12:28.536 "num_blocks": 262144, 00:12:28.536 "uuid": "b0bec556-1260-11ef-99fd-bfc7c66e2865", 00:12:28.536 "assigned_rate_limits": { 00:12:28.536 "rw_ios_per_sec": 0, 00:12:28.536 "rw_mbytes_per_sec": 0, 00:12:28.536 "r_mbytes_per_sec": 0, 00:12:28.536 "w_mbytes_per_sec": 0 00:12:28.536 }, 00:12:28.536 "claimed": false, 00:12:28.536 "zoned": false, 00:12:28.536 "supported_io_types": { 00:12:28.536 "read": true, 00:12:28.536 "write": true, 00:12:28.536 "unmap": true, 00:12:28.536 "write_zeroes": true, 00:12:28.536 "flush": true, 00:12:28.536 "reset": true, 00:12:28.536 "compare": false, 00:12:28.536 "compare_and_write": false, 00:12:28.536 "abort": true, 00:12:28.536 "nvme_admin": false, 00:12:28.536 "nvme_io": false 00:12:28.536 }, 00:12:28.536 "memory_domains": [ 00:12:28.536 { 00:12:28.536 "dma_device_id": "system", 00:12:28.536 "dma_device_type": 1 00:12:28.536 }, 00:12:28.536 { 00:12:28.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.536 "dma_device_type": 2 00:12:28.536 } 00:12:28.536 ], 
00:12:28.536 "driver_specific": {} 00:12:28.536 } 00:12:28.536 ] 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:12:28.536 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:28.536 02:13:16 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.536 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:12:28.536 02:13:16 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:28.536 Running I/O for 5 seconds... 00:12:29.469 02:13:17 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48748 00:12:29.469 Process is existed as continue on error is set. Pid: 48748 00:12:29.469 02:13:17 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 48748' 00:12:29.469 02:13:17 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:12:29.469 02:13:17 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.469 02:13:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:29.469 02:13:17 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.469 02:13:17 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:12:29.469 02:13:17 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.469 02:13:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:29.469 02:13:17 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.469 02:13:17 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:12:29.728 Timeout while waiting for response: 00:12:29.728 00:12:29.728 00:12:33.948 00:12:33.948 Latency(us) 00:12:33.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.948 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:33.948 EE_Dev_1 : 0.93 390264.18 1524.47 5.39 0.00 40.75 19.14 114.10 00:12:33.948 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:33.948 Dev_2 : 5.00 729025.57 2847.76 0.00 0.00 21.67 14.32 18849.36 00:12:33.948 =================================================================================================================== 00:12:33.948 Total : 1119289.74 4372.23 5.39 0.00 23.40 14.32 18849.36 00:12:34.885 02:13:22 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48748 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 48748 ']' 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 48748 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps -c -o command 
48748 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # tail -1 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:12:34.885 killing process with pid 48748 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48748' 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 48748 00:12:34.885 Received shutdown signal, test time was about 5.000000 seconds 00:12:34.885 00:12:34.885 Latency(us) 00:12:34.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.885 =================================================================================================================== 00:12:34.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 48748 00:12:34.885 02:13:22 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48788 00:12:34.885 02:13:22 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:12:34.885 Process error testing pid: 48788 00:12:34.885 02:13:22 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 48788' 00:12:34.885 02:13:22 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48788 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 48788 ']' 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.885 02:13:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:34.885 [2024-05-15 02:13:22.740964] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:34.885 [2024-05-15 02:13:22.741228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:35.453 EAL: TSC is not safe to use in SMP mode 00:12:35.453 EAL: TSC is not invariant 00:12:35.454 [2024-05-15 02:13:23.259924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.454 [2024-05-15 02:13:23.346487] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:12:35.454 [2024-05-15 02:13:23.348657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:12:36.020 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 Dev_1 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 [ 00:12:36.020 { 00:12:36.020 "name": "Dev_1", 00:12:36.020 "aliases": [ 00:12:36.020 "b54314dd-1260-11ef-99fd-bfc7c66e2865" 00:12:36.020 ], 00:12:36.020 "product_name": "Malloc disk", 00:12:36.020 "block_size": 512, 00:12:36.020 "num_blocks": 262144, 00:12:36.020 "uuid": "b54314dd-1260-11ef-99fd-bfc7c66e2865", 00:12:36.020 "assigned_rate_limits": { 00:12:36.020 "rw_ios_per_sec": 0, 00:12:36.020 "rw_mbytes_per_sec": 0, 00:12:36.020 "r_mbytes_per_sec": 0, 00:12:36.020 "w_mbytes_per_sec": 0 00:12:36.020 }, 00:12:36.020 "claimed": false, 00:12:36.020 "zoned": false, 00:12:36.020 "supported_io_types": { 00:12:36.020 "read": true, 00:12:36.020 "write": true, 00:12:36.020 "unmap": true, 00:12:36.020 "write_zeroes": true, 00:12:36.020 "flush": true, 00:12:36.020 "reset": true, 00:12:36.020 "compare": false, 00:12:36.020 "compare_and_write": false, 00:12:36.020 "abort": true, 00:12:36.020 "nvme_admin": false, 00:12:36.020 "nvme_io": false 00:12:36.020 }, 00:12:36.020 "memory_domains": [ 00:12:36.020 { 00:12:36.020 "dma_device_id": "system", 00:12:36.020 "dma_device_type": 1 00:12:36.020 }, 00:12:36.020 { 00:12:36.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.020 "dma_device_type": 2 00:12:36.020 } 00:12:36.020 ], 00:12:36.020 "driver_specific": {} 00:12:36.020 } 00:12:36.020 ] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.020 02:13:23 
blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:12:36.020 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 true 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 Dev_2 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 [ 00:12:36.020 { 00:12:36.020 "name": "Dev_2", 00:12:36.020 "aliases": [ 00:12:36.020 "b549cabf-1260-11ef-99fd-bfc7c66e2865" 00:12:36.020 ], 00:12:36.020 "product_name": "Malloc disk", 00:12:36.020 "block_size": 512, 00:12:36.020 "num_blocks": 262144, 00:12:36.020 "uuid": "b549cabf-1260-11ef-99fd-bfc7c66e2865", 00:12:36.020 "assigned_rate_limits": { 00:12:36.020 "rw_ios_per_sec": 0, 00:12:36.020 "rw_mbytes_per_sec": 0, 00:12:36.020 "r_mbytes_per_sec": 0, 00:12:36.020 "w_mbytes_per_sec": 0 00:12:36.020 }, 00:12:36.020 "claimed": false, 00:12:36.020 "zoned": false, 00:12:36.020 "supported_io_types": { 00:12:36.020 "read": true, 00:12:36.020 "write": true, 00:12:36.020 "unmap": true, 00:12:36.020 "write_zeroes": true, 00:12:36.020 "flush": true, 00:12:36.020 "reset": true, 00:12:36.020 "compare": false, 00:12:36.020 "compare_and_write": false, 00:12:36.020 "abort": true, 00:12:36.020 "nvme_admin": false, 00:12:36.020 "nvme_io": false 00:12:36.020 }, 00:12:36.020 "memory_domains": [ 00:12:36.020 { 00:12:36.020 "dma_device_id": "system", 00:12:36.020 "dma_device_type": 1 00:12:36.020 }, 00:12:36.020 { 00:12:36.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.020 "dma_device_type": 2 00:12:36.020 } 00:12:36.020 ], 
00:12:36.020 "driver_specific": {} 00:12:36.020 } 00:12:36.020 ] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:12:36.020 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.021 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48788 00:12:36.021 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:12:36.021 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48788 00:12:36.021 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:12:36.021 02:13:23 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:36.021 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.021 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:12:36.021 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.021 02:13:23 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48788 00:12:36.280 Running I/O for 5 seconds... 00:12:36.280 task offset: 104352 on job bdev=EE_Dev_1 fails 00:12:36.280 00:12:36.280 Latency(us) 00:12:36.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.280 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:36.280 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:12:36.280 EE_Dev_1 : 0.00 183333.33 716.15 41666.67 0.00 56.54 20.60 107.76 00:12:36.280 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:36.280 Dev_2 : 0.00 220689.66 862.07 0.00 0.00 34.62 25.48 49.98 00:12:36.280 =================================================================================================================== 00:12:36.280 Total : 404022.99 1578.21 41666.67 0.00 44.65 20.60 107.76 00:12:36.280 [2024-05-15 02:13:24.126697] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:36.280 request: 00:12:36.280 { 00:12:36.280 "method": "perform_tests", 00:12:36.280 "req_id": 1 00:12:36.280 } 00:12:36.280 Got JSON-RPC error response 00:12:36.280 response: 00:12:36.280 { 00:12:36.280 "code": -32603, 00:12:36.280 "message": "bdevperf failed with error Operation not permitted" 00:12:36.280 } 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.540 
00:12:36.540 real 0m9.077s 00:12:36.540 user 0m9.268s 00:12:36.540 sys 0m1.295s 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:36.540 ************************************ 00:12:36.540 END TEST bdev_error 00:12:36.540 ************************************ 00:12:36.540 02:13:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:36.540 02:13:24 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:12:36.540 02:13:24 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:36.540 02:13:24 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:36.540 02:13:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:36.540 ************************************ 00:12:36.540 START TEST bdev_stat 00:12:36.540 ************************************ 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48819 00:12:36.540 Process Bdev IO statistics testing pid: 48819 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48819' 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48819 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 48819 ']' 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:36.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:36.540 02:13:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:12:36.540 [2024-05-15 02:13:24.382524] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:36.540 [2024-05-15 02:13:24.382761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:37.477 EAL: TSC is not safe to use in SMP mode 00:12:37.477 EAL: TSC is not invariant 00:12:37.477 [2024-05-15 02:13:25.204495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:37.477 [2024-05-15 02:13:25.300654] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:37.477 [2024-05-15 02:13:25.300732] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
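The stat suite that starts here runs bdevperf idle on two cores (mask 0x3), and the per-channel query further down accordingly reports two channels for Malloc_STAT. Its pass criterion is that the summed per-channel read count from one snapshot lands between two aggregate snapshots taken around it. A rough sketch of that readback, reusing the jq paths from the trace below:

    # aggregate and per-channel read counters for one bdev
    io1=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    per_ch=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
    io2=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    (( per_ch >= io1 && per_ch <= io2 ))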
00:12:37.477 [2024-05-15 02:13:25.305628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.477 [2024-05-15 02:13:25.305553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:37.734 Malloc_STAT 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.734 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:37.734 [ 00:12:37.734 { 00:12:37.734 "name": "Malloc_STAT", 00:12:37.734 "aliases": [ 00:12:37.734 "b63bde6d-1260-11ef-99fd-bfc7c66e2865" 00:12:37.734 ], 00:12:37.734 "product_name": "Malloc disk", 00:12:37.734 "block_size": 512, 00:12:37.734 "num_blocks": 262144, 00:12:37.734 "uuid": "b63bde6d-1260-11ef-99fd-bfc7c66e2865", 00:12:37.734 "assigned_rate_limits": { 00:12:37.734 "rw_ios_per_sec": 0, 00:12:37.734 "rw_mbytes_per_sec": 0, 00:12:37.734 "r_mbytes_per_sec": 0, 00:12:37.734 "w_mbytes_per_sec": 0 00:12:37.734 }, 00:12:37.734 "claimed": false, 00:12:37.734 "zoned": false, 00:12:37.734 "supported_io_types": { 00:12:37.734 "read": true, 00:12:37.734 "write": true, 00:12:37.734 "unmap": true, 00:12:37.734 "write_zeroes": true, 00:12:37.734 "flush": true, 00:12:37.734 "reset": true, 00:12:37.734 "compare": false, 00:12:37.735 "compare_and_write": false, 00:12:37.735 "abort": true, 00:12:37.735 "nvme_admin": false, 00:12:37.735 "nvme_io": false 00:12:37.735 }, 00:12:37.735 "memory_domains": [ 00:12:37.735 { 00:12:37.735 "dma_device_id": "system", 00:12:37.735 "dma_device_type": 1 00:12:37.735 }, 00:12:37.735 { 00:12:37.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.735 "dma_device_type": 2 00:12:37.735 } 00:12:37.735 ], 00:12:37.735 "driver_specific": {} 00:12:37.735 } 00:12:37.735 ] 00:12:37.735 
02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.735 02:13:25 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:12:37.735 02:13:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:12:37.735 02:13:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:37.735 Running I/O for 10 seconds... 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:12:40.265 "tick_rate": 2100005139, 00:12:40.265 "ticks": 691884309686, 00:12:40.265 "bdevs": [ 00:12:40.265 { 00:12:40.265 "name": "Malloc_STAT", 00:12:40.265 "bytes_read": 12552540672, 00:12:40.265 "num_read_ops": 3064579, 00:12:40.265 "bytes_written": 0, 00:12:40.265 "num_write_ops": 0, 00:12:40.265 "bytes_unmapped": 0, 00:12:40.265 "num_unmap_ops": 0, 00:12:40.265 "bytes_copied": 0, 00:12:40.265 "num_copy_ops": 0, 00:12:40.265 "read_latency_ticks": 2155802279004, 00:12:40.265 "max_read_latency_ticks": 1626786, 00:12:40.265 "min_read_latency_ticks": 58124, 00:12:40.265 "write_latency_ticks": 0, 00:12:40.265 "max_write_latency_ticks": 0, 00:12:40.265 "min_write_latency_ticks": 0, 00:12:40.265 "unmap_latency_ticks": 0, 00:12:40.265 "max_unmap_latency_ticks": 0, 00:12:40.265 "min_unmap_latency_ticks": 0, 00:12:40.265 "copy_latency_ticks": 0, 00:12:40.265 "max_copy_latency_ticks": 0, 00:12:40.265 "min_copy_latency_ticks": 0, 00:12:40.265 "io_error": {} 00:12:40.265 } 00:12:40.265 ] 00:12:40.265 }' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3064579 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.265 02:13:27 
blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:12:40.265 "tick_rate": 2100005139, 00:12:40.265 "ticks": 691944463960, 00:12:40.265 "name": "Malloc_STAT", 00:12:40.265 "channels": [ 00:12:40.265 { 00:12:40.265 "thread_id": 2, 00:12:40.265 "bytes_read": 6642728960, 00:12:40.265 "num_read_ops": 1621760, 00:12:40.265 "bytes_written": 0, 00:12:40.265 "num_write_ops": 0, 00:12:40.265 "bytes_unmapped": 0, 00:12:40.265 "num_unmap_ops": 0, 00:12:40.265 "bytes_copied": 0, 00:12:40.265 "num_copy_ops": 0, 00:12:40.265 "read_latency_ticks": 1093212070958, 00:12:40.265 "max_read_latency_ticks": 1626786, 00:12:40.265 "min_read_latency_ticks": 593566, 00:12:40.265 "write_latency_ticks": 0, 00:12:40.265 "max_write_latency_ticks": 0, 00:12:40.265 "min_write_latency_ticks": 0, 00:12:40.265 "unmap_latency_ticks": 0, 00:12:40.265 "max_unmap_latency_ticks": 0, 00:12:40.265 "min_unmap_latency_ticks": 0, 00:12:40.265 "copy_latency_ticks": 0, 00:12:40.265 "max_copy_latency_ticks": 0, 00:12:40.265 "min_copy_latency_ticks": 0 00:12:40.265 }, 00:12:40.265 { 00:12:40.265 "thread_id": 3, 00:12:40.265 "bytes_read": 6094323712, 00:12:40.265 "num_read_ops": 1487872, 00:12:40.265 "bytes_written": 0, 00:12:40.265 "num_write_ops": 0, 00:12:40.265 "bytes_unmapped": 0, 00:12:40.265 "num_unmap_ops": 0, 00:12:40.265 "bytes_copied": 0, 00:12:40.265 "num_copy_ops": 0, 00:12:40.265 "read_latency_ticks": 1093430642230, 00:12:40.265 "max_read_latency_ticks": 1306272, 00:12:40.265 "min_read_latency_ticks": 630632, 00:12:40.265 "write_latency_ticks": 0, 00:12:40.265 "max_write_latency_ticks": 0, 00:12:40.265 "min_write_latency_ticks": 0, 00:12:40.265 "unmap_latency_ticks": 0, 00:12:40.265 "max_unmap_latency_ticks": 0, 00:12:40.265 "min_unmap_latency_ticks": 0, 00:12:40.265 "copy_latency_ticks": 0, 00:12:40.265 "max_copy_latency_ticks": 0, 00:12:40.265 "min_copy_latency_ticks": 0 00:12:40.265 } 00:12:40.265 ] 00:12:40.265 }' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1621760 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1621760 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1487872 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3109632 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:12:40.265 "tick_rate": 2100005139, 00:12:40.265 "ticks": 692030676336, 00:12:40.265 "bdevs": [ 00:12:40.265 { 00:12:40.265 "name": "Malloc_STAT", 00:12:40.265 "bytes_read": 12997136896, 00:12:40.265 "num_read_ops": 3173123, 00:12:40.265 "bytes_written": 0, 00:12:40.265 "num_write_ops": 0, 00:12:40.265 "bytes_unmapped": 0, 00:12:40.265 "num_unmap_ops": 0, 00:12:40.265 "bytes_copied": 0, 00:12:40.265 "num_copy_ops": 0, 00:12:40.265 
"read_latency_ticks": 2230685663524, 00:12:40.265 "max_read_latency_ticks": 1626786, 00:12:40.265 "min_read_latency_ticks": 58124, 00:12:40.265 "write_latency_ticks": 0, 00:12:40.265 "max_write_latency_ticks": 0, 00:12:40.265 "min_write_latency_ticks": 0, 00:12:40.265 "unmap_latency_ticks": 0, 00:12:40.265 "max_unmap_latency_ticks": 0, 00:12:40.265 "min_unmap_latency_ticks": 0, 00:12:40.265 "copy_latency_ticks": 0, 00:12:40.265 "max_copy_latency_ticks": 0, 00:12:40.265 "min_copy_latency_ticks": 0, 00:12:40.265 "io_error": {} 00:12:40.265 } 00:12:40.265 ] 00:12:40.265 }' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3173123 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3109632 -lt 3064579 ']' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3109632 -gt 3173123 ']' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 00:12:40.265 Latency(us) 00:12:40.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.265 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:40.265 Malloc_STAT : 2.10 797389.05 3114.80 0.00 0.00 320.72 56.81 776.29 00:12:40.265 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:40.265 Malloc_STAT : 2.10 730728.23 2854.41 0.00 0.00 350.01 70.22 624.15 00:12:40.265 =================================================================================================================== 00:12:40.265 Total : 1528117.28 5969.21 0.00 0.00 334.73 56.81 776.29 00:12:40.265 0 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48819 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 48819 ']' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 48819 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps -c -o command 48819 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # tail -1 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:12:40.265 killing process with pid 48819 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48819' 00:12:40.265 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 48819 00:12:40.265 Received shutdown signal, test time was about 2.136699 seconds 00:12:40.265 00:12:40.265 Latency(us) 00:12:40.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.266 
=================================================================================================================== 00:12:40.266 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.266 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 48819 00:12:40.266 02:13:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:12:40.266 00:12:40.266 real 0m3.620s 00:12:40.266 user 0m6.046s 00:12:40.266 sys 0m0.987s 00:12:40.266 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:40.266 02:13:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:40.266 ************************************ 00:12:40.266 END TEST bdev_stat 00:12:40.266 ************************************ 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:12:40.266 02:13:28 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:12:40.266 00:12:40.266 real 1m33.006s 00:12:40.266 user 4m29.292s 00:12:40.266 sys 0m26.882s 00:12:40.266 02:13:28 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:40.266 02:13:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:40.266 ************************************ 00:12:40.266 END TEST blockdev_general 00:12:40.266 ************************************ 00:12:40.266 02:13:28 -- spdk/autotest.sh@186 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:40.266 02:13:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:40.266 02:13:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:40.266 02:13:28 -- common/autotest_common.sh@10 -- # set +x 00:12:40.266 ************************************ 00:12:40.266 START TEST bdev_raid 00:12:40.266 ************************************ 00:12:40.266 02:13:28 bdev_raid -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:40.266 * Looking for test storage... 
00:12:40.266 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:12:40.266 02:13:28 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:40.266 02:13:28 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:12:40.266 02:13:28 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:12:40.525 02:13:28 bdev_raid -- bdev/bdev_raid.sh@788 -- # trap 'on_error_exit;' ERR 00:12:40.525 02:13:28 bdev_raid -- bdev/bdev_raid.sh@790 -- # base_blocklen=512 00:12:40.525 02:13:28 bdev_raid -- bdev/bdev_raid.sh@792 -- # uname -s 00:12:40.525 02:13:28 bdev_raid -- bdev/bdev_raid.sh@792 -- # '[' FreeBSD = Linux ']' 00:12:40.525 02:13:28 bdev_raid -- bdev/bdev_raid.sh@799 -- # run_test raid0_resize_test raid0_resize_test 00:12:40.525 02:13:28 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:40.525 02:13:28 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:40.525 02:13:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.525 ************************************ 00:12:40.525 START TEST raid0_resize_test 00:12:40.525 ************************************ 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # raid_pid=48923 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # echo 'Process raid pid: 48923' 00:12:40.525 Process raid pid: 48923 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # waitforlisten 48923 /var/tmp/spdk-raid.sock 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 48923 ']' 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:40.525 02:13:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:40.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:40.526 02:13:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:40.526 02:13:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:40.526 02:13:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.526 [2024-05-15 02:13:28.295572] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:40.526 [2024-05-15 02:13:28.295845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:41.093 EAL: TSC is not safe to use in SMP mode 00:12:41.093 EAL: TSC is not invariant 00:12:41.093 [2024-05-15 02:13:29.083762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.352 [2024-05-15 02:13:29.173425] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:41.352 [2024-05-15 02:13:29.175639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.352 [2024-05-15 02:13:29.176435] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.352 [2024-05-15 02:13:29.176454] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.611 02:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:41.611 02:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:12:41.611 02:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:12:41.869 Base_1 00:12:41.869 02:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:12:41.869 Base_2 00:12:41.869 02:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@363 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:12:42.128 [2024-05-15 02:13:30.071787] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:42.128 [2024-05-15 02:13:30.072267] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:42.128 [2024-05-15 02:13:30.072291] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x829da7a00 00:12:42.128 [2024-05-15 02:13:30.072295] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:42.128 [2024-05-15 02:13:30.072331] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e0ae20 00:12:42.128 [2024-05-15 02:13:30.072387] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829da7a00 00:12:42.128 [2024-05-15 02:13:30.072391] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x829da7a00 00:12:42.128 [2024-05-15 02:13:30.072424] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.128 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:12:42.387 [2024-05-15 02:13:30.295787] bdev_raid.c:2232:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:42.387 [2024-05-15 02:13:30.295821] bdev_raid.c:2246:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:42.387 true 00:12:42.387 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:42.387 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # jq '.[].num_blocks' 00:12:42.646 [2024-05-15 02:13:30.519819] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.646 02:13:30 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # blkcnt=131072 00:12:42.646 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # raid_size_mb=64 00:12:42.646 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # '[' 64 '!=' 64 ']' 00:12:42.646 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:12:42.905 [2024-05-15 02:13:30.847778] bdev_raid.c:2232:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:42.905 [2024-05-15 02:13:30.847806] bdev_raid.c:2246:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:42.905 [2024-05-15 02:13:30.847845] bdev_raid.c:2260:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:12:42.905 true 00:12:42.905 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:42.905 02:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # jq '.[].num_blocks' 00:12:43.163 [2024-05-15 02:13:31.135790] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # blkcnt=262144 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # raid_size_mb=128 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 48923 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 48923 ']' 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 48923 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:12:43.163 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # tail -1 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps -c -o command 48923 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:12:43.422 killing process with pid 48923 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48923' 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 48923 00:12:43.422 [2024-05-15 02:13:31.169450] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.422 [2024-05-15 02:13:31.169485] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.422 [2024-05-15 02:13:31.169498] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.422 [2024-05-15 02:13:31.169503] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829da7a00 name Raid, state offline 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 48923 00:12:43.422 [2024-05-15 02:13:31.169653] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.422 02:13:31 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:43.422 00:12:43.422 real 0m3.052s 00:12:43.422 user 0m4.264s 00:12:43.422 sys 0m1.125s 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.422 ************************************ 00:12:43.422 END TEST raid0_resize_test 00:12:43.422 ************************************ 00:12:43.422 02:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.422 02:13:31 bdev_raid -- bdev/bdev_raid.sh@801 -- # for n in {2..4} 00:12:43.422 02:13:31 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:12:43.422 02:13:31 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:43.422 02:13:31 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:43.422 02:13:31 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.422 02:13:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.422 ************************************ 00:12:43.422 START TEST raid_state_function_test 00:12:43.422 ************************************ 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:12:43.422 02:13:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=48973 00:12:43.422 Process raid pid: 48973 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 48973' 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 48973 /var/tmp/spdk-raid.sock 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 48973 ']' 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:43.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:43.422 02:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.422 [2024-05-15 02:13:31.387270] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:43.422 [2024-05-15 02:13:31.387514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:43.988 EAL: TSC is not safe to use in SMP mode 00:12:43.988 EAL: TSC is not invariant 00:12:43.988 [2024-05-15 02:13:31.877605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.988 [2024-05-15 02:13:31.977271] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
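Both raid suites in this run drive a bare bdev_svc application over a dedicated RPC socket (/var/tmp/spdk-raid.sock) rather than the default one, and assemble raid bdevs through it. A minimal sketch of that setup, using the same paths and arguments as the trace:

    # start the helper app on its own socket with raid debug logging
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    rpc_py="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # once it listens, a raid0 bdev is created from two base bdevs
    $rpc_py bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid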
00:12:43.988 [2024-05-15 02:13:31.979920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.988 [2024-05-15 02:13:31.980835] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.988 [2024-05-15 02:13:31.980853] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.555 02:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:44.555 02:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:12:44.555 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:44.812 [2024-05-15 02:13:32.753346] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.812 [2024-05-15 02:13:32.753424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.812 [2024-05-15 02:13:32.753442] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.812 [2024-05-15 02:13:32.753459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:44.812 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:44.813 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.813 02:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.378 02:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:45.378 "name": "Existed_Raid", 00:12:45.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.378 "strip_size_kb": 64, 00:12:45.378 "state": "configuring", 00:12:45.378 "raid_level": "raid0", 00:12:45.378 "superblock": false, 00:12:45.378 "num_base_bdevs": 2, 00:12:45.378 "num_base_bdevs_discovered": 0, 00:12:45.378 "num_base_bdevs_operational": 2, 00:12:45.378 "base_bdevs_list": [ 00:12:45.378 { 00:12:45.378 "name": "BaseBdev1", 00:12:45.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.378 "is_configured": false, 00:12:45.378 "data_offset": 0, 00:12:45.378 "data_size": 0 00:12:45.378 }, 00:12:45.378 { 00:12:45.378 "name": 
"BaseBdev2", 00:12:45.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.378 "is_configured": false, 00:12:45.378 "data_offset": 0, 00:12:45.378 "data_size": 0 00:12:45.378 } 00:12:45.378 ] 00:12:45.378 }' 00:12:45.378 02:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:45.378 02:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.657 02:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:45.941 [2024-05-15 02:13:33.729335] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.941 [2024-05-15 02:13:33.729371] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828aec500 name Existed_Raid, state configuring 00:12:45.941 02:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:46.200 [2024-05-15 02:13:34.061329] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.200 [2024-05-15 02:13:34.061389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.200 [2024-05-15 02:13:34.061394] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.200 [2024-05-15 02:13:34.061402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.200 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:46.459 [2024-05-15 02:13:34.362316] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.459 BaseBdev1 00:12:46.459 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:46.459 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:46.459 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:46.459 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:46.459 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:46.459 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:46.459 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:46.719 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:46.978 [ 00:12:46.978 { 00:12:46.978 "name": "BaseBdev1", 00:12:46.978 "aliases": [ 00:12:46.978 "bb7f3e86-1260-11ef-99fd-bfc7c66e2865" 00:12:46.978 ], 00:12:46.978 "product_name": "Malloc disk", 00:12:46.978 "block_size": 512, 00:12:46.978 "num_blocks": 65536, 00:12:46.978 "uuid": "bb7f3e86-1260-11ef-99fd-bfc7c66e2865", 00:12:46.978 "assigned_rate_limits": { 00:12:46.978 "rw_ios_per_sec": 0, 00:12:46.978 "rw_mbytes_per_sec": 0, 00:12:46.978 "r_mbytes_per_sec": 0, 00:12:46.978 
"w_mbytes_per_sec": 0 00:12:46.978 }, 00:12:46.978 "claimed": true, 00:12:46.978 "claim_type": "exclusive_write", 00:12:46.978 "zoned": false, 00:12:46.978 "supported_io_types": { 00:12:46.978 "read": true, 00:12:46.978 "write": true, 00:12:46.978 "unmap": true, 00:12:46.978 "write_zeroes": true, 00:12:46.978 "flush": true, 00:12:46.978 "reset": true, 00:12:46.978 "compare": false, 00:12:46.978 "compare_and_write": false, 00:12:46.978 "abort": true, 00:12:46.978 "nvme_admin": false, 00:12:46.978 "nvme_io": false 00:12:46.978 }, 00:12:46.978 "memory_domains": [ 00:12:46.978 { 00:12:46.978 "dma_device_id": "system", 00:12:46.978 "dma_device_type": 1 00:12:46.978 }, 00:12:46.978 { 00:12:46.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.978 "dma_device_type": 2 00:12:46.978 } 00:12:46.978 ], 00:12:46.978 "driver_specific": {} 00:12:46.978 } 00:12:46.978 ] 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.237 02:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.496 02:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:47.496 "name": "Existed_Raid", 00:12:47.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.496 "strip_size_kb": 64, 00:12:47.496 "state": "configuring", 00:12:47.496 "raid_level": "raid0", 00:12:47.496 "superblock": false, 00:12:47.496 "num_base_bdevs": 2, 00:12:47.496 "num_base_bdevs_discovered": 1, 00:12:47.496 "num_base_bdevs_operational": 2, 00:12:47.496 "base_bdevs_list": [ 00:12:47.496 { 00:12:47.496 "name": "BaseBdev1", 00:12:47.496 "uuid": "bb7f3e86-1260-11ef-99fd-bfc7c66e2865", 00:12:47.496 "is_configured": true, 00:12:47.496 "data_offset": 0, 00:12:47.496 "data_size": 65536 00:12:47.496 }, 00:12:47.496 { 00:12:47.496 "name": "BaseBdev2", 00:12:47.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.496 "is_configured": false, 00:12:47.496 "data_offset": 0, 00:12:47.496 "data_size": 0 00:12:47.496 } 00:12:47.496 ] 00:12:47.496 }' 00:12:47.496 02:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:47.496 
02:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.754 02:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:48.012 [2024-05-15 02:13:35.837377] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.013 [2024-05-15 02:13:35.837415] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828aec500 name Existed_Raid, state configuring 00:12:48.013 02:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:48.272 [2024-05-15 02:13:36.133390] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.272 [2024-05-15 02:13:36.134101] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.272 [2024-05-15 02:13:36.134148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.272 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.840 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:48.840 "name": "Existed_Raid", 00:12:48.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.840 "strip_size_kb": 64, 00:12:48.840 "state": "configuring", 00:12:48.840 "raid_level": "raid0", 00:12:48.840 "superblock": false, 00:12:48.840 "num_base_bdevs": 2, 00:12:48.840 "num_base_bdevs_discovered": 1, 00:12:48.840 "num_base_bdevs_operational": 2, 00:12:48.840 "base_bdevs_list": [ 00:12:48.840 { 00:12:48.840 "name": "BaseBdev1", 00:12:48.840 "uuid": "bb7f3e86-1260-11ef-99fd-bfc7c66e2865", 00:12:48.840 "is_configured": true, 00:12:48.840 "data_offset": 0, 00:12:48.840 
"data_size": 65536 00:12:48.840 }, 00:12:48.840 { 00:12:48.840 "name": "BaseBdev2", 00:12:48.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.840 "is_configured": false, 00:12:48.840 "data_offset": 0, 00:12:48.840 "data_size": 0 00:12:48.840 } 00:12:48.840 ] 00:12:48.840 }' 00:12:48.840 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:48.840 02:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.098 02:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.356 [2024-05-15 02:13:37.141566] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.356 [2024-05-15 02:13:37.141603] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x828aeca00 00:12:49.356 [2024-05-15 02:13:37.141608] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:49.356 [2024-05-15 02:13:37.141629] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x828b4fec0 00:12:49.356 [2024-05-15 02:13:37.141716] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x828aeca00 00:12:49.356 [2024-05-15 02:13:37.141720] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x828aeca00 00:12:49.356 [2024-05-15 02:13:37.141762] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.357 BaseBdev2 00:12:49.357 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:49.357 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:49.357 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:49.357 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:49.357 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:49.357 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:49.357 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.614 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.873 [ 00:12:49.873 { 00:12:49.873 "name": "BaseBdev2", 00:12:49.873 "aliases": [ 00:12:49.873 "bd2773a4-1260-11ef-99fd-bfc7c66e2865" 00:12:49.873 ], 00:12:49.873 "product_name": "Malloc disk", 00:12:49.873 "block_size": 512, 00:12:49.873 "num_blocks": 65536, 00:12:49.873 "uuid": "bd2773a4-1260-11ef-99fd-bfc7c66e2865", 00:12:49.873 "assigned_rate_limits": { 00:12:49.873 "rw_ios_per_sec": 0, 00:12:49.873 "rw_mbytes_per_sec": 0, 00:12:49.873 "r_mbytes_per_sec": 0, 00:12:49.873 "w_mbytes_per_sec": 0 00:12:49.873 }, 00:12:49.873 "claimed": true, 00:12:49.873 "claim_type": "exclusive_write", 00:12:49.873 "zoned": false, 00:12:49.873 "supported_io_types": { 00:12:49.873 "read": true, 00:12:49.873 "write": true, 00:12:49.873 "unmap": true, 00:12:49.873 "write_zeroes": true, 00:12:49.873 "flush": true, 00:12:49.873 "reset": true, 00:12:49.873 "compare": false, 
00:12:49.873 "compare_and_write": false, 00:12:49.873 "abort": true, 00:12:49.873 "nvme_admin": false, 00:12:49.873 "nvme_io": false 00:12:49.873 }, 00:12:49.873 "memory_domains": [ 00:12:49.873 { 00:12:49.873 "dma_device_id": "system", 00:12:49.873 "dma_device_type": 1 00:12:49.873 }, 00:12:49.873 { 00:12:49.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.873 "dma_device_type": 2 00:12:49.873 } 00:12:49.873 ], 00:12:49.873 "driver_specific": {} 00:12:49.873 } 00:12:49.873 ] 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.873 02:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.140 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:50.140 "name": "Existed_Raid", 00:12:50.140 "uuid": "bd277a49-1260-11ef-99fd-bfc7c66e2865", 00:12:50.140 "strip_size_kb": 64, 00:12:50.140 "state": "online", 00:12:50.140 "raid_level": "raid0", 00:12:50.140 "superblock": false, 00:12:50.140 "num_base_bdevs": 2, 00:12:50.140 "num_base_bdevs_discovered": 2, 00:12:50.140 "num_base_bdevs_operational": 2, 00:12:50.140 "base_bdevs_list": [ 00:12:50.140 { 00:12:50.140 "name": "BaseBdev1", 00:12:50.140 "uuid": "bb7f3e86-1260-11ef-99fd-bfc7c66e2865", 00:12:50.140 "is_configured": true, 00:12:50.140 "data_offset": 0, 00:12:50.140 "data_size": 65536 00:12:50.140 }, 00:12:50.140 { 00:12:50.140 "name": "BaseBdev2", 00:12:50.140 "uuid": "bd2773a4-1260-11ef-99fd-bfc7c66e2865", 00:12:50.140 "is_configured": true, 00:12:50.140 "data_offset": 0, 00:12:50.140 "data_size": 65536 00:12:50.140 } 00:12:50.140 ] 00:12:50.140 }' 00:12:50.140 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:50.140 02:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.410 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # 
verify_raid_bdev_properties Existed_Raid 00:12:50.410 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:50.410 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:50.410 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:50.410 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:50.410 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:50.410 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:50.411 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:50.669 [2024-05-15 02:13:38.593503] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.669 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:50.669 "name": "Existed_Raid", 00:12:50.669 "aliases": [ 00:12:50.669 "bd277a49-1260-11ef-99fd-bfc7c66e2865" 00:12:50.669 ], 00:12:50.669 "product_name": "Raid Volume", 00:12:50.669 "block_size": 512, 00:12:50.669 "num_blocks": 131072, 00:12:50.669 "uuid": "bd277a49-1260-11ef-99fd-bfc7c66e2865", 00:12:50.669 "assigned_rate_limits": { 00:12:50.669 "rw_ios_per_sec": 0, 00:12:50.669 "rw_mbytes_per_sec": 0, 00:12:50.669 "r_mbytes_per_sec": 0, 00:12:50.669 "w_mbytes_per_sec": 0 00:12:50.669 }, 00:12:50.669 "claimed": false, 00:12:50.669 "zoned": false, 00:12:50.669 "supported_io_types": { 00:12:50.669 "read": true, 00:12:50.669 "write": true, 00:12:50.669 "unmap": true, 00:12:50.669 "write_zeroes": true, 00:12:50.669 "flush": true, 00:12:50.669 "reset": true, 00:12:50.669 "compare": false, 00:12:50.669 "compare_and_write": false, 00:12:50.669 "abort": false, 00:12:50.669 "nvme_admin": false, 00:12:50.669 "nvme_io": false 00:12:50.669 }, 00:12:50.669 "memory_domains": [ 00:12:50.669 { 00:12:50.669 "dma_device_id": "system", 00:12:50.669 "dma_device_type": 1 00:12:50.669 }, 00:12:50.669 { 00:12:50.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.669 "dma_device_type": 2 00:12:50.669 }, 00:12:50.669 { 00:12:50.669 "dma_device_id": "system", 00:12:50.669 "dma_device_type": 1 00:12:50.669 }, 00:12:50.669 { 00:12:50.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.669 "dma_device_type": 2 00:12:50.669 } 00:12:50.669 ], 00:12:50.669 "driver_specific": { 00:12:50.669 "raid": { 00:12:50.669 "uuid": "bd277a49-1260-11ef-99fd-bfc7c66e2865", 00:12:50.669 "strip_size_kb": 64, 00:12:50.669 "state": "online", 00:12:50.669 "raid_level": "raid0", 00:12:50.669 "superblock": false, 00:12:50.669 "num_base_bdevs": 2, 00:12:50.669 "num_base_bdevs_discovered": 2, 00:12:50.669 "num_base_bdevs_operational": 2, 00:12:50.669 "base_bdevs_list": [ 00:12:50.669 { 00:12:50.669 "name": "BaseBdev1", 00:12:50.669 "uuid": "bb7f3e86-1260-11ef-99fd-bfc7c66e2865", 00:12:50.669 "is_configured": true, 00:12:50.669 "data_offset": 0, 00:12:50.669 "data_size": 65536 00:12:50.669 }, 00:12:50.669 { 00:12:50.669 "name": "BaseBdev2", 00:12:50.669 "uuid": "bd2773a4-1260-11ef-99fd-bfc7c66e2865", 00:12:50.669 "is_configured": true, 00:12:50.669 "data_offset": 0, 00:12:50.669 "data_size": 65536 00:12:50.669 } 00:12:50.669 ] 00:12:50.669 } 00:12:50.669 } 00:12:50.669 }' 00:12:50.669 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 
-- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.669 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:50.669 BaseBdev2' 00:12:50.669 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:50.669 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:50.669 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:50.928 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:50.928 "name": "BaseBdev1", 00:12:50.928 "aliases": [ 00:12:50.928 "bb7f3e86-1260-11ef-99fd-bfc7c66e2865" 00:12:50.928 ], 00:12:50.928 "product_name": "Malloc disk", 00:12:50.928 "block_size": 512, 00:12:50.928 "num_blocks": 65536, 00:12:50.928 "uuid": "bb7f3e86-1260-11ef-99fd-bfc7c66e2865", 00:12:50.928 "assigned_rate_limits": { 00:12:50.928 "rw_ios_per_sec": 0, 00:12:50.928 "rw_mbytes_per_sec": 0, 00:12:50.928 "r_mbytes_per_sec": 0, 00:12:50.928 "w_mbytes_per_sec": 0 00:12:50.928 }, 00:12:50.928 "claimed": true, 00:12:50.928 "claim_type": "exclusive_write", 00:12:50.928 "zoned": false, 00:12:50.928 "supported_io_types": { 00:12:50.928 "read": true, 00:12:50.928 "write": true, 00:12:50.928 "unmap": true, 00:12:50.928 "write_zeroes": true, 00:12:50.928 "flush": true, 00:12:50.928 "reset": true, 00:12:50.928 "compare": false, 00:12:50.928 "compare_and_write": false, 00:12:50.928 "abort": true, 00:12:50.928 "nvme_admin": false, 00:12:50.928 "nvme_io": false 00:12:50.928 }, 00:12:50.928 "memory_domains": [ 00:12:50.928 { 00:12:50.928 "dma_device_id": "system", 00:12:50.928 "dma_device_type": 1 00:12:50.928 }, 00:12:50.928 { 00:12:50.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.928 "dma_device_type": 2 00:12:50.928 } 00:12:50.928 ], 00:12:50.928 "driver_specific": {} 00:12:50.928 }' 00:12:50.928 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:50.928 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:50.928 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:50.928 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:51.187 02:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:51.446 "name": "BaseBdev2", 00:12:51.446 "aliases": [ 00:12:51.446 "bd2773a4-1260-11ef-99fd-bfc7c66e2865" 00:12:51.446 ], 00:12:51.446 "product_name": "Malloc disk", 00:12:51.446 "block_size": 512, 00:12:51.446 "num_blocks": 65536, 00:12:51.446 "uuid": "bd2773a4-1260-11ef-99fd-bfc7c66e2865", 00:12:51.446 "assigned_rate_limits": { 00:12:51.446 "rw_ios_per_sec": 0, 00:12:51.446 "rw_mbytes_per_sec": 0, 00:12:51.446 "r_mbytes_per_sec": 0, 00:12:51.446 "w_mbytes_per_sec": 0 00:12:51.446 }, 00:12:51.446 "claimed": true, 00:12:51.446 "claim_type": "exclusive_write", 00:12:51.446 "zoned": false, 00:12:51.446 "supported_io_types": { 00:12:51.446 "read": true, 00:12:51.446 "write": true, 00:12:51.446 "unmap": true, 00:12:51.446 "write_zeroes": true, 00:12:51.446 "flush": true, 00:12:51.446 "reset": true, 00:12:51.446 "compare": false, 00:12:51.446 "compare_and_write": false, 00:12:51.446 "abort": true, 00:12:51.446 "nvme_admin": false, 00:12:51.446 "nvme_io": false 00:12:51.446 }, 00:12:51.446 "memory_domains": [ 00:12:51.446 { 00:12:51.446 "dma_device_id": "system", 00:12:51.446 "dma_device_type": 1 00:12:51.446 }, 00:12:51.446 { 00:12:51.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.446 "dma_device_type": 2 00:12:51.446 } 00:12:51.446 ], 00:12:51.446 "driver_specific": {} 00:12:51.446 }' 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:51.446 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:51.706 [2024-05-15 02:13:39.489496] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.706 [2024-05-15 02:13:39.489526] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.706 [2024-05-15 02:13:39.489541] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # 
has_redundancy raid0 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.706 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.964 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:51.964 "name": "Existed_Raid", 00:12:51.964 "uuid": "bd277a49-1260-11ef-99fd-bfc7c66e2865", 00:12:51.964 "strip_size_kb": 64, 00:12:51.964 "state": "offline", 00:12:51.964 "raid_level": "raid0", 00:12:51.964 "superblock": false, 00:12:51.964 "num_base_bdevs": 2, 00:12:51.964 "num_base_bdevs_discovered": 1, 00:12:51.964 "num_base_bdevs_operational": 1, 00:12:51.964 "base_bdevs_list": [ 00:12:51.964 { 00:12:51.964 "name": null, 00:12:51.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.964 "is_configured": false, 00:12:51.964 "data_offset": 0, 00:12:51.964 "data_size": 65536 00:12:51.964 }, 00:12:51.964 { 00:12:51.964 "name": "BaseBdev2", 00:12:51.964 "uuid": "bd2773a4-1260-11ef-99fd-bfc7c66e2865", 00:12:51.964 "is_configured": true, 00:12:51.964 "data_offset": 0, 00:12:51.964 "data_size": 65536 00:12:51.964 } 00:12:51.964 ] 00:12:51.964 }' 00:12:51.964 02:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:51.964 02:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.222 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:52.222 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.222 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:52.222 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.480 02:13:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:52.480 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.480 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:52.739 [2024-05-15 02:13:40.654326] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.739 [2024-05-15 02:13:40.654367] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828aeca00 name Existed_Raid, state offline 00:12:52.739 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.739 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.739 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:52.739 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 48973 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 48973 ']' 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 48973 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 48973 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:12:52.996 killing process with pid 48973 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48973' 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 48973 00:12:52.996 [2024-05-15 02:13:40.916696] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.996 [2024-05-15 02:13:40.916737] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.996 02:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 48973 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:12:53.254 00:12:53.254 real 0m9.688s 00:12:53.254 user 0m17.136s 00:12:53.254 sys 0m1.524s 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:53.254 ************************************ 00:12:53.254 END TEST raid_state_function_test 00:12:53.254 ************************************ 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.254 02:13:41 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:53.254 02:13:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:53.254 02:13:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:53.254 02:13:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.254 ************************************ 00:12:53.254 START TEST raid_state_function_test_sb 00:12:53.254 ************************************ 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=49248 00:12:53.254 Process raid pid: 49248 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 49248' 
00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 49248 /var/tmp/spdk-raid.sock 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 49248 ']' 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:53.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:53.254 02:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.254 [2024-05-15 02:13:41.125928] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:53.254 [2024-05-15 02:13:41.126218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:53.833 EAL: TSC is not safe to use in SMP mode 00:12:53.833 EAL: TSC is not invariant 00:12:53.833 [2024-05-15 02:13:41.584554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.833 [2024-05-15 02:13:41.682000] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:12:53.833 [2024-05-15 02:13:41.685367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.833 [2024-05-15 02:13:41.686685] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.833 [2024-05-15 02:13:41.686713] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.397 02:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.397 02:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:12:54.397 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:54.654 [2024-05-15 02:13:42.442144] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.654 [2024-05-15 02:13:42.442234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.654 [2024-05-15 02:13:42.442241] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.654 [2024-05-15 02:13:42.442254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.654 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.910 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:54.910 "name": "Existed_Raid", 00:12:54.910 "uuid": "c05045f2-1260-11ef-99fd-bfc7c66e2865", 00:12:54.910 "strip_size_kb": 64, 00:12:54.910 "state": "configuring", 00:12:54.910 "raid_level": "raid0", 00:12:54.910 "superblock": true, 00:12:54.910 "num_base_bdevs": 2, 00:12:54.910 "num_base_bdevs_discovered": 0, 00:12:54.910 "num_base_bdevs_operational": 2, 00:12:54.910 "base_bdevs_list": [ 00:12:54.910 { 00:12:54.910 "name": "BaseBdev1", 00:12:54.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.910 "is_configured": false, 00:12:54.910 "data_offset": 0, 00:12:54.910 "data_size": 0 
00:12:54.910 }, 00:12:54.910 { 00:12:54.910 "name": "BaseBdev2", 00:12:54.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.910 "is_configured": false, 00:12:54.910 "data_offset": 0, 00:12:54.910 "data_size": 0 00:12:54.910 } 00:12:54.910 ] 00:12:54.910 }' 00:12:54.910 02:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:54.910 02:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.165 02:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:55.730 [2024-05-15 02:13:43.438112] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.730 [2024-05-15 02:13:43.438149] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82acef500 name Existed_Raid, state configuring 00:12:55.730 02:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:55.730 [2024-05-15 02:13:43.706123] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.730 [2024-05-15 02:13:43.706177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.730 [2024-05-15 02:13:43.706182] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.730 [2024-05-15 02:13:43.706190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.730 02:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.293 [2024-05-15 02:13:43.999110] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.293 BaseBdev1 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.293 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.552 [ 00:12:56.552 { 00:12:56.552 "name": "BaseBdev1", 00:12:56.552 "aliases": [ 00:12:56.552 "c13db2a7-1260-11ef-99fd-bfc7c66e2865" 00:12:56.552 ], 00:12:56.552 "product_name": "Malloc disk", 00:12:56.552 "block_size": 512, 00:12:56.552 "num_blocks": 65536, 00:12:56.552 "uuid": "c13db2a7-1260-11ef-99fd-bfc7c66e2865", 00:12:56.552 "assigned_rate_limits": { 00:12:56.552 "rw_ios_per_sec": 0, 
00:12:56.552 "rw_mbytes_per_sec": 0, 00:12:56.552 "r_mbytes_per_sec": 0, 00:12:56.552 "w_mbytes_per_sec": 0 00:12:56.552 }, 00:12:56.552 "claimed": true, 00:12:56.552 "claim_type": "exclusive_write", 00:12:56.552 "zoned": false, 00:12:56.552 "supported_io_types": { 00:12:56.552 "read": true, 00:12:56.552 "write": true, 00:12:56.552 "unmap": true, 00:12:56.552 "write_zeroes": true, 00:12:56.552 "flush": true, 00:12:56.552 "reset": true, 00:12:56.552 "compare": false, 00:12:56.552 "compare_and_write": false, 00:12:56.552 "abort": true, 00:12:56.552 "nvme_admin": false, 00:12:56.552 "nvme_io": false 00:12:56.552 }, 00:12:56.552 "memory_domains": [ 00:12:56.552 { 00:12:56.552 "dma_device_id": "system", 00:12:56.552 "dma_device_type": 1 00:12:56.552 }, 00:12:56.552 { 00:12:56.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.552 "dma_device_type": 2 00:12:56.552 } 00:12:56.552 ], 00:12:56.552 "driver_specific": {} 00:12:56.552 } 00:12:56.552 ] 00:12:56.552 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:56.552 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:56.552 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:56.552 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.811 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.070 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:57.070 "name": "Existed_Raid", 00:12:57.070 "uuid": "c1112436-1260-11ef-99fd-bfc7c66e2865", 00:12:57.070 "strip_size_kb": 64, 00:12:57.070 "state": "configuring", 00:12:57.070 "raid_level": "raid0", 00:12:57.070 "superblock": true, 00:12:57.070 "num_base_bdevs": 2, 00:12:57.070 "num_base_bdevs_discovered": 1, 00:12:57.070 "num_base_bdevs_operational": 2, 00:12:57.070 "base_bdevs_list": [ 00:12:57.070 { 00:12:57.070 "name": "BaseBdev1", 00:12:57.070 "uuid": "c13db2a7-1260-11ef-99fd-bfc7c66e2865", 00:12:57.070 "is_configured": true, 00:12:57.070 "data_offset": 2048, 00:12:57.070 "data_size": 63488 00:12:57.070 }, 00:12:57.070 { 00:12:57.070 "name": "BaseBdev2", 00:12:57.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.070 "is_configured": false, 00:12:57.070 "data_offset": 0, 00:12:57.070 "data_size": 0 00:12:57.070 } 00:12:57.070 ] 
00:12:57.070 }' 00:12:57.070 02:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:57.070 02:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.327 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:57.585 [2024-05-15 02:13:45.366138] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:57.585 [2024-05-15 02:13:45.366173] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82acef500 name Existed_Raid, state configuring 00:12:57.585 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:57.843 [2024-05-15 02:13:45.626157] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.843 [2024-05-15 02:13:45.626854] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:57.843 [2024-05-15 02:13:45.626902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.843 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.102 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:58.102 "name": "Existed_Raid", 00:12:58.102 "uuid": "c2361d68-1260-11ef-99fd-bfc7c66e2865", 00:12:58.102 "strip_size_kb": 64, 00:12:58.102 "state": "configuring", 00:12:58.102 "raid_level": "raid0", 00:12:58.102 "superblock": true, 00:12:58.102 "num_base_bdevs": 2, 00:12:58.102 "num_base_bdevs_discovered": 1, 00:12:58.102 "num_base_bdevs_operational": 2, 00:12:58.102 "base_bdevs_list": [ 
00:12:58.102 { 00:12:58.102 "name": "BaseBdev1", 00:12:58.102 "uuid": "c13db2a7-1260-11ef-99fd-bfc7c66e2865", 00:12:58.102 "is_configured": true, 00:12:58.102 "data_offset": 2048, 00:12:58.102 "data_size": 63488 00:12:58.102 }, 00:12:58.102 { 00:12:58.102 "name": "BaseBdev2", 00:12:58.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.102 "is_configured": false, 00:12:58.102 "data_offset": 0, 00:12:58.102 "data_size": 0 00:12:58.102 } 00:12:58.102 ] 00:12:58.102 }' 00:12:58.102 02:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:58.102 02:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.359 02:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.618 [2024-05-15 02:13:46.562308] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.618 [2024-05-15 02:13:46.562398] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82acefa00 00:12:58.618 [2024-05-15 02:13:46.562404] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:58.618 [2024-05-15 02:13:46.562424] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ad52ec0 00:12:58.618 [2024-05-15 02:13:46.562458] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82acefa00 00:12:58.618 [2024-05-15 02:13:46.562462] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82acefa00 00:12:58.618 [2024-05-15 02:13:46.562479] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.618 BaseBdev2 00:12:58.618 02:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:58.618 02:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:58.618 02:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:58.618 02:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:58.618 02:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:58.618 02:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:58.618 02:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:58.876 02:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:59.442 [ 00:12:59.442 { 00:12:59.442 "name": "BaseBdev2", 00:12:59.442 "aliases": [ 00:12:59.442 "c2c4f162-1260-11ef-99fd-bfc7c66e2865" 00:12:59.442 ], 00:12:59.442 "product_name": "Malloc disk", 00:12:59.442 "block_size": 512, 00:12:59.442 "num_blocks": 65536, 00:12:59.442 "uuid": "c2c4f162-1260-11ef-99fd-bfc7c66e2865", 00:12:59.442 "assigned_rate_limits": { 00:12:59.442 "rw_ios_per_sec": 0, 00:12:59.442 "rw_mbytes_per_sec": 0, 00:12:59.442 "r_mbytes_per_sec": 0, 00:12:59.442 "w_mbytes_per_sec": 0 00:12:59.442 }, 00:12:59.442 "claimed": true, 00:12:59.442 "claim_type": "exclusive_write", 00:12:59.442 "zoned": false, 00:12:59.442 
"supported_io_types": { 00:12:59.442 "read": true, 00:12:59.442 "write": true, 00:12:59.442 "unmap": true, 00:12:59.442 "write_zeroes": true, 00:12:59.442 "flush": true, 00:12:59.442 "reset": true, 00:12:59.442 "compare": false, 00:12:59.442 "compare_and_write": false, 00:12:59.442 "abort": true, 00:12:59.442 "nvme_admin": false, 00:12:59.442 "nvme_io": false 00:12:59.442 }, 00:12:59.442 "memory_domains": [ 00:12:59.442 { 00:12:59.442 "dma_device_id": "system", 00:12:59.442 "dma_device_type": 1 00:12:59.442 }, 00:12:59.442 { 00:12:59.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.442 "dma_device_type": 2 00:12:59.442 } 00:12:59.442 ], 00:12:59.442 "driver_specific": {} 00:12:59.442 } 00:12:59.442 ] 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:59.442 "name": "Existed_Raid", 00:12:59.442 "uuid": "c2361d68-1260-11ef-99fd-bfc7c66e2865", 00:12:59.442 "strip_size_kb": 64, 00:12:59.442 "state": "online", 00:12:59.442 "raid_level": "raid0", 00:12:59.442 "superblock": true, 00:12:59.442 "num_base_bdevs": 2, 00:12:59.442 "num_base_bdevs_discovered": 2, 00:12:59.442 "num_base_bdevs_operational": 2, 00:12:59.442 "base_bdevs_list": [ 00:12:59.442 { 00:12:59.442 "name": "BaseBdev1", 00:12:59.442 "uuid": "c13db2a7-1260-11ef-99fd-bfc7c66e2865", 00:12:59.442 "is_configured": true, 00:12:59.442 "data_offset": 2048, 00:12:59.442 "data_size": 63488 00:12:59.442 }, 00:12:59.442 { 00:12:59.442 "name": "BaseBdev2", 00:12:59.442 "uuid": "c2c4f162-1260-11ef-99fd-bfc7c66e2865", 00:12:59.442 "is_configured": true, 00:12:59.442 "data_offset": 2048, 00:12:59.442 "data_size": 63488 00:12:59.442 } 00:12:59.442 ] 00:12:59.442 }' 00:12:59.442 02:13:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:59.442 02:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:00.010 02:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:00.269 [2024-05-15 02:13:48.086239] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.269 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:00.269 "name": "Existed_Raid", 00:13:00.269 "aliases": [ 00:13:00.269 "c2361d68-1260-11ef-99fd-bfc7c66e2865" 00:13:00.269 ], 00:13:00.269 "product_name": "Raid Volume", 00:13:00.269 "block_size": 512, 00:13:00.269 "num_blocks": 126976, 00:13:00.269 "uuid": "c2361d68-1260-11ef-99fd-bfc7c66e2865", 00:13:00.269 "assigned_rate_limits": { 00:13:00.269 "rw_ios_per_sec": 0, 00:13:00.269 "rw_mbytes_per_sec": 0, 00:13:00.269 "r_mbytes_per_sec": 0, 00:13:00.269 "w_mbytes_per_sec": 0 00:13:00.269 }, 00:13:00.269 "claimed": false, 00:13:00.269 "zoned": false, 00:13:00.269 "supported_io_types": { 00:13:00.269 "read": true, 00:13:00.269 "write": true, 00:13:00.269 "unmap": true, 00:13:00.269 "write_zeroes": true, 00:13:00.269 "flush": true, 00:13:00.269 "reset": true, 00:13:00.269 "compare": false, 00:13:00.269 "compare_and_write": false, 00:13:00.269 "abort": false, 00:13:00.269 "nvme_admin": false, 00:13:00.269 "nvme_io": false 00:13:00.269 }, 00:13:00.269 "memory_domains": [ 00:13:00.269 { 00:13:00.269 "dma_device_id": "system", 00:13:00.269 "dma_device_type": 1 00:13:00.269 }, 00:13:00.269 { 00:13:00.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.269 "dma_device_type": 2 00:13:00.269 }, 00:13:00.269 { 00:13:00.269 "dma_device_id": "system", 00:13:00.269 "dma_device_type": 1 00:13:00.269 }, 00:13:00.269 { 00:13:00.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.269 "dma_device_type": 2 00:13:00.269 } 00:13:00.269 ], 00:13:00.269 "driver_specific": { 00:13:00.269 "raid": { 00:13:00.269 "uuid": "c2361d68-1260-11ef-99fd-bfc7c66e2865", 00:13:00.269 "strip_size_kb": 64, 00:13:00.269 "state": "online", 00:13:00.269 "raid_level": "raid0", 00:13:00.269 "superblock": true, 00:13:00.269 "num_base_bdevs": 2, 00:13:00.269 "num_base_bdevs_discovered": 2, 00:13:00.270 "num_base_bdevs_operational": 2, 00:13:00.270 "base_bdevs_list": [ 00:13:00.270 { 00:13:00.270 "name": "BaseBdev1", 00:13:00.270 "uuid": "c13db2a7-1260-11ef-99fd-bfc7c66e2865", 00:13:00.270 "is_configured": true, 00:13:00.270 "data_offset": 2048, 00:13:00.270 "data_size": 63488 00:13:00.270 }, 00:13:00.270 { 00:13:00.270 "name": "BaseBdev2", 00:13:00.270 
"uuid": "c2c4f162-1260-11ef-99fd-bfc7c66e2865", 00:13:00.270 "is_configured": true, 00:13:00.270 "data_offset": 2048, 00:13:00.270 "data_size": 63488 00:13:00.270 } 00:13:00.270 ] 00:13:00.270 } 00:13:00.270 } 00:13:00.270 }' 00:13:00.270 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:00.270 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:00.270 BaseBdev2' 00:13:00.270 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:00.270 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:00.270 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:00.556 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:00.556 "name": "BaseBdev1", 00:13:00.556 "aliases": [ 00:13:00.556 "c13db2a7-1260-11ef-99fd-bfc7c66e2865" 00:13:00.556 ], 00:13:00.556 "product_name": "Malloc disk", 00:13:00.556 "block_size": 512, 00:13:00.557 "num_blocks": 65536, 00:13:00.557 "uuid": "c13db2a7-1260-11ef-99fd-bfc7c66e2865", 00:13:00.557 "assigned_rate_limits": { 00:13:00.557 "rw_ios_per_sec": 0, 00:13:00.557 "rw_mbytes_per_sec": 0, 00:13:00.557 "r_mbytes_per_sec": 0, 00:13:00.557 "w_mbytes_per_sec": 0 00:13:00.557 }, 00:13:00.557 "claimed": true, 00:13:00.557 "claim_type": "exclusive_write", 00:13:00.557 "zoned": false, 00:13:00.557 "supported_io_types": { 00:13:00.557 "read": true, 00:13:00.557 "write": true, 00:13:00.557 "unmap": true, 00:13:00.557 "write_zeroes": true, 00:13:00.557 "flush": true, 00:13:00.557 "reset": true, 00:13:00.557 "compare": false, 00:13:00.557 "compare_and_write": false, 00:13:00.557 "abort": true, 00:13:00.557 "nvme_admin": false, 00:13:00.557 "nvme_io": false 00:13:00.557 }, 00:13:00.557 "memory_domains": [ 00:13:00.557 { 00:13:00.557 "dma_device_id": "system", 00:13:00.557 "dma_device_type": 1 00:13:00.557 }, 00:13:00.557 { 00:13:00.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.557 "dma_device_type": 2 00:13:00.557 } 00:13:00.557 ], 00:13:00.557 "driver_specific": {} 00:13:00.557 }' 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:00.557 
02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:00.557 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:00.816 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:00.816 "name": "BaseBdev2", 00:13:00.816 "aliases": [ 00:13:00.816 "c2c4f162-1260-11ef-99fd-bfc7c66e2865" 00:13:00.816 ], 00:13:00.816 "product_name": "Malloc disk", 00:13:00.816 "block_size": 512, 00:13:00.816 "num_blocks": 65536, 00:13:00.816 "uuid": "c2c4f162-1260-11ef-99fd-bfc7c66e2865", 00:13:00.816 "assigned_rate_limits": { 00:13:00.816 "rw_ios_per_sec": 0, 00:13:00.816 "rw_mbytes_per_sec": 0, 00:13:00.816 "r_mbytes_per_sec": 0, 00:13:00.816 "w_mbytes_per_sec": 0 00:13:00.816 }, 00:13:00.816 "claimed": true, 00:13:00.816 "claim_type": "exclusive_write", 00:13:00.816 "zoned": false, 00:13:00.816 "supported_io_types": { 00:13:00.816 "read": true, 00:13:00.816 "write": true, 00:13:00.816 "unmap": true, 00:13:00.816 "write_zeroes": true, 00:13:00.816 "flush": true, 00:13:00.816 "reset": true, 00:13:00.816 "compare": false, 00:13:00.816 "compare_and_write": false, 00:13:00.816 "abort": true, 00:13:00.816 "nvme_admin": false, 00:13:00.816 "nvme_io": false 00:13:00.816 }, 00:13:00.816 "memory_domains": [ 00:13:00.816 { 00:13:00.816 "dma_device_id": "system", 00:13:00.816 "dma_device_type": 1 00:13:00.816 }, 00:13:00.816 { 00:13:00.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.816 "dma_device_type": 2 00:13:00.816 } 00:13:00.816 ], 00:13:00.816 "driver_specific": {} 00:13:00.816 }' 00:13:00.816 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:00.816 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:00.816 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:00.816 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:00.816 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:01.075 02:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:01.342 [2024-05-15 02:13:49.110230] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.342 [2024-05-15 02:13:49.110262] 
bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.342 [2024-05-15 02:13:49.110277] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.342 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.602 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:01.602 "name": "Existed_Raid", 00:13:01.602 "uuid": "c2361d68-1260-11ef-99fd-bfc7c66e2865", 00:13:01.602 "strip_size_kb": 64, 00:13:01.602 "state": "offline", 00:13:01.602 "raid_level": "raid0", 00:13:01.602 "superblock": true, 00:13:01.602 "num_base_bdevs": 2, 00:13:01.602 "num_base_bdevs_discovered": 1, 00:13:01.602 "num_base_bdevs_operational": 1, 00:13:01.602 "base_bdevs_list": [ 00:13:01.602 { 00:13:01.602 "name": null, 00:13:01.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.602 "is_configured": false, 00:13:01.602 "data_offset": 2048, 00:13:01.602 "data_size": 63488 00:13:01.602 }, 00:13:01.602 { 00:13:01.602 "name": "BaseBdev2", 00:13:01.602 "uuid": "c2c4f162-1260-11ef-99fd-bfc7c66e2865", 00:13:01.602 "is_configured": true, 00:13:01.602 "data_offset": 2048, 00:13:01.602 "data_size": 63488 00:13:01.602 } 00:13:01.602 ] 00:13:01.602 }' 00:13:01.602 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:01.602 02:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.859 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:01.859 
02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:01.859 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.859 02:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:02.118 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:02.118 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.118 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:02.376 [2024-05-15 02:13:50.327026] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.376 [2024-05-15 02:13:50.327066] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82acefa00 name Existed_Raid, state offline 00:13:02.376 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:02.376 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:02.376 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.376 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 49248 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 49248 ']' 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 49248 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 49248 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:02.634 killing process with pid 49248 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49248' 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 49248 00:13:02.634 [2024-05-15 02:13:50.595964] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.634 [2024-05-15 02:13:50.596010] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.634 02:13:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@970 -- # wait 49248 00:13:02.892 02:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:13:02.892 00:13:02.892 real 0m9.631s 00:13:02.892 user 0m16.979s 00:13:02.892 sys 0m1.559s 00:13:02.892 ************************************ 00:13:02.892 END TEST raid_state_function_test_sb 00:13:02.892 ************************************ 00:13:02.892 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.892 02:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.892 02:13:50 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:02.892 02:13:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:02.892 02:13:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.892 02:13:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.892 ************************************ 00:13:02.892 START TEST raid_superblock_test 00:13:02.892 ************************************ 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=49522 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 49522 /var/tmp/spdk-raid.sock 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 49522 ']' 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.892 02:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.892 [2024-05-15 02:13:50.795994] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:02.892 [2024-05-15 02:13:50.796230] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:03.457 EAL: TSC is not safe to use in SMP mode 00:13:03.457 EAL: TSC is not invariant 00:13:03.457 [2024-05-15 02:13:51.268494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.457 [2024-05-15 02:13:51.356186] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:03.457 [2024-05-15 02:13:51.358416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.457 [2024-05-15 02:13:51.359144] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.457 [2024-05-15 02:13:51.359151] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.021 02:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:04.279 malloc1 00:13:04.279 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:04.537 [2024-05-15 02:13:52.498308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:04.537 [2024-05-15 02:13:52.498375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.537 [2024-05-15 02:13:52.498950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b8c1780 00:13:04.537 [2024-05-15 
02:13:52.498978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.537 [2024-05-15 02:13:52.499759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.537 [2024-05-15 02:13:52.499787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:04.537 pt1 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.537 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:04.796 malloc2 00:13:04.796 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:05.054 [2024-05-15 02:13:52.954306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:05.054 [2024-05-15 02:13:52.954381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.054 [2024-05-15 02:13:52.954407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b8c1c80 00:13:05.054 [2024-05-15 02:13:52.954415] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.054 [2024-05-15 02:13:52.954921] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.054 [2024-05-15 02:13:52.954944] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:05.054 pt2 00:13:05.054 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:05.054 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:05.054 02:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:05.313 [2024-05-15 02:13:53.194308] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:05.313 [2024-05-15 02:13:53.194752] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:05.313 [2024-05-15 02:13:53.194807] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b8c1f00 00:13:05.313 [2024-05-15 02:13:53.194812] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:05.313 [2024-05-15 02:13:53.194842] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b924e20 00:13:05.313 [2024-05-15 02:13:53.194906] bdev_raid.c:1741:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x82b8c1f00 00:13:05.313 [2024-05-15 02:13:53.194910] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b8c1f00 00:13:05.313 [2024-05-15 02:13:53.194933] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.313 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.579 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:05.579 "name": "raid_bdev1", 00:13:05.579 "uuid": "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865", 00:13:05.579 "strip_size_kb": 64, 00:13:05.580 "state": "online", 00:13:05.580 "raid_level": "raid0", 00:13:05.580 "superblock": true, 00:13:05.580 "num_base_bdevs": 2, 00:13:05.580 "num_base_bdevs_discovered": 2, 00:13:05.580 "num_base_bdevs_operational": 2, 00:13:05.580 "base_bdevs_list": [ 00:13:05.580 { 00:13:05.580 "name": "pt1", 00:13:05.580 "uuid": "f0bd82b7-bd2d-035b-802d-9d13521b0a17", 00:13:05.580 "is_configured": true, 00:13:05.580 "data_offset": 2048, 00:13:05.580 "data_size": 63488 00:13:05.580 }, 00:13:05.580 { 00:13:05.580 "name": "pt2", 00:13:05.580 "uuid": "93fc10ba-36ee-ec5b-ae54-8003ee95e02b", 00:13:05.580 "is_configured": true, 00:13:05.580 "data_offset": 2048, 00:13:05.580 "data_size": 63488 00:13:05.580 } 00:13:05.580 ] 00:13:05.580 }' 00:13:05.580 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:05.580 02:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.840 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:05.840 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:05.840 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:05.840 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:05.840 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:05.840 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:05.840 02:13:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:05.840 02:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:06.100 [2024-05-15 02:13:54.010346] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.100 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:06.100 "name": "raid_bdev1", 00:13:06.100 "aliases": [ 00:13:06.100 "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865" 00:13:06.100 ], 00:13:06.100 "product_name": "Raid Volume", 00:13:06.100 "block_size": 512, 00:13:06.100 "num_blocks": 126976, 00:13:06.100 "uuid": "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865", 00:13:06.100 "assigned_rate_limits": { 00:13:06.100 "rw_ios_per_sec": 0, 00:13:06.100 "rw_mbytes_per_sec": 0, 00:13:06.100 "r_mbytes_per_sec": 0, 00:13:06.100 "w_mbytes_per_sec": 0 00:13:06.100 }, 00:13:06.100 "claimed": false, 00:13:06.100 "zoned": false, 00:13:06.100 "supported_io_types": { 00:13:06.100 "read": true, 00:13:06.100 "write": true, 00:13:06.100 "unmap": true, 00:13:06.100 "write_zeroes": true, 00:13:06.100 "flush": true, 00:13:06.100 "reset": true, 00:13:06.100 "compare": false, 00:13:06.100 "compare_and_write": false, 00:13:06.100 "abort": false, 00:13:06.100 "nvme_admin": false, 00:13:06.100 "nvme_io": false 00:13:06.100 }, 00:13:06.100 "memory_domains": [ 00:13:06.100 { 00:13:06.100 "dma_device_id": "system", 00:13:06.100 "dma_device_type": 1 00:13:06.100 }, 00:13:06.100 { 00:13:06.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.100 "dma_device_type": 2 00:13:06.100 }, 00:13:06.100 { 00:13:06.100 "dma_device_id": "system", 00:13:06.100 "dma_device_type": 1 00:13:06.100 }, 00:13:06.100 { 00:13:06.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.100 "dma_device_type": 2 00:13:06.100 } 00:13:06.100 ], 00:13:06.100 "driver_specific": { 00:13:06.100 "raid": { 00:13:06.100 "uuid": "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865", 00:13:06.100 "strip_size_kb": 64, 00:13:06.100 "state": "online", 00:13:06.100 "raid_level": "raid0", 00:13:06.100 "superblock": true, 00:13:06.100 "num_base_bdevs": 2, 00:13:06.100 "num_base_bdevs_discovered": 2, 00:13:06.100 "num_base_bdevs_operational": 2, 00:13:06.100 "base_bdevs_list": [ 00:13:06.100 { 00:13:06.100 "name": "pt1", 00:13:06.100 "uuid": "f0bd82b7-bd2d-035b-802d-9d13521b0a17", 00:13:06.100 "is_configured": true, 00:13:06.100 "data_offset": 2048, 00:13:06.100 "data_size": 63488 00:13:06.100 }, 00:13:06.100 { 00:13:06.100 "name": "pt2", 00:13:06.100 "uuid": "93fc10ba-36ee-ec5b-ae54-8003ee95e02b", 00:13:06.100 "is_configured": true, 00:13:06.100 "data_offset": 2048, 00:13:06.100 "data_size": 63488 00:13:06.100 } 00:13:06.100 ] 00:13:06.100 } 00:13:06.100 } 00:13:06.100 }' 00:13:06.100 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.100 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:06.100 pt2' 00:13:06.100 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:06.100 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:06.100 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:06.366 02:13:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:06.366 "name": "pt1", 00:13:06.366 "aliases": [ 00:13:06.366 "f0bd82b7-bd2d-035b-802d-9d13521b0a17" 00:13:06.366 ], 00:13:06.366 "product_name": "passthru", 00:13:06.366 "block_size": 512, 00:13:06.366 "num_blocks": 65536, 00:13:06.366 "uuid": "f0bd82b7-bd2d-035b-802d-9d13521b0a17", 00:13:06.366 "assigned_rate_limits": { 00:13:06.366 "rw_ios_per_sec": 0, 00:13:06.366 "rw_mbytes_per_sec": 0, 00:13:06.366 "r_mbytes_per_sec": 0, 00:13:06.366 "w_mbytes_per_sec": 0 00:13:06.366 }, 00:13:06.366 "claimed": true, 00:13:06.366 "claim_type": "exclusive_write", 00:13:06.366 "zoned": false, 00:13:06.366 "supported_io_types": { 00:13:06.366 "read": true, 00:13:06.366 "write": true, 00:13:06.366 "unmap": true, 00:13:06.366 "write_zeroes": true, 00:13:06.366 "flush": true, 00:13:06.366 "reset": true, 00:13:06.366 "compare": false, 00:13:06.366 "compare_and_write": false, 00:13:06.366 "abort": true, 00:13:06.366 "nvme_admin": false, 00:13:06.366 "nvme_io": false 00:13:06.366 }, 00:13:06.366 "memory_domains": [ 00:13:06.366 { 00:13:06.366 "dma_device_id": "system", 00:13:06.366 "dma_device_type": 1 00:13:06.366 }, 00:13:06.366 { 00:13:06.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.366 "dma_device_type": 2 00:13:06.366 } 00:13:06.366 ], 00:13:06.366 "driver_specific": { 00:13:06.366 "passthru": { 00:13:06.366 "name": "pt1", 00:13:06.366 "base_bdev_name": "malloc1" 00:13:06.366 } 00:13:06.366 } 00:13:06.366 }' 00:13:06.366 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:06.366 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:06.366 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:06.366 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:06.624 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:06.883 "name": "pt2", 00:13:06.883 "aliases": [ 00:13:06.883 "93fc10ba-36ee-ec5b-ae54-8003ee95e02b" 00:13:06.883 ], 00:13:06.883 "product_name": "passthru", 00:13:06.883 "block_size": 512, 00:13:06.883 "num_blocks": 65536, 00:13:06.883 "uuid": "93fc10ba-36ee-ec5b-ae54-8003ee95e02b", 00:13:06.883 "assigned_rate_limits": { 00:13:06.883 "rw_ios_per_sec": 0, 
00:13:06.883 "rw_mbytes_per_sec": 0, 00:13:06.883 "r_mbytes_per_sec": 0, 00:13:06.883 "w_mbytes_per_sec": 0 00:13:06.883 }, 00:13:06.883 "claimed": true, 00:13:06.883 "claim_type": "exclusive_write", 00:13:06.883 "zoned": false, 00:13:06.883 "supported_io_types": { 00:13:06.883 "read": true, 00:13:06.883 "write": true, 00:13:06.883 "unmap": true, 00:13:06.883 "write_zeroes": true, 00:13:06.883 "flush": true, 00:13:06.883 "reset": true, 00:13:06.883 "compare": false, 00:13:06.883 "compare_and_write": false, 00:13:06.883 "abort": true, 00:13:06.883 "nvme_admin": false, 00:13:06.883 "nvme_io": false 00:13:06.883 }, 00:13:06.883 "memory_domains": [ 00:13:06.883 { 00:13:06.883 "dma_device_id": "system", 00:13:06.883 "dma_device_type": 1 00:13:06.883 }, 00:13:06.883 { 00:13:06.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.883 "dma_device_type": 2 00:13:06.883 } 00:13:06.883 ], 00:13:06.883 "driver_specific": { 00:13:06.883 "passthru": { 00:13:06.883 "name": "pt2", 00:13:06.883 "base_bdev_name": "malloc2" 00:13:06.883 } 00:13:06.883 } 00:13:06.883 }' 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:06.883 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:06.884 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:06.884 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:06.884 02:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:07.142 [2024-05-15 02:13:55.014363] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.142 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c6b8ec4f-1260-11ef-99fd-bfc7c66e2865 00:13:07.142 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c6b8ec4f-1260-11ef-99fd-bfc7c66e2865 ']' 00:13:07.142 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:07.401 [2024-05-15 02:13:55.302330] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.402 [2024-05-15 02:13:55.302359] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.402 [2024-05-15 02:13:55.302390] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.402 [2024-05-15 02:13:55.302403] bdev_raid.c: 
430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.402 [2024-05-15 02:13:55.302407] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8c1f00 name raid_bdev1, state offline 00:13:07.402 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.402 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:07.661 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:07.661 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:07.661 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.661 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:07.973 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.973 02:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:08.232 02:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:08.232 02:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:08.492 02:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:08.492 02:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:08.492 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:08.492 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:08.492 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.492 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.492 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.493 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.493 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.493 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.493 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.493 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:08.493 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:08.751 [2024-05-15 02:13:56.726390] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:08.751 [2024-05-15 02:13:56.726892] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:08.751 [2024-05-15 02:13:56.726908] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:08.751 [2024-05-15 02:13:56.726948] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:08.751 [2024-05-15 02:13:56.726958] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.751 [2024-05-15 02:13:56.726962] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8c1c80 name raid_bdev1, state configuring 00:13:08.751 request: 00:13:08.751 { 00:13:08.751 "name": "raid_bdev1", 00:13:08.751 "raid_level": "raid0", 00:13:08.751 "base_bdevs": [ 00:13:08.751 "malloc1", 00:13:08.751 "malloc2" 00:13:08.751 ], 00:13:08.751 "superblock": false, 00:13:08.751 "strip_size_kb": 64, 00:13:08.751 "method": "bdev_raid_create", 00:13:08.751 "req_id": 1 00:13:08.751 } 00:13:08.751 Got JSON-RPC error response 00:13:08.751 response: 00:13:08.751 { 00:13:08.751 "code": -17, 00:13:08.751 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:08.751 } 00:13:09.010 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:09.010 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:09.010 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:09.010 02:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:09.010 02:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:09.010 02:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.270 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:09.270 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:09.270 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:09.529 [2024-05-15 02:13:57.282419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:09.529 [2024-05-15 02:13:57.282480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.529 [2024-05-15 02:13:57.282510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b8c1780 00:13:09.529 [2024-05-15 02:13:57.282518] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.529 [2024-05-15 02:13:57.283025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.529 [2024-05-15 02:13:57.283053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:09.529 [2024-05-15 02:13:57.283076] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:09.529 [2024-05-15 02:13:57.283087] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:13:09.529 pt1 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.529 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.787 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:09.787 "name": "raid_bdev1", 00:13:09.787 "uuid": "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865", 00:13:09.787 "strip_size_kb": 64, 00:13:09.787 "state": "configuring", 00:13:09.787 "raid_level": "raid0", 00:13:09.787 "superblock": true, 00:13:09.787 "num_base_bdevs": 2, 00:13:09.787 "num_base_bdevs_discovered": 1, 00:13:09.787 "num_base_bdevs_operational": 2, 00:13:09.787 "base_bdevs_list": [ 00:13:09.787 { 00:13:09.787 "name": "pt1", 00:13:09.787 "uuid": "f0bd82b7-bd2d-035b-802d-9d13521b0a17", 00:13:09.787 "is_configured": true, 00:13:09.787 "data_offset": 2048, 00:13:09.787 "data_size": 63488 00:13:09.787 }, 00:13:09.787 { 00:13:09.787 "name": null, 00:13:09.787 "uuid": "93fc10ba-36ee-ec5b-ae54-8003ee95e02b", 00:13:09.787 "is_configured": false, 00:13:09.787 "data_offset": 2048, 00:13:09.787 "data_size": 63488 00:13:09.787 } 00:13:09.787 ] 00:13:09.787 }' 00:13:09.787 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:09.788 02:13:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.046 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:10.046 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:10.046 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:10.046 02:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.305 [2024-05-15 02:13:58.130462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.305 [2024-05-15 02:13:58.130525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.305 [2024-05-15 02:13:58.130554] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b8c1f00 00:13:10.305 
[2024-05-15 02:13:58.130562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.305 [2024-05-15 02:13:58.130657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.305 [2024-05-15 02:13:58.130666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.305 [2024-05-15 02:13:58.130686] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:10.305 [2024-05-15 02:13:58.130693] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.305 [2024-05-15 02:13:58.130716] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b8c2180 00:13:10.305 [2024-05-15 02:13:58.130719] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:10.305 [2024-05-15 02:13:58.130736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b924e20 00:13:10.305 [2024-05-15 02:13:58.130776] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b8c2180 00:13:10.305 [2024-05-15 02:13:58.130780] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b8c2180 00:13:10.305 [2024-05-15 02:13:58.130798] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.305 pt2 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.305 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.564 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:10.564 "name": "raid_bdev1", 00:13:10.564 "uuid": "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865", 00:13:10.564 "strip_size_kb": 64, 00:13:10.564 "state": "online", 00:13:10.564 "raid_level": "raid0", 00:13:10.564 "superblock": true, 00:13:10.564 "num_base_bdevs": 2, 00:13:10.564 "num_base_bdevs_discovered": 2, 00:13:10.564 "num_base_bdevs_operational": 2, 00:13:10.564 "base_bdevs_list": [ 00:13:10.564 { 00:13:10.564 "name": "pt1", 00:13:10.564 "uuid": 
"f0bd82b7-bd2d-035b-802d-9d13521b0a17", 00:13:10.564 "is_configured": true, 00:13:10.564 "data_offset": 2048, 00:13:10.564 "data_size": 63488 00:13:10.564 }, 00:13:10.564 { 00:13:10.564 "name": "pt2", 00:13:10.564 "uuid": "93fc10ba-36ee-ec5b-ae54-8003ee95e02b", 00:13:10.564 "is_configured": true, 00:13:10.564 "data_offset": 2048, 00:13:10.564 "data_size": 63488 00:13:10.564 } 00:13:10.564 ] 00:13:10.564 }' 00:13:10.564 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:10.564 02:13:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:10.822 02:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:11.082 [2024-05-15 02:13:59.034516] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.082 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:11.082 "name": "raid_bdev1", 00:13:11.082 "aliases": [ 00:13:11.082 "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865" 00:13:11.082 ], 00:13:11.082 "product_name": "Raid Volume", 00:13:11.082 "block_size": 512, 00:13:11.082 "num_blocks": 126976, 00:13:11.082 "uuid": "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865", 00:13:11.082 "assigned_rate_limits": { 00:13:11.082 "rw_ios_per_sec": 0, 00:13:11.082 "rw_mbytes_per_sec": 0, 00:13:11.082 "r_mbytes_per_sec": 0, 00:13:11.082 "w_mbytes_per_sec": 0 00:13:11.082 }, 00:13:11.082 "claimed": false, 00:13:11.082 "zoned": false, 00:13:11.082 "supported_io_types": { 00:13:11.082 "read": true, 00:13:11.082 "write": true, 00:13:11.082 "unmap": true, 00:13:11.082 "write_zeroes": true, 00:13:11.082 "flush": true, 00:13:11.082 "reset": true, 00:13:11.082 "compare": false, 00:13:11.082 "compare_and_write": false, 00:13:11.082 "abort": false, 00:13:11.082 "nvme_admin": false, 00:13:11.082 "nvme_io": false 00:13:11.082 }, 00:13:11.082 "memory_domains": [ 00:13:11.082 { 00:13:11.082 "dma_device_id": "system", 00:13:11.082 "dma_device_type": 1 00:13:11.082 }, 00:13:11.082 { 00:13:11.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.082 "dma_device_type": 2 00:13:11.082 }, 00:13:11.082 { 00:13:11.082 "dma_device_id": "system", 00:13:11.082 "dma_device_type": 1 00:13:11.082 }, 00:13:11.082 { 00:13:11.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.082 "dma_device_type": 2 00:13:11.082 } 00:13:11.082 ], 00:13:11.082 "driver_specific": { 00:13:11.082 "raid": { 00:13:11.082 "uuid": "c6b8ec4f-1260-11ef-99fd-bfc7c66e2865", 00:13:11.082 "strip_size_kb": 64, 00:13:11.082 "state": "online", 00:13:11.082 "raid_level": "raid0", 00:13:11.082 "superblock": true, 00:13:11.082 "num_base_bdevs": 2, 00:13:11.082 "num_base_bdevs_discovered": 2, 00:13:11.082 
"num_base_bdevs_operational": 2, 00:13:11.082 "base_bdevs_list": [ 00:13:11.082 { 00:13:11.082 "name": "pt1", 00:13:11.082 "uuid": "f0bd82b7-bd2d-035b-802d-9d13521b0a17", 00:13:11.082 "is_configured": true, 00:13:11.082 "data_offset": 2048, 00:13:11.082 "data_size": 63488 00:13:11.082 }, 00:13:11.082 { 00:13:11.082 "name": "pt2", 00:13:11.082 "uuid": "93fc10ba-36ee-ec5b-ae54-8003ee95e02b", 00:13:11.082 "is_configured": true, 00:13:11.082 "data_offset": 2048, 00:13:11.082 "data_size": 63488 00:13:11.082 } 00:13:11.082 ] 00:13:11.082 } 00:13:11.082 } 00:13:11.082 }' 00:13:11.082 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.082 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:11.082 pt2' 00:13:11.082 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:11.082 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:11.082 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:11.649 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:11.649 "name": "pt1", 00:13:11.649 "aliases": [ 00:13:11.650 "f0bd82b7-bd2d-035b-802d-9d13521b0a17" 00:13:11.650 ], 00:13:11.650 "product_name": "passthru", 00:13:11.650 "block_size": 512, 00:13:11.650 "num_blocks": 65536, 00:13:11.650 "uuid": "f0bd82b7-bd2d-035b-802d-9d13521b0a17", 00:13:11.650 "assigned_rate_limits": { 00:13:11.650 "rw_ios_per_sec": 0, 00:13:11.650 "rw_mbytes_per_sec": 0, 00:13:11.650 "r_mbytes_per_sec": 0, 00:13:11.650 "w_mbytes_per_sec": 0 00:13:11.650 }, 00:13:11.650 "claimed": true, 00:13:11.650 "claim_type": "exclusive_write", 00:13:11.650 "zoned": false, 00:13:11.650 "supported_io_types": { 00:13:11.650 "read": true, 00:13:11.650 "write": true, 00:13:11.650 "unmap": true, 00:13:11.650 "write_zeroes": true, 00:13:11.650 "flush": true, 00:13:11.650 "reset": true, 00:13:11.650 "compare": false, 00:13:11.650 "compare_and_write": false, 00:13:11.650 "abort": true, 00:13:11.650 "nvme_admin": false, 00:13:11.650 "nvme_io": false 00:13:11.650 }, 00:13:11.650 "memory_domains": [ 00:13:11.650 { 00:13:11.650 "dma_device_id": "system", 00:13:11.650 "dma_device_type": 1 00:13:11.650 }, 00:13:11.650 { 00:13:11.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.650 "dma_device_type": 2 00:13:11.650 } 00:13:11.650 ], 00:13:11.650 "driver_specific": { 00:13:11.650 "passthru": { 00:13:11.650 "name": "pt1", 00:13:11.650 "base_bdev_name": "malloc1" 00:13:11.650 } 00:13:11.650 } 00:13:11.650 }' 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.md_interleave 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:11.650 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:11.911 "name": "pt2", 00:13:11.911 "aliases": [ 00:13:11.911 "93fc10ba-36ee-ec5b-ae54-8003ee95e02b" 00:13:11.911 ], 00:13:11.911 "product_name": "passthru", 00:13:11.911 "block_size": 512, 00:13:11.911 "num_blocks": 65536, 00:13:11.911 "uuid": "93fc10ba-36ee-ec5b-ae54-8003ee95e02b", 00:13:11.911 "assigned_rate_limits": { 00:13:11.911 "rw_ios_per_sec": 0, 00:13:11.911 "rw_mbytes_per_sec": 0, 00:13:11.911 "r_mbytes_per_sec": 0, 00:13:11.911 "w_mbytes_per_sec": 0 00:13:11.911 }, 00:13:11.911 "claimed": true, 00:13:11.911 "claim_type": "exclusive_write", 00:13:11.911 "zoned": false, 00:13:11.911 "supported_io_types": { 00:13:11.911 "read": true, 00:13:11.911 "write": true, 00:13:11.911 "unmap": true, 00:13:11.911 "write_zeroes": true, 00:13:11.911 "flush": true, 00:13:11.911 "reset": true, 00:13:11.911 "compare": false, 00:13:11.911 "compare_and_write": false, 00:13:11.911 "abort": true, 00:13:11.911 "nvme_admin": false, 00:13:11.911 "nvme_io": false 00:13:11.911 }, 00:13:11.911 "memory_domains": [ 00:13:11.911 { 00:13:11.911 "dma_device_id": "system", 00:13:11.911 "dma_device_type": 1 00:13:11.911 }, 00:13:11.911 { 00:13:11.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.911 "dma_device_type": 2 00:13:11.911 } 00:13:11.911 ], 00:13:11.911 "driver_specific": { 00:13:11.911 "passthru": { 00:13:11.911 "name": "pt2", 00:13:11.911 "base_bdev_name": "malloc2" 00:13:11.911 } 00:13:11.911 } 00:13:11.911 }' 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:11.911 02:13:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:11.911 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:12.170 [2024-05-15 02:13:59.950522] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c6b8ec4f-1260-11ef-99fd-bfc7c66e2865 '!=' c6b8ec4f-1260-11ef-99fd-bfc7c66e2865 ']' 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 49522 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 49522 ']' 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 49522 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 49522 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:12.170 killing process with pid 49522 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49522' 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 49522 00:13:12.170 [2024-05-15 02:13:59.982305] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.170 [2024-05-15 02:13:59.982338] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.170 [2024-05-15 02:13:59.982352] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.170 [2024-05-15 02:13:59.982356] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8c2180 name raid_bdev1, state offline 00:13:12.170 02:13:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 49522 00:13:12.170 [2024-05-15 02:13:59.992069] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.170 02:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:12.170 00:13:12.170 real 0m9.353s 00:13:12.170 user 0m16.414s 00:13:12.170 sys 0m1.565s 00:13:12.170 ************************************ 00:13:12.170 END TEST raid_superblock_test 00:13:12.170 ************************************ 00:13:12.170 02:14:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:12.170 02:14:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.429 02:14:00 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:13:12.429 02:14:00 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test 
raid_state_function_test concat 2 false 00:13:12.429 02:14:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:12.429 02:14:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:12.429 02:14:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.429 ************************************ 00:13:12.429 START TEST raid_state_function_test 00:13:12.429 ************************************ 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=49789 00:13:12.429 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 49789' 00:13:12.429 Process raid pid: 49789 00:13:12.430 02:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:12.430 02:14:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 49789 /var/tmp/spdk-raid.sock 00:13:12.430 02:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 49789 ']' 00:13:12.430 02:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:12.430 02:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:12.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:12.430 02:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:12.430 02:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:12.430 02:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.430 [2024-05-15 02:14:00.194385] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:12.430 [2024-05-15 02:14:00.194611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:12.689 EAL: TSC is not safe to use in SMP mode 00:13:12.689 EAL: TSC is not invariant 00:13:12.689 [2024-05-15 02:14:00.682602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.948 [2024-05-15 02:14:00.783315] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:12.948 [2024-05-15 02:14:00.786087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.948 [2024-05-15 02:14:00.787097] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.948 [2024-05-15 02:14:00.787117] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.207 02:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:13.207 02:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:13.207 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:13.466 [2024-05-15 02:14:01.412228] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.466 [2024-05-15 02:14:01.412285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.466 [2024-05-15 02:14:01.412291] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.466 [2024-05-15 02:14:01.412299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.466 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.725 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:13.725 "name": "Existed_Raid", 00:13:13.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.725 "strip_size_kb": 64, 00:13:13.725 "state": "configuring", 00:13:13.725 "raid_level": "concat", 00:13:13.725 "superblock": false, 00:13:13.725 "num_base_bdevs": 2, 00:13:13.725 "num_base_bdevs_discovered": 0, 00:13:13.725 "num_base_bdevs_operational": 2, 00:13:13.725 "base_bdevs_list": [ 00:13:13.725 { 00:13:13.725 "name": "BaseBdev1", 00:13:13.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.725 "is_configured": false, 00:13:13.725 "data_offset": 0, 00:13:13.725 "data_size": 0 00:13:13.725 }, 00:13:13.725 { 00:13:13.725 "name": "BaseBdev2", 00:13:13.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.725 "is_configured": false, 00:13:13.725 "data_offset": 0, 00:13:13.725 "data_size": 0 00:13:13.725 } 00:13:13.725 ] 00:13:13.725 }' 00:13:13.725 02:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:13.725 02:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.292 02:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:14.550 [2024-05-15 02:14:02.404245] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.550 [2024-05-15 02:14:02.404280] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c26e500 name Existed_Raid, state configuring 00:13:14.550 02:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:14.808 [2024-05-15 02:14:02.700257] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.808 [2024-05-15 02:14:02.700335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.808 [2024-05-15 02:14:02.700340] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.808 [2024-05-15 02:14:02.700349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.808 02:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.125 [2024-05-15 02:14:02.985297] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:13:15.125 BaseBdev1 00:13:15.125 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:15.125 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:15.125 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:15.125 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:15.125 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:15.125 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:15.126 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:15.383 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.640 [ 00:13:15.640 { 00:13:15.640 "name": "BaseBdev1", 00:13:15.640 "aliases": [ 00:13:15.640 "cc8ec168-1260-11ef-99fd-bfc7c66e2865" 00:13:15.640 ], 00:13:15.640 "product_name": "Malloc disk", 00:13:15.640 "block_size": 512, 00:13:15.640 "num_blocks": 65536, 00:13:15.640 "uuid": "cc8ec168-1260-11ef-99fd-bfc7c66e2865", 00:13:15.640 "assigned_rate_limits": { 00:13:15.640 "rw_ios_per_sec": 0, 00:13:15.640 "rw_mbytes_per_sec": 0, 00:13:15.640 "r_mbytes_per_sec": 0, 00:13:15.640 "w_mbytes_per_sec": 0 00:13:15.640 }, 00:13:15.640 "claimed": true, 00:13:15.640 "claim_type": "exclusive_write", 00:13:15.640 "zoned": false, 00:13:15.640 "supported_io_types": { 00:13:15.640 "read": true, 00:13:15.640 "write": true, 00:13:15.640 "unmap": true, 00:13:15.640 "write_zeroes": true, 00:13:15.640 "flush": true, 00:13:15.640 "reset": true, 00:13:15.640 "compare": false, 00:13:15.640 "compare_and_write": false, 00:13:15.640 "abort": true, 00:13:15.640 "nvme_admin": false, 00:13:15.640 "nvme_io": false 00:13:15.640 }, 00:13:15.640 "memory_domains": [ 00:13:15.640 { 00:13:15.640 "dma_device_id": "system", 00:13:15.640 "dma_device_type": 1 00:13:15.640 }, 00:13:15.640 { 00:13:15.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.640 "dma_device_type": 2 00:13:15.640 } 00:13:15.640 ], 00:13:15.640 "driver_specific": {} 00:13:15.640 } 00:13:15.640 ] 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.640 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.897 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:15.897 "name": "Existed_Raid", 00:13:15.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.897 "strip_size_kb": 64, 00:13:15.897 "state": "configuring", 00:13:15.897 "raid_level": "concat", 00:13:15.897 "superblock": false, 00:13:15.897 "num_base_bdevs": 2, 00:13:15.897 "num_base_bdevs_discovered": 1, 00:13:15.897 "num_base_bdevs_operational": 2, 00:13:15.897 "base_bdevs_list": [ 00:13:15.897 { 00:13:15.897 "name": "BaseBdev1", 00:13:15.897 "uuid": "cc8ec168-1260-11ef-99fd-bfc7c66e2865", 00:13:15.897 "is_configured": true, 00:13:15.897 "data_offset": 0, 00:13:15.897 "data_size": 65536 00:13:15.897 }, 00:13:15.897 { 00:13:15.897 "name": "BaseBdev2", 00:13:15.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.897 "is_configured": false, 00:13:15.897 "data_offset": 0, 00:13:15.897 "data_size": 0 00:13:15.897 } 00:13:15.897 ] 00:13:15.897 }' 00:13:15.897 02:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:15.897 02:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.463 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:16.722 [2024-05-15 02:14:04.472295] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.722 [2024-05-15 02:14:04.472335] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c26e500 name Existed_Raid, state configuring 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:16.722 [2024-05-15 02:14:04.700301] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.722 [2024-05-15 02:14:04.701059] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.722 [2024-05-15 02:14:04.701108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:16.722 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:16.980 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.980 02:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.239 02:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:17.239 "name": "Existed_Raid", 00:13:17.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.239 "strip_size_kb": 64, 00:13:17.239 "state": "configuring", 00:13:17.239 "raid_level": "concat", 00:13:17.239 "superblock": false, 00:13:17.239 "num_base_bdevs": 2, 00:13:17.239 "num_base_bdevs_discovered": 1, 00:13:17.239 "num_base_bdevs_operational": 2, 00:13:17.239 "base_bdevs_list": [ 00:13:17.239 { 00:13:17.239 "name": "BaseBdev1", 00:13:17.239 "uuid": "cc8ec168-1260-11ef-99fd-bfc7c66e2865", 00:13:17.239 "is_configured": true, 00:13:17.239 "data_offset": 0, 00:13:17.239 "data_size": 65536 00:13:17.239 }, 00:13:17.239 { 00:13:17.239 "name": "BaseBdev2", 00:13:17.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.239 "is_configured": false, 00:13:17.239 "data_offset": 0, 00:13:17.239 "data_size": 0 00:13:17.239 } 00:13:17.239 ] 00:13:17.239 }' 00:13:17.239 02:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:17.239 02:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.497 02:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.755 [2024-05-15 02:14:05.528451] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.755 [2024-05-15 02:14:05.528481] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c26ea00 00:13:17.755 [2024-05-15 02:14:05.528485] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:17.755 [2024-05-15 02:14:05.528506] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c2d1ec0 00:13:17.755 [2024-05-15 02:14:05.528589] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c26ea00 00:13:17.755 [2024-05-15 02:14:05.528593] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c26ea00 00:13:17.755 [2024-05-15 02:14:05.528646] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.755 BaseBdev2 00:13:17.755 02:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:17.755 02:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:17.755 02:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:17.755 02:14:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:17.755 02:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:17.755 02:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:17.755 02:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:18.013 02:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:18.271 [ 00:13:18.271 { 00:13:18.271 "name": "BaseBdev2", 00:13:18.271 "aliases": [ 00:13:18.271 "ce12f252-1260-11ef-99fd-bfc7c66e2865" 00:13:18.271 ], 00:13:18.271 "product_name": "Malloc disk", 00:13:18.271 "block_size": 512, 00:13:18.271 "num_blocks": 65536, 00:13:18.271 "uuid": "ce12f252-1260-11ef-99fd-bfc7c66e2865", 00:13:18.271 "assigned_rate_limits": { 00:13:18.271 "rw_ios_per_sec": 0, 00:13:18.271 "rw_mbytes_per_sec": 0, 00:13:18.271 "r_mbytes_per_sec": 0, 00:13:18.271 "w_mbytes_per_sec": 0 00:13:18.271 }, 00:13:18.271 "claimed": true, 00:13:18.271 "claim_type": "exclusive_write", 00:13:18.271 "zoned": false, 00:13:18.271 "supported_io_types": { 00:13:18.271 "read": true, 00:13:18.271 "write": true, 00:13:18.271 "unmap": true, 00:13:18.271 "write_zeroes": true, 00:13:18.271 "flush": true, 00:13:18.271 "reset": true, 00:13:18.271 "compare": false, 00:13:18.271 "compare_and_write": false, 00:13:18.271 "abort": true, 00:13:18.271 "nvme_admin": false, 00:13:18.271 "nvme_io": false 00:13:18.271 }, 00:13:18.271 "memory_domains": [ 00:13:18.271 { 00:13:18.271 "dma_device_id": "system", 00:13:18.271 "dma_device_type": 1 00:13:18.271 }, 00:13:18.271 { 00:13:18.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.271 "dma_device_type": 2 00:13:18.271 } 00:13:18.271 ], 00:13:18.271 "driver_specific": {} 00:13:18.271 } 00:13:18.271 ] 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:18.271 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:18.272 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:18.272 02:14:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.272 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.529 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:18.529 "name": "Existed_Raid", 00:13:18.529 "uuid": "ce12f8ae-1260-11ef-99fd-bfc7c66e2865", 00:13:18.529 "strip_size_kb": 64, 00:13:18.529 "state": "online", 00:13:18.529 "raid_level": "concat", 00:13:18.529 "superblock": false, 00:13:18.529 "num_base_bdevs": 2, 00:13:18.529 "num_base_bdevs_discovered": 2, 00:13:18.529 "num_base_bdevs_operational": 2, 00:13:18.529 "base_bdevs_list": [ 00:13:18.529 { 00:13:18.529 "name": "BaseBdev1", 00:13:18.529 "uuid": "cc8ec168-1260-11ef-99fd-bfc7c66e2865", 00:13:18.529 "is_configured": true, 00:13:18.529 "data_offset": 0, 00:13:18.529 "data_size": 65536 00:13:18.529 }, 00:13:18.529 { 00:13:18.529 "name": "BaseBdev2", 00:13:18.529 "uuid": "ce12f252-1260-11ef-99fd-bfc7c66e2865", 00:13:18.529 "is_configured": true, 00:13:18.529 "data_offset": 0, 00:13:18.529 "data_size": 65536 00:13:18.529 } 00:13:18.529 ] 00:13:18.529 }' 00:13:18.530 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:18.530 02:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:18.788 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:19.046 [2024-05-15 02:14:06.924389] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.046 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:19.046 "name": "Existed_Raid", 00:13:19.046 "aliases": [ 00:13:19.046 "ce12f8ae-1260-11ef-99fd-bfc7c66e2865" 00:13:19.046 ], 00:13:19.046 "product_name": "Raid Volume", 00:13:19.046 "block_size": 512, 00:13:19.046 "num_blocks": 131072, 00:13:19.046 "uuid": "ce12f8ae-1260-11ef-99fd-bfc7c66e2865", 00:13:19.046 "assigned_rate_limits": { 00:13:19.046 "rw_ios_per_sec": 0, 00:13:19.046 "rw_mbytes_per_sec": 0, 00:13:19.046 "r_mbytes_per_sec": 0, 00:13:19.046 "w_mbytes_per_sec": 0 00:13:19.046 }, 00:13:19.046 "claimed": false, 00:13:19.046 "zoned": false, 00:13:19.046 "supported_io_types": { 00:13:19.046 "read": true, 00:13:19.046 "write": true, 00:13:19.046 "unmap": true, 00:13:19.046 "write_zeroes": true, 00:13:19.046 "flush": true, 00:13:19.046 "reset": true, 00:13:19.046 "compare": false, 00:13:19.046 "compare_and_write": false, 00:13:19.046 "abort": false, 00:13:19.046 
"nvme_admin": false, 00:13:19.046 "nvme_io": false 00:13:19.046 }, 00:13:19.046 "memory_domains": [ 00:13:19.046 { 00:13:19.046 "dma_device_id": "system", 00:13:19.046 "dma_device_type": 1 00:13:19.046 }, 00:13:19.046 { 00:13:19.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.046 "dma_device_type": 2 00:13:19.046 }, 00:13:19.046 { 00:13:19.046 "dma_device_id": "system", 00:13:19.046 "dma_device_type": 1 00:13:19.046 }, 00:13:19.046 { 00:13:19.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.047 "dma_device_type": 2 00:13:19.047 } 00:13:19.047 ], 00:13:19.047 "driver_specific": { 00:13:19.047 "raid": { 00:13:19.047 "uuid": "ce12f8ae-1260-11ef-99fd-bfc7c66e2865", 00:13:19.047 "strip_size_kb": 64, 00:13:19.047 "state": "online", 00:13:19.047 "raid_level": "concat", 00:13:19.047 "superblock": false, 00:13:19.047 "num_base_bdevs": 2, 00:13:19.047 "num_base_bdevs_discovered": 2, 00:13:19.047 "num_base_bdevs_operational": 2, 00:13:19.047 "base_bdevs_list": [ 00:13:19.047 { 00:13:19.047 "name": "BaseBdev1", 00:13:19.047 "uuid": "cc8ec168-1260-11ef-99fd-bfc7c66e2865", 00:13:19.047 "is_configured": true, 00:13:19.047 "data_offset": 0, 00:13:19.047 "data_size": 65536 00:13:19.047 }, 00:13:19.047 { 00:13:19.047 "name": "BaseBdev2", 00:13:19.047 "uuid": "ce12f252-1260-11ef-99fd-bfc7c66e2865", 00:13:19.047 "is_configured": true, 00:13:19.047 "data_offset": 0, 00:13:19.047 "data_size": 65536 00:13:19.047 } 00:13:19.047 ] 00:13:19.047 } 00:13:19.047 } 00:13:19.047 }' 00:13:19.047 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.047 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:19.047 BaseBdev2' 00:13:19.047 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:19.047 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:19.047 02:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:19.305 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:19.305 "name": "BaseBdev1", 00:13:19.305 "aliases": [ 00:13:19.305 "cc8ec168-1260-11ef-99fd-bfc7c66e2865" 00:13:19.305 ], 00:13:19.305 "product_name": "Malloc disk", 00:13:19.305 "block_size": 512, 00:13:19.305 "num_blocks": 65536, 00:13:19.305 "uuid": "cc8ec168-1260-11ef-99fd-bfc7c66e2865", 00:13:19.305 "assigned_rate_limits": { 00:13:19.305 "rw_ios_per_sec": 0, 00:13:19.305 "rw_mbytes_per_sec": 0, 00:13:19.305 "r_mbytes_per_sec": 0, 00:13:19.305 "w_mbytes_per_sec": 0 00:13:19.305 }, 00:13:19.305 "claimed": true, 00:13:19.305 "claim_type": "exclusive_write", 00:13:19.305 "zoned": false, 00:13:19.305 "supported_io_types": { 00:13:19.305 "read": true, 00:13:19.305 "write": true, 00:13:19.305 "unmap": true, 00:13:19.305 "write_zeroes": true, 00:13:19.305 "flush": true, 00:13:19.305 "reset": true, 00:13:19.305 "compare": false, 00:13:19.305 "compare_and_write": false, 00:13:19.305 "abort": true, 00:13:19.305 "nvme_admin": false, 00:13:19.305 "nvme_io": false 00:13:19.305 }, 00:13:19.305 "memory_domains": [ 00:13:19.305 { 00:13:19.305 "dma_device_id": "system", 00:13:19.305 "dma_device_type": 1 00:13:19.305 }, 00:13:19.305 { 00:13:19.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.305 "dma_device_type": 2 00:13:19.305 
} 00:13:19.305 ], 00:13:19.305 "driver_specific": {} 00:13:19.305 }' 00:13:19.305 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:19.305 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:19.305 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:19.305 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:19.306 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:19.565 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:19.565 "name": "BaseBdev2", 00:13:19.565 "aliases": [ 00:13:19.565 "ce12f252-1260-11ef-99fd-bfc7c66e2865" 00:13:19.565 ], 00:13:19.565 "product_name": "Malloc disk", 00:13:19.565 "block_size": 512, 00:13:19.565 "num_blocks": 65536, 00:13:19.565 "uuid": "ce12f252-1260-11ef-99fd-bfc7c66e2865", 00:13:19.565 "assigned_rate_limits": { 00:13:19.565 "rw_ios_per_sec": 0, 00:13:19.565 "rw_mbytes_per_sec": 0, 00:13:19.565 "r_mbytes_per_sec": 0, 00:13:19.565 "w_mbytes_per_sec": 0 00:13:19.565 }, 00:13:19.565 "claimed": true, 00:13:19.565 "claim_type": "exclusive_write", 00:13:19.565 "zoned": false, 00:13:19.565 "supported_io_types": { 00:13:19.565 "read": true, 00:13:19.565 "write": true, 00:13:19.565 "unmap": true, 00:13:19.565 "write_zeroes": true, 00:13:19.565 "flush": true, 00:13:19.565 "reset": true, 00:13:19.565 "compare": false, 00:13:19.565 "compare_and_write": false, 00:13:19.565 "abort": true, 00:13:19.565 "nvme_admin": false, 00:13:19.565 "nvme_io": false 00:13:19.565 }, 00:13:19.565 "memory_domains": [ 00:13:19.565 { 00:13:19.565 "dma_device_id": "system", 00:13:19.565 "dma_device_type": 1 00:13:19.565 }, 00:13:19.565 { 00:13:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.565 "dma_device_type": 2 00:13:19.565 } 00:13:19.565 ], 00:13:19.565 "driver_specific": {} 00:13:19.565 }' 00:13:19.565 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:19.565 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:19.824 02:14:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:19.824 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:20.083 [2024-05-15 02:14:07.900390] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.083 [2024-05-15 02:14:07.900422] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.083 [2024-05-15 02:14:07.900437] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.083 02:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.342 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.342 "name": "Existed_Raid", 00:13:20.342 "uuid": "ce12f8ae-1260-11ef-99fd-bfc7c66e2865", 
00:13:20.342 "strip_size_kb": 64, 00:13:20.342 "state": "offline", 00:13:20.342 "raid_level": "concat", 00:13:20.342 "superblock": false, 00:13:20.342 "num_base_bdevs": 2, 00:13:20.342 "num_base_bdevs_discovered": 1, 00:13:20.342 "num_base_bdevs_operational": 1, 00:13:20.342 "base_bdevs_list": [ 00:13:20.342 { 00:13:20.342 "name": null, 00:13:20.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.342 "is_configured": false, 00:13:20.342 "data_offset": 0, 00:13:20.342 "data_size": 65536 00:13:20.342 }, 00:13:20.342 { 00:13:20.342 "name": "BaseBdev2", 00:13:20.342 "uuid": "ce12f252-1260-11ef-99fd-bfc7c66e2865", 00:13:20.342 "is_configured": true, 00:13:20.342 "data_offset": 0, 00:13:20.342 "data_size": 65536 00:13:20.342 } 00:13:20.342 ] 00:13:20.342 }' 00:13:20.342 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.342 02:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.600 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:20.600 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:20.600 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:20.600 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.859 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:20.859 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.859 02:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:21.117 [2024-05-15 02:14:09.045369] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.117 [2024-05-15 02:14:09.045416] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c26ea00 name Existed_Raid, state offline 00:13:21.117 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.117 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.117 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.117 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 49789 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 49789 ']' 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 49789 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:21.376 02:14:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 49789 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:21.376 killing process with pid 49789 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49789' 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 49789 00:13:21.376 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 49789 00:13:21.376 [2024-05-15 02:14:09.376615] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.634 [2024-05-15 02:14:09.376664] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.634 02:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:13:21.634 00:13:21.634 real 0m9.353s 00:13:21.634 user 0m16.483s 00:13:21.634 sys 0m1.504s 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.635 ************************************ 00:13:21.635 END TEST raid_state_function_test 00:13:21.635 ************************************ 00:13:21.635 02:14:09 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:21.635 02:14:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:21.635 02:14:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.635 02:14:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.635 ************************************ 00:13:21.635 START TEST raid_state_function_test_sb 00:13:21.635 ************************************ 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= 
num_base_bdevs )) 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=50064 00:13:21.635 Process raid pid: 50064 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50064' 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 50064 /var/tmp/spdk-raid.sock 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 50064 ']' 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:21.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:21.635 02:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.635 [2024-05-15 02:14:09.581269] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:21.635 [2024-05-15 02:14:09.581429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:22.229 EAL: TSC is not safe to use in SMP mode 00:13:22.229 EAL: TSC is not invariant 00:13:22.229 [2024-05-15 02:14:10.046539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.229 [2024-05-15 02:14:10.132239] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:22.229 [2024-05-15 02:14:10.134482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.229 [2024-05-15 02:14:10.135206] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.229 [2024-05-15 02:14:10.135222] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.795 02:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:22.795 02:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:13:22.795 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:23.053 [2024-05-15 02:14:10.870732] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.053 [2024-05-15 02:14:10.870801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.053 [2024-05-15 02:14:10.870806] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.053 [2024-05-15 02:14:10.870815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.053 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:23.053 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.053 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.054 02:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.312 02:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.312 "name": "Existed_Raid", 00:13:23.312 "uuid": "d14221b1-1260-11ef-99fd-bfc7c66e2865", 00:13:23.312 "strip_size_kb": 64, 00:13:23.312 "state": "configuring", 00:13:23.312 "raid_level": "concat", 00:13:23.312 "superblock": true, 00:13:23.312 "num_base_bdevs": 2, 00:13:23.312 "num_base_bdevs_discovered": 0, 00:13:23.312 "num_base_bdevs_operational": 2, 00:13:23.312 "base_bdevs_list": [ 00:13:23.312 { 00:13:23.312 "name": "BaseBdev1", 00:13:23.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.312 "is_configured": false, 00:13:23.312 "data_offset": 0, 00:13:23.312 "data_size": 0 
00:13:23.312 }, 00:13:23.312 { 00:13:23.312 "name": "BaseBdev2", 00:13:23.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.312 "is_configured": false, 00:13:23.312 "data_offset": 0, 00:13:23.312 "data_size": 0 00:13:23.312 } 00:13:23.312 ] 00:13:23.312 }' 00:13:23.312 02:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.312 02:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.878 02:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:23.878 [2024-05-15 02:14:11.855147] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.878 [2024-05-15 02:14:11.855178] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b392500 name Existed_Raid, state configuring 00:13:23.878 02:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:24.137 [2024-05-15 02:14:12.107281] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:24.137 [2024-05-15 02:14:12.107356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:24.137 [2024-05-15 02:14:12.107362] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.137 [2024-05-15 02:14:12.107371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.137 02:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:24.705 [2024-05-15 02:14:12.400389] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.705 BaseBdev1 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.705 02:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:25.271 [ 00:13:25.271 { 00:13:25.271 "name": "BaseBdev1", 00:13:25.271 "aliases": [ 00:13:25.271 "d22b6441-1260-11ef-99fd-bfc7c66e2865" 00:13:25.271 ], 00:13:25.271 "product_name": "Malloc disk", 00:13:25.271 "block_size": 512, 00:13:25.271 "num_blocks": 65536, 00:13:25.271 "uuid": "d22b6441-1260-11ef-99fd-bfc7c66e2865", 00:13:25.271 "assigned_rate_limits": { 00:13:25.271 "rw_ios_per_sec": 0, 
00:13:25.271 "rw_mbytes_per_sec": 0, 00:13:25.271 "r_mbytes_per_sec": 0, 00:13:25.271 "w_mbytes_per_sec": 0 00:13:25.271 }, 00:13:25.271 "claimed": true, 00:13:25.271 "claim_type": "exclusive_write", 00:13:25.271 "zoned": false, 00:13:25.271 "supported_io_types": { 00:13:25.271 "read": true, 00:13:25.271 "write": true, 00:13:25.271 "unmap": true, 00:13:25.271 "write_zeroes": true, 00:13:25.271 "flush": true, 00:13:25.271 "reset": true, 00:13:25.271 "compare": false, 00:13:25.271 "compare_and_write": false, 00:13:25.271 "abort": true, 00:13:25.271 "nvme_admin": false, 00:13:25.271 "nvme_io": false 00:13:25.271 }, 00:13:25.271 "memory_domains": [ 00:13:25.271 { 00:13:25.271 "dma_device_id": "system", 00:13:25.271 "dma_device_type": 1 00:13:25.271 }, 00:13:25.271 { 00:13:25.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.271 "dma_device_type": 2 00:13:25.271 } 00:13:25.271 ], 00:13:25.271 "driver_specific": {} 00:13:25.271 } 00:13:25.271 ] 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.271 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.529 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:25.529 "name": "Existed_Raid", 00:13:25.529 "uuid": "d1fed06a-1260-11ef-99fd-bfc7c66e2865", 00:13:25.529 "strip_size_kb": 64, 00:13:25.529 "state": "configuring", 00:13:25.529 "raid_level": "concat", 00:13:25.529 "superblock": true, 00:13:25.529 "num_base_bdevs": 2, 00:13:25.529 "num_base_bdevs_discovered": 1, 00:13:25.529 "num_base_bdevs_operational": 2, 00:13:25.529 "base_bdevs_list": [ 00:13:25.529 { 00:13:25.529 "name": "BaseBdev1", 00:13:25.529 "uuid": "d22b6441-1260-11ef-99fd-bfc7c66e2865", 00:13:25.529 "is_configured": true, 00:13:25.529 "data_offset": 2048, 00:13:25.529 "data_size": 63488 00:13:25.529 }, 00:13:25.529 { 00:13:25.529 "name": "BaseBdev2", 00:13:25.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.529 "is_configured": false, 00:13:25.529 "data_offset": 0, 00:13:25.529 "data_size": 0 00:13:25.529 } 00:13:25.529 ] 
00:13:25.529 }' 00:13:25.529 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:25.529 02:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.787 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:26.045 [2024-05-15 02:14:13.944091] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.045 [2024-05-15 02:14:13.944134] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b392500 name Existed_Raid, state configuring 00:13:26.045 02:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:26.316 [2024-05-15 02:14:14.236236] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.316 [2024-05-15 02:14:14.236965] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.316 [2024-05-15 02:14:14.237014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.316 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.620 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:26.620 "name": "Existed_Raid", 00:13:26.620 "uuid": "d343aa85-1260-11ef-99fd-bfc7c66e2865", 00:13:26.620 "strip_size_kb": 64, 00:13:26.620 "state": "configuring", 00:13:26.620 "raid_level": "concat", 00:13:26.620 "superblock": true, 00:13:26.620 "num_base_bdevs": 2, 00:13:26.620 "num_base_bdevs_discovered": 1, 00:13:26.620 "num_base_bdevs_operational": 2, 00:13:26.620 
"base_bdevs_list": [ 00:13:26.620 { 00:13:26.620 "name": "BaseBdev1", 00:13:26.620 "uuid": "d22b6441-1260-11ef-99fd-bfc7c66e2865", 00:13:26.620 "is_configured": true, 00:13:26.620 "data_offset": 2048, 00:13:26.620 "data_size": 63488 00:13:26.620 }, 00:13:26.620 { 00:13:26.620 "name": "BaseBdev2", 00:13:26.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.620 "is_configured": false, 00:13:26.620 "data_offset": 0, 00:13:26.620 "data_size": 0 00:13:26.620 } 00:13:26.620 ] 00:13:26.620 }' 00:13:26.620 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:26.620 02:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.878 02:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.135 [2024-05-15 02:14:15.124764] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.135 [2024-05-15 02:14:15.124832] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b392a00 00:13:27.135 [2024-05-15 02:14:15.124838] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:27.135 [2024-05-15 02:14:15.124858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b3f5ec0 00:13:27.135 [2024-05-15 02:14:15.124891] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b392a00 00:13:27.135 [2024-05-15 02:14:15.124895] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b392a00 00:13:27.135 [2024-05-15 02:14:15.124912] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.135 BaseBdev2 00:13:27.392 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:27.392 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:27.392 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:27.392 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:27.392 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:27.392 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:27.392 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.650 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.908 [ 00:13:27.908 { 00:13:27.908 "name": "BaseBdev2", 00:13:27.908 "aliases": [ 00:13:27.908 "d3cb3a1f-1260-11ef-99fd-bfc7c66e2865" 00:13:27.908 ], 00:13:27.908 "product_name": "Malloc disk", 00:13:27.908 "block_size": 512, 00:13:27.908 "num_blocks": 65536, 00:13:27.908 "uuid": "d3cb3a1f-1260-11ef-99fd-bfc7c66e2865", 00:13:27.908 "assigned_rate_limits": { 00:13:27.908 "rw_ios_per_sec": 0, 00:13:27.908 "rw_mbytes_per_sec": 0, 00:13:27.908 "r_mbytes_per_sec": 0, 00:13:27.908 "w_mbytes_per_sec": 0 00:13:27.908 }, 00:13:27.908 "claimed": true, 00:13:27.908 "claim_type": "exclusive_write", 00:13:27.908 "zoned": false, 
00:13:27.908 "supported_io_types": { 00:13:27.908 "read": true, 00:13:27.908 "write": true, 00:13:27.908 "unmap": true, 00:13:27.908 "write_zeroes": true, 00:13:27.908 "flush": true, 00:13:27.908 "reset": true, 00:13:27.908 "compare": false, 00:13:27.908 "compare_and_write": false, 00:13:27.908 "abort": true, 00:13:27.908 "nvme_admin": false, 00:13:27.908 "nvme_io": false 00:13:27.908 }, 00:13:27.908 "memory_domains": [ 00:13:27.908 { 00:13:27.908 "dma_device_id": "system", 00:13:27.908 "dma_device_type": 1 00:13:27.908 }, 00:13:27.908 { 00:13:27.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.908 "dma_device_type": 2 00:13:27.908 } 00:13:27.908 ], 00:13:27.908 "driver_specific": {} 00:13:27.908 } 00:13:27.908 ] 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.908 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.166 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:28.166 "name": "Existed_Raid", 00:13:28.166 "uuid": "d343aa85-1260-11ef-99fd-bfc7c66e2865", 00:13:28.166 "strip_size_kb": 64, 00:13:28.166 "state": "online", 00:13:28.166 "raid_level": "concat", 00:13:28.166 "superblock": true, 00:13:28.166 "num_base_bdevs": 2, 00:13:28.166 "num_base_bdevs_discovered": 2, 00:13:28.166 "num_base_bdevs_operational": 2, 00:13:28.166 "base_bdevs_list": [ 00:13:28.166 { 00:13:28.166 "name": "BaseBdev1", 00:13:28.166 "uuid": "d22b6441-1260-11ef-99fd-bfc7c66e2865", 00:13:28.166 "is_configured": true, 00:13:28.166 "data_offset": 2048, 00:13:28.166 "data_size": 63488 00:13:28.166 }, 00:13:28.166 { 00:13:28.166 "name": "BaseBdev2", 00:13:28.166 "uuid": "d3cb3a1f-1260-11ef-99fd-bfc7c66e2865", 00:13:28.166 "is_configured": true, 00:13:28.166 "data_offset": 2048, 00:13:28.166 "data_size": 63488 00:13:28.166 } 00:13:28.166 ] 00:13:28.166 }' 
00:13:28.166 02:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:28.166 02:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:28.425 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:28.685 [2024-05-15 02:14:16.533233] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.685 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:28.685 "name": "Existed_Raid", 00:13:28.685 "aliases": [ 00:13:28.685 "d343aa85-1260-11ef-99fd-bfc7c66e2865" 00:13:28.685 ], 00:13:28.685 "product_name": "Raid Volume", 00:13:28.685 "block_size": 512, 00:13:28.685 "num_blocks": 126976, 00:13:28.685 "uuid": "d343aa85-1260-11ef-99fd-bfc7c66e2865", 00:13:28.685 "assigned_rate_limits": { 00:13:28.685 "rw_ios_per_sec": 0, 00:13:28.685 "rw_mbytes_per_sec": 0, 00:13:28.685 "r_mbytes_per_sec": 0, 00:13:28.685 "w_mbytes_per_sec": 0 00:13:28.685 }, 00:13:28.685 "claimed": false, 00:13:28.685 "zoned": false, 00:13:28.685 "supported_io_types": { 00:13:28.685 "read": true, 00:13:28.685 "write": true, 00:13:28.685 "unmap": true, 00:13:28.685 "write_zeroes": true, 00:13:28.685 "flush": true, 00:13:28.685 "reset": true, 00:13:28.685 "compare": false, 00:13:28.685 "compare_and_write": false, 00:13:28.685 "abort": false, 00:13:28.685 "nvme_admin": false, 00:13:28.685 "nvme_io": false 00:13:28.685 }, 00:13:28.685 "memory_domains": [ 00:13:28.685 { 00:13:28.685 "dma_device_id": "system", 00:13:28.685 "dma_device_type": 1 00:13:28.685 }, 00:13:28.685 { 00:13:28.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.685 "dma_device_type": 2 00:13:28.685 }, 00:13:28.685 { 00:13:28.685 "dma_device_id": "system", 00:13:28.686 "dma_device_type": 1 00:13:28.686 }, 00:13:28.686 { 00:13:28.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.686 "dma_device_type": 2 00:13:28.686 } 00:13:28.686 ], 00:13:28.686 "driver_specific": { 00:13:28.686 "raid": { 00:13:28.686 "uuid": "d343aa85-1260-11ef-99fd-bfc7c66e2865", 00:13:28.686 "strip_size_kb": 64, 00:13:28.686 "state": "online", 00:13:28.686 "raid_level": "concat", 00:13:28.686 "superblock": true, 00:13:28.686 "num_base_bdevs": 2, 00:13:28.686 "num_base_bdevs_discovered": 2, 00:13:28.686 "num_base_bdevs_operational": 2, 00:13:28.686 "base_bdevs_list": [ 00:13:28.686 { 00:13:28.686 "name": "BaseBdev1", 00:13:28.686 "uuid": "d22b6441-1260-11ef-99fd-bfc7c66e2865", 00:13:28.686 "is_configured": true, 00:13:28.686 "data_offset": 2048, 00:13:28.686 "data_size": 63488 00:13:28.686 }, 00:13:28.686 { 00:13:28.686 "name": 
"BaseBdev2", 00:13:28.686 "uuid": "d3cb3a1f-1260-11ef-99fd-bfc7c66e2865", 00:13:28.686 "is_configured": true, 00:13:28.686 "data_offset": 2048, 00:13:28.686 "data_size": 63488 00:13:28.686 } 00:13:28.686 ] 00:13:28.686 } 00:13:28.686 } 00:13:28.686 }' 00:13:28.686 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.686 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:28.686 BaseBdev2' 00:13:28.686 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:28.686 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:28.686 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:28.944 "name": "BaseBdev1", 00:13:28.944 "aliases": [ 00:13:28.944 "d22b6441-1260-11ef-99fd-bfc7c66e2865" 00:13:28.944 ], 00:13:28.944 "product_name": "Malloc disk", 00:13:28.944 "block_size": 512, 00:13:28.944 "num_blocks": 65536, 00:13:28.944 "uuid": "d22b6441-1260-11ef-99fd-bfc7c66e2865", 00:13:28.944 "assigned_rate_limits": { 00:13:28.944 "rw_ios_per_sec": 0, 00:13:28.944 "rw_mbytes_per_sec": 0, 00:13:28.944 "r_mbytes_per_sec": 0, 00:13:28.944 "w_mbytes_per_sec": 0 00:13:28.944 }, 00:13:28.944 "claimed": true, 00:13:28.944 "claim_type": "exclusive_write", 00:13:28.944 "zoned": false, 00:13:28.944 "supported_io_types": { 00:13:28.944 "read": true, 00:13:28.944 "write": true, 00:13:28.944 "unmap": true, 00:13:28.944 "write_zeroes": true, 00:13:28.944 "flush": true, 00:13:28.944 "reset": true, 00:13:28.944 "compare": false, 00:13:28.944 "compare_and_write": false, 00:13:28.944 "abort": true, 00:13:28.944 "nvme_admin": false, 00:13:28.944 "nvme_io": false 00:13:28.944 }, 00:13:28.944 "memory_domains": [ 00:13:28.944 { 00:13:28.944 "dma_device_id": "system", 00:13:28.944 "dma_device_type": 1 00:13:28.944 }, 00:13:28.944 { 00:13:28.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.944 "dma_device_type": 2 00:13:28.944 } 00:13:28.944 ], 00:13:28.944 "driver_specific": {} 00:13:28.944 }' 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq 
.dif_type 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:28.944 02:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:29.509 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:29.509 "name": "BaseBdev2", 00:13:29.509 "aliases": [ 00:13:29.510 "d3cb3a1f-1260-11ef-99fd-bfc7c66e2865" 00:13:29.510 ], 00:13:29.510 "product_name": "Malloc disk", 00:13:29.510 "block_size": 512, 00:13:29.510 "num_blocks": 65536, 00:13:29.510 "uuid": "d3cb3a1f-1260-11ef-99fd-bfc7c66e2865", 00:13:29.510 "assigned_rate_limits": { 00:13:29.510 "rw_ios_per_sec": 0, 00:13:29.510 "rw_mbytes_per_sec": 0, 00:13:29.510 "r_mbytes_per_sec": 0, 00:13:29.510 "w_mbytes_per_sec": 0 00:13:29.510 }, 00:13:29.510 "claimed": true, 00:13:29.510 "claim_type": "exclusive_write", 00:13:29.510 "zoned": false, 00:13:29.510 "supported_io_types": { 00:13:29.510 "read": true, 00:13:29.510 "write": true, 00:13:29.510 "unmap": true, 00:13:29.510 "write_zeroes": true, 00:13:29.510 "flush": true, 00:13:29.510 "reset": true, 00:13:29.510 "compare": false, 00:13:29.510 "compare_and_write": false, 00:13:29.510 "abort": true, 00:13:29.510 "nvme_admin": false, 00:13:29.510 "nvme_io": false 00:13:29.510 }, 00:13:29.510 "memory_domains": [ 00:13:29.510 { 00:13:29.510 "dma_device_id": "system", 00:13:29.510 "dma_device_type": 1 00:13:29.510 }, 00:13:29.510 { 00:13:29.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.510 "dma_device_type": 2 00:13:29.510 } 00:13:29.510 ], 00:13:29.510 "driver_specific": {} 00:13:29.510 }' 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:29.510 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:29.767 [2024-05-15 02:14:17.545623] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.767 [2024-05-15 
02:14:17.545646] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.767 [2024-05-15 02:14:17.545661] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.768 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.025 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:30.025 "name": "Existed_Raid", 00:13:30.025 "uuid": "d343aa85-1260-11ef-99fd-bfc7c66e2865", 00:13:30.025 "strip_size_kb": 64, 00:13:30.025 "state": "offline", 00:13:30.025 "raid_level": "concat", 00:13:30.025 "superblock": true, 00:13:30.025 "num_base_bdevs": 2, 00:13:30.025 "num_base_bdevs_discovered": 1, 00:13:30.025 "num_base_bdevs_operational": 1, 00:13:30.025 "base_bdevs_list": [ 00:13:30.025 { 00:13:30.025 "name": null, 00:13:30.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.025 "is_configured": false, 00:13:30.025 "data_offset": 2048, 00:13:30.025 "data_size": 63488 00:13:30.025 }, 00:13:30.025 { 00:13:30.025 "name": "BaseBdev2", 00:13:30.025 "uuid": "d3cb3a1f-1260-11ef-99fd-bfc7c66e2865", 00:13:30.025 "is_configured": true, 00:13:30.025 "data_offset": 2048, 00:13:30.025 "data_size": 63488 00:13:30.025 } 00:13:30.025 ] 00:13:30.025 }' 00:13:30.025 02:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:30.025 02:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.283 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 
)) 00:13:30.283 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.283 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.283 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:30.541 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:30.541 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.541 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:30.798 [2024-05-15 02:14:18.662974] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.798 [2024-05-15 02:14:18.663042] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b392a00 name Existed_Raid, state offline 00:13:30.798 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.798 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.798 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.798 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 50064 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 50064 ']' 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 50064 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 50064 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:31.056 killing process with pid 50064 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50064' 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 50064 00:13:31.056 [2024-05-15 02:14:18.914068] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.056 [2024-05-15 02:14:18.914114] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.056 02:14:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@970 -- # wait 50064 00:13:31.314 02:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:13:31.314 00:13:31.314 real 0m9.490s 00:13:31.314 user 0m16.745s 00:13:31.314 sys 0m1.511s 00:13:31.314 ************************************ 00:13:31.314 END TEST raid_state_function_test_sb 00:13:31.314 ************************************ 00:13:31.314 02:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.314 02:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 02:14:19 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:13:31.315 02:14:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:31.315 02:14:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.315 02:14:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 ************************************ 00:13:31.315 START TEST raid_superblock_test 00:13:31.315 ************************************ 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=50338 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 50338 /var/tmp/spdk-raid.sock 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 50338 ']' 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:31.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:31.315 02:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 [2024-05-15 02:14:19.115905] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:31.315 [2024-05-15 02:14:19.116090] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:31.881 EAL: TSC is not safe to use in SMP mode 00:13:31.881 EAL: TSC is not invariant 00:13:31.881 [2024-05-15 02:14:19.616398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.881 [2024-05-15 02:14:19.703534] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:31.881 [2024-05-15 02:14:19.705813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.881 [2024-05-15 02:14:19.706544] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.881 [2024-05-15 02:14:19.706560] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.448 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:32.707 malloc1 00:13:32.707 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:32.966 [2024-05-15 02:14:20.803349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:32.966 [2024-05-15 02:14:20.803421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.966 [2024-05-15 02:14:20.804089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c99d780 00:13:32.966 [2024-05-15 
02:14:20.804124] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.966 [2024-05-15 02:14:20.804912] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.966 [2024-05-15 02:14:20.804939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:32.966 pt1 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.966 02:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:33.225 malloc2 00:13:33.225 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:33.484 [2024-05-15 02:14:21.291515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:33.484 [2024-05-15 02:14:21.291571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.484 [2024-05-15 02:14:21.291598] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c99dc80 00:13:33.484 [2024-05-15 02:14:21.291607] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.484 [2024-05-15 02:14:21.292172] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.484 [2024-05-15 02:14:21.292197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:33.484 pt2 00:13:33.484 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.484 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.484 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:13:33.793 [2024-05-15 02:14:21.591642] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.793 [2024-05-15 02:14:21.592122] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.793 [2024-05-15 02:14:21.592173] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c99df00 00:13:33.793 [2024-05-15 02:14:21.592178] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:33.793 [2024-05-15 02:14:21.592208] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ca00e20 00:13:33.793 [2024-05-15 02:14:21.592298] bdev_raid.c:1741:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x82c99df00 00:13:33.793 [2024-05-15 02:14:21.592302] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c99df00 00:13:33.793 [2024-05-15 02:14:21.592325] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.793 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.051 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:34.051 "name": "raid_bdev1", 00:13:34.051 "uuid": "d7a602fa-1260-11ef-99fd-bfc7c66e2865", 00:13:34.051 "strip_size_kb": 64, 00:13:34.051 "state": "online", 00:13:34.051 "raid_level": "concat", 00:13:34.051 "superblock": true, 00:13:34.051 "num_base_bdevs": 2, 00:13:34.051 "num_base_bdevs_discovered": 2, 00:13:34.051 "num_base_bdevs_operational": 2, 00:13:34.051 "base_bdevs_list": [ 00:13:34.051 { 00:13:34.051 "name": "pt1", 00:13:34.051 "uuid": "a3b5de50-de7a-0b54-9722-3bd8ca286de8", 00:13:34.051 "is_configured": true, 00:13:34.051 "data_offset": 2048, 00:13:34.051 "data_size": 63488 00:13:34.051 }, 00:13:34.051 { 00:13:34.051 "name": "pt2", 00:13:34.051 "uuid": "67f3b599-7053-8154-bef1-04e8315cb020", 00:13:34.051 "is_configured": true, 00:13:34.051 "data_offset": 2048, 00:13:34.051 "data_size": 63488 00:13:34.051 } 00:13:34.051 ] 00:13:34.051 }' 00:13:34.051 02:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:34.051 02:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.309 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:34.309 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:34.309 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:34.309 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:34.309 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:34.309 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:34.309 02:14:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:34.309 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:34.569 [2024-05-15 02:14:22.455947] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.569 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:34.569 "name": "raid_bdev1", 00:13:34.569 "aliases": [ 00:13:34.569 "d7a602fa-1260-11ef-99fd-bfc7c66e2865" 00:13:34.569 ], 00:13:34.569 "product_name": "Raid Volume", 00:13:34.569 "block_size": 512, 00:13:34.569 "num_blocks": 126976, 00:13:34.569 "uuid": "d7a602fa-1260-11ef-99fd-bfc7c66e2865", 00:13:34.569 "assigned_rate_limits": { 00:13:34.569 "rw_ios_per_sec": 0, 00:13:34.569 "rw_mbytes_per_sec": 0, 00:13:34.569 "r_mbytes_per_sec": 0, 00:13:34.569 "w_mbytes_per_sec": 0 00:13:34.569 }, 00:13:34.569 "claimed": false, 00:13:34.569 "zoned": false, 00:13:34.569 "supported_io_types": { 00:13:34.569 "read": true, 00:13:34.569 "write": true, 00:13:34.569 "unmap": true, 00:13:34.569 "write_zeroes": true, 00:13:34.569 "flush": true, 00:13:34.569 "reset": true, 00:13:34.569 "compare": false, 00:13:34.569 "compare_and_write": false, 00:13:34.569 "abort": false, 00:13:34.569 "nvme_admin": false, 00:13:34.569 "nvme_io": false 00:13:34.569 }, 00:13:34.569 "memory_domains": [ 00:13:34.569 { 00:13:34.569 "dma_device_id": "system", 00:13:34.569 "dma_device_type": 1 00:13:34.569 }, 00:13:34.569 { 00:13:34.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.569 "dma_device_type": 2 00:13:34.569 }, 00:13:34.569 { 00:13:34.569 "dma_device_id": "system", 00:13:34.569 "dma_device_type": 1 00:13:34.569 }, 00:13:34.569 { 00:13:34.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.569 "dma_device_type": 2 00:13:34.569 } 00:13:34.569 ], 00:13:34.569 "driver_specific": { 00:13:34.569 "raid": { 00:13:34.569 "uuid": "d7a602fa-1260-11ef-99fd-bfc7c66e2865", 00:13:34.569 "strip_size_kb": 64, 00:13:34.569 "state": "online", 00:13:34.569 "raid_level": "concat", 00:13:34.569 "superblock": true, 00:13:34.569 "num_base_bdevs": 2, 00:13:34.569 "num_base_bdevs_discovered": 2, 00:13:34.569 "num_base_bdevs_operational": 2, 00:13:34.569 "base_bdevs_list": [ 00:13:34.569 { 00:13:34.569 "name": "pt1", 00:13:34.569 "uuid": "a3b5de50-de7a-0b54-9722-3bd8ca286de8", 00:13:34.569 "is_configured": true, 00:13:34.569 "data_offset": 2048, 00:13:34.569 "data_size": 63488 00:13:34.569 }, 00:13:34.569 { 00:13:34.569 "name": "pt2", 00:13:34.569 "uuid": "67f3b599-7053-8154-bef1-04e8315cb020", 00:13:34.569 "is_configured": true, 00:13:34.569 "data_offset": 2048, 00:13:34.569 "data_size": 63488 00:13:34.569 } 00:13:34.569 ] 00:13:34.569 } 00:13:34.569 } 00:13:34.569 }' 00:13:34.569 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.569 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:34.569 pt2' 00:13:34.569 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:34.569 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:34.569 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:34.828 02:14:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:34.828 "name": "pt1", 00:13:34.828 "aliases": [ 00:13:34.828 "a3b5de50-de7a-0b54-9722-3bd8ca286de8" 00:13:34.828 ], 00:13:34.828 "product_name": "passthru", 00:13:34.828 "block_size": 512, 00:13:34.828 "num_blocks": 65536, 00:13:34.828 "uuid": "a3b5de50-de7a-0b54-9722-3bd8ca286de8", 00:13:34.828 "assigned_rate_limits": { 00:13:34.828 "rw_ios_per_sec": 0, 00:13:34.828 "rw_mbytes_per_sec": 0, 00:13:34.828 "r_mbytes_per_sec": 0, 00:13:34.828 "w_mbytes_per_sec": 0 00:13:34.828 }, 00:13:34.828 "claimed": true, 00:13:34.828 "claim_type": "exclusive_write", 00:13:34.828 "zoned": false, 00:13:34.828 "supported_io_types": { 00:13:34.828 "read": true, 00:13:34.828 "write": true, 00:13:34.828 "unmap": true, 00:13:34.828 "write_zeroes": true, 00:13:34.828 "flush": true, 00:13:34.828 "reset": true, 00:13:34.828 "compare": false, 00:13:34.828 "compare_and_write": false, 00:13:34.828 "abort": true, 00:13:34.828 "nvme_admin": false, 00:13:34.828 "nvme_io": false 00:13:34.828 }, 00:13:34.828 "memory_domains": [ 00:13:34.828 { 00:13:34.828 "dma_device_id": "system", 00:13:34.828 "dma_device_type": 1 00:13:34.828 }, 00:13:34.828 { 00:13:34.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.828 "dma_device_type": 2 00:13:34.828 } 00:13:34.828 ], 00:13:34.828 "driver_specific": { 00:13:34.828 "passthru": { 00:13:34.828 "name": "pt1", 00:13:34.828 "base_bdev_name": "malloc1" 00:13:34.828 } 00:13:34.828 } 00:13:34.828 }' 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:34.828 02:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:35.087 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:35.087 "name": "pt2", 00:13:35.088 "aliases": [ 00:13:35.088 "67f3b599-7053-8154-bef1-04e8315cb020" 00:13:35.088 ], 00:13:35.088 "product_name": "passthru", 00:13:35.088 "block_size": 512, 00:13:35.088 "num_blocks": 65536, 00:13:35.088 "uuid": "67f3b599-7053-8154-bef1-04e8315cb020", 00:13:35.088 "assigned_rate_limits": { 00:13:35.088 "rw_ios_per_sec": 0, 
00:13:35.088 "rw_mbytes_per_sec": 0, 00:13:35.088 "r_mbytes_per_sec": 0, 00:13:35.088 "w_mbytes_per_sec": 0 00:13:35.088 }, 00:13:35.088 "claimed": true, 00:13:35.088 "claim_type": "exclusive_write", 00:13:35.088 "zoned": false, 00:13:35.088 "supported_io_types": { 00:13:35.088 "read": true, 00:13:35.088 "write": true, 00:13:35.088 "unmap": true, 00:13:35.088 "write_zeroes": true, 00:13:35.088 "flush": true, 00:13:35.088 "reset": true, 00:13:35.088 "compare": false, 00:13:35.088 "compare_and_write": false, 00:13:35.088 "abort": true, 00:13:35.088 "nvme_admin": false, 00:13:35.088 "nvme_io": false 00:13:35.088 }, 00:13:35.088 "memory_domains": [ 00:13:35.088 { 00:13:35.088 "dma_device_id": "system", 00:13:35.088 "dma_device_type": 1 00:13:35.088 }, 00:13:35.088 { 00:13:35.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.088 "dma_device_type": 2 00:13:35.088 } 00:13:35.088 ], 00:13:35.088 "driver_specific": { 00:13:35.088 "passthru": { 00:13:35.088 "name": "pt2", 00:13:35.088 "base_bdev_name": "malloc2" 00:13:35.088 } 00:13:35.088 } 00:13:35.088 }' 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:35.088 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:35.347 [2024-05-15 02:14:23.324245] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d7a602fa-1260-11ef-99fd-bfc7c66e2865 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d7a602fa-1260-11ef-99fd-bfc7c66e2865 ']' 00:13:35.347 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:35.606 [2024-05-15 02:14:23.548274] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.606 [2024-05-15 02:14:23.548302] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.606 [2024-05-15 02:14:23.548329] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.606 [2024-05-15 02:14:23.548339] bdev_raid.c: 
430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.606 [2024-05-15 02:14:23.548344] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c99df00 name raid_bdev1, state offline 00:13:35.606 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.606 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:35.865 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:35.865 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:35.865 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.865 02:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:36.124 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.124 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:36.383 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:36.383 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:36.952 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:37.212 [2024-05-15 02:14:24.964785] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:37.212 [2024-05-15 02:14:24.965319] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:37.212 [2024-05-15 02:14:24.965363] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:37.212 [2024-05-15 02:14:24.965415] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:37.212 [2024-05-15 02:14:24.965428] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.212 [2024-05-15 02:14:24.965434] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c99dc80 name raid_bdev1, state configuring 00:13:37.212 request: 00:13:37.212 { 00:13:37.212 "name": "raid_bdev1", 00:13:37.212 "raid_level": "concat", 00:13:37.212 "base_bdevs": [ 00:13:37.212 "malloc1", 00:13:37.212 "malloc2" 00:13:37.212 ], 00:13:37.212 "superblock": false, 00:13:37.212 "strip_size_kb": 64, 00:13:37.212 "method": "bdev_raid_create", 00:13:37.212 "req_id": 1 00:13:37.212 } 00:13:37.212 Got JSON-RPC error response 00:13:37.212 response: 00:13:37.212 { 00:13:37.212 "code": -17, 00:13:37.212 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:37.212 } 00:13:37.212 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:37.212 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:37.212 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:37.212 02:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:37.212 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:37.212 02:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.472 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:37.472 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:37.472 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.730 [2024-05-15 02:14:25.528942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.730 [2024-05-15 02:14:25.529014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.730 [2024-05-15 02:14:25.529043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c99d780 00:13:37.730 [2024-05-15 02:14:25.529051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.730 [2024-05-15 02:14:25.529591] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.730 [2024-05-15 02:14:25.529618] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.730 [2024-05-15 02:14:25.529642] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:37.730 [2024-05-15 02:14:25.529653] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:13:37.730 pt1 00:13:37.730 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:37.730 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:37.730 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.731 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.989 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:37.989 "name": "raid_bdev1", 00:13:37.989 "uuid": "d7a602fa-1260-11ef-99fd-bfc7c66e2865", 00:13:37.989 "strip_size_kb": 64, 00:13:37.989 "state": "configuring", 00:13:37.989 "raid_level": "concat", 00:13:37.989 "superblock": true, 00:13:37.989 "num_base_bdevs": 2, 00:13:37.989 "num_base_bdevs_discovered": 1, 00:13:37.989 "num_base_bdevs_operational": 2, 00:13:37.989 "base_bdevs_list": [ 00:13:37.989 { 00:13:37.989 "name": "pt1", 00:13:37.989 "uuid": "a3b5de50-de7a-0b54-9722-3bd8ca286de8", 00:13:37.989 "is_configured": true, 00:13:37.989 "data_offset": 2048, 00:13:37.989 "data_size": 63488 00:13:37.989 }, 00:13:37.989 { 00:13:37.989 "name": null, 00:13:37.989 "uuid": "67f3b599-7053-8154-bef1-04e8315cb020", 00:13:37.989 "is_configured": false, 00:13:37.989 "data_offset": 2048, 00:13:37.989 "data_size": 63488 00:13:37.989 } 00:13:37.989 ] 00:13:37.989 }' 00:13:37.989 02:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:37.989 02:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.247 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:38.247 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:38.247 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:38.247 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.506 [2024-05-15 02:14:26.345197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.506 [2024-05-15 02:14:26.345266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.506 [2024-05-15 02:14:26.345295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c99df00 00:13:38.506 
[2024-05-15 02:14:26.345303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.506 [2024-05-15 02:14:26.345412] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.506 [2024-05-15 02:14:26.345423] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.506 [2024-05-15 02:14:26.345445] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.506 [2024-05-15 02:14:26.345454] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.506 [2024-05-15 02:14:26.345478] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c99e180 00:13:38.506 [2024-05-15 02:14:26.345482] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:38.506 [2024-05-15 02:14:26.345501] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ca00e20 00:13:38.506 [2024-05-15 02:14:26.345549] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c99e180 00:13:38.506 [2024-05-15 02:14:26.345554] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c99e180 00:13:38.506 [2024-05-15 02:14:26.345572] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.506 pt2 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.506 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.764 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:38.764 "name": "raid_bdev1", 00:13:38.764 "uuid": "d7a602fa-1260-11ef-99fd-bfc7c66e2865", 00:13:38.764 "strip_size_kb": 64, 00:13:38.764 "state": "online", 00:13:38.764 "raid_level": "concat", 00:13:38.764 "superblock": true, 00:13:38.764 "num_base_bdevs": 2, 00:13:38.764 "num_base_bdevs_discovered": 2, 00:13:38.764 "num_base_bdevs_operational": 2, 00:13:38.764 "base_bdevs_list": [ 00:13:38.764 { 00:13:38.764 "name": "pt1", 00:13:38.764 "uuid": 
"a3b5de50-de7a-0b54-9722-3bd8ca286de8", 00:13:38.764 "is_configured": true, 00:13:38.764 "data_offset": 2048, 00:13:38.764 "data_size": 63488 00:13:38.764 }, 00:13:38.764 { 00:13:38.764 "name": "pt2", 00:13:38.764 "uuid": "67f3b599-7053-8154-bef1-04e8315cb020", 00:13:38.764 "is_configured": true, 00:13:38.764 "data_offset": 2048, 00:13:38.764 "data_size": 63488 00:13:38.764 } 00:13:38.764 ] 00:13:38.764 }' 00:13:38.764 02:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:38.764 02:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:39.331 [2024-05-15 02:14:27.249502] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:39.331 "name": "raid_bdev1", 00:13:39.331 "aliases": [ 00:13:39.331 "d7a602fa-1260-11ef-99fd-bfc7c66e2865" 00:13:39.331 ], 00:13:39.331 "product_name": "Raid Volume", 00:13:39.331 "block_size": 512, 00:13:39.331 "num_blocks": 126976, 00:13:39.331 "uuid": "d7a602fa-1260-11ef-99fd-bfc7c66e2865", 00:13:39.331 "assigned_rate_limits": { 00:13:39.331 "rw_ios_per_sec": 0, 00:13:39.331 "rw_mbytes_per_sec": 0, 00:13:39.331 "r_mbytes_per_sec": 0, 00:13:39.331 "w_mbytes_per_sec": 0 00:13:39.331 }, 00:13:39.331 "claimed": false, 00:13:39.331 "zoned": false, 00:13:39.331 "supported_io_types": { 00:13:39.331 "read": true, 00:13:39.331 "write": true, 00:13:39.331 "unmap": true, 00:13:39.331 "write_zeroes": true, 00:13:39.331 "flush": true, 00:13:39.331 "reset": true, 00:13:39.331 "compare": false, 00:13:39.331 "compare_and_write": false, 00:13:39.331 "abort": false, 00:13:39.331 "nvme_admin": false, 00:13:39.331 "nvme_io": false 00:13:39.331 }, 00:13:39.331 "memory_domains": [ 00:13:39.331 { 00:13:39.331 "dma_device_id": "system", 00:13:39.331 "dma_device_type": 1 00:13:39.331 }, 00:13:39.331 { 00:13:39.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.331 "dma_device_type": 2 00:13:39.331 }, 00:13:39.331 { 00:13:39.331 "dma_device_id": "system", 00:13:39.331 "dma_device_type": 1 00:13:39.331 }, 00:13:39.331 { 00:13:39.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.331 "dma_device_type": 2 00:13:39.331 } 00:13:39.331 ], 00:13:39.331 "driver_specific": { 00:13:39.331 "raid": { 00:13:39.331 "uuid": "d7a602fa-1260-11ef-99fd-bfc7c66e2865", 00:13:39.331 "strip_size_kb": 64, 00:13:39.331 "state": "online", 00:13:39.331 "raid_level": "concat", 00:13:39.331 "superblock": true, 00:13:39.331 "num_base_bdevs": 2, 00:13:39.331 "num_base_bdevs_discovered": 2, 00:13:39.331 
"num_base_bdevs_operational": 2, 00:13:39.331 "base_bdevs_list": [ 00:13:39.331 { 00:13:39.331 "name": "pt1", 00:13:39.331 "uuid": "a3b5de50-de7a-0b54-9722-3bd8ca286de8", 00:13:39.331 "is_configured": true, 00:13:39.331 "data_offset": 2048, 00:13:39.331 "data_size": 63488 00:13:39.331 }, 00:13:39.331 { 00:13:39.331 "name": "pt2", 00:13:39.331 "uuid": "67f3b599-7053-8154-bef1-04e8315cb020", 00:13:39.331 "is_configured": true, 00:13:39.331 "data_offset": 2048, 00:13:39.331 "data_size": 63488 00:13:39.331 } 00:13:39.331 ] 00:13:39.331 } 00:13:39.331 } 00:13:39.331 }' 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:39.331 pt2' 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:39.331 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:39.589 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:39.589 "name": "pt1", 00:13:39.589 "aliases": [ 00:13:39.589 "a3b5de50-de7a-0b54-9722-3bd8ca286de8" 00:13:39.589 ], 00:13:39.589 "product_name": "passthru", 00:13:39.589 "block_size": 512, 00:13:39.589 "num_blocks": 65536, 00:13:39.589 "uuid": "a3b5de50-de7a-0b54-9722-3bd8ca286de8", 00:13:39.589 "assigned_rate_limits": { 00:13:39.589 "rw_ios_per_sec": 0, 00:13:39.589 "rw_mbytes_per_sec": 0, 00:13:39.589 "r_mbytes_per_sec": 0, 00:13:39.589 "w_mbytes_per_sec": 0 00:13:39.589 }, 00:13:39.589 "claimed": true, 00:13:39.589 "claim_type": "exclusive_write", 00:13:39.589 "zoned": false, 00:13:39.589 "supported_io_types": { 00:13:39.589 "read": true, 00:13:39.589 "write": true, 00:13:39.589 "unmap": true, 00:13:39.589 "write_zeroes": true, 00:13:39.589 "flush": true, 00:13:39.589 "reset": true, 00:13:39.589 "compare": false, 00:13:39.589 "compare_and_write": false, 00:13:39.589 "abort": true, 00:13:39.590 "nvme_admin": false, 00:13:39.590 "nvme_io": false 00:13:39.590 }, 00:13:39.590 "memory_domains": [ 00:13:39.590 { 00:13:39.590 "dma_device_id": "system", 00:13:39.590 "dma_device_type": 1 00:13:39.590 }, 00:13:39.590 { 00:13:39.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.590 "dma_device_type": 2 00:13:39.590 } 00:13:39.590 ], 00:13:39.590 "driver_specific": { 00:13:39.590 "passthru": { 00:13:39.590 "name": "pt1", 00:13:39.590 "base_bdev_name": "malloc1" 00:13:39.590 } 00:13:39.590 } 00:13:39.590 }' 00:13:39.590 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:39.590 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.md_interleave 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:39.848 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:40.107 "name": "pt2", 00:13:40.107 "aliases": [ 00:13:40.107 "67f3b599-7053-8154-bef1-04e8315cb020" 00:13:40.107 ], 00:13:40.107 "product_name": "passthru", 00:13:40.107 "block_size": 512, 00:13:40.107 "num_blocks": 65536, 00:13:40.107 "uuid": "67f3b599-7053-8154-bef1-04e8315cb020", 00:13:40.107 "assigned_rate_limits": { 00:13:40.107 "rw_ios_per_sec": 0, 00:13:40.107 "rw_mbytes_per_sec": 0, 00:13:40.107 "r_mbytes_per_sec": 0, 00:13:40.107 "w_mbytes_per_sec": 0 00:13:40.107 }, 00:13:40.107 "claimed": true, 00:13:40.107 "claim_type": "exclusive_write", 00:13:40.107 "zoned": false, 00:13:40.107 "supported_io_types": { 00:13:40.107 "read": true, 00:13:40.107 "write": true, 00:13:40.107 "unmap": true, 00:13:40.107 "write_zeroes": true, 00:13:40.107 "flush": true, 00:13:40.107 "reset": true, 00:13:40.107 "compare": false, 00:13:40.107 "compare_and_write": false, 00:13:40.107 "abort": true, 00:13:40.107 "nvme_admin": false, 00:13:40.107 "nvme_io": false 00:13:40.107 }, 00:13:40.107 "memory_domains": [ 00:13:40.107 { 00:13:40.107 "dma_device_id": "system", 00:13:40.107 "dma_device_type": 1 00:13:40.107 }, 00:13:40.107 { 00:13:40.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.107 "dma_device_type": 2 00:13:40.107 } 00:13:40.107 ], 00:13:40.107 "driver_specific": { 00:13:40.107 "passthru": { 00:13:40.107 "name": "pt2", 00:13:40.107 "base_bdev_name": "malloc2" 00:13:40.107 } 00:13:40.107 } 00:13:40.107 }' 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:40.107 02:14:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:40.107 02:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:40.366 [2024-05-15 02:14:28.209792] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d7a602fa-1260-11ef-99fd-bfc7c66e2865 '!=' d7a602fa-1260-11ef-99fd-bfc7c66e2865 ']' 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 50338 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 50338 ']' 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 50338 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 50338 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:40.366 killing process with pid 50338 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50338' 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 50338 00:13:40.366 [2024-05-15 02:14:28.244273] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.366 [2024-05-15 02:14:28.244309] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.366 [2024-05-15 02:14:28.244322] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.366 [2024-05-15 02:14:28.244327] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c99e180 name raid_bdev1, state offline 00:13:40.366 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 50338 00:13:40.366 [2024-05-15 02:14:28.254138] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.624 02:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:40.624 00:13:40.624 real 0m9.299s 00:13:40.624 user 0m16.300s 00:13:40.624 sys 0m1.588s 00:13:40.624 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:40.624 02:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.624 ************************************ 00:13:40.624 END TEST raid_superblock_test 00:13:40.624 ************************************ 00:13:40.624 02:14:28 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:13:40.624 02:14:28 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test 
raid_state_function_test raid1 2 false 00:13:40.624 02:14:28 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:40.624 02:14:28 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.624 02:14:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.624 ************************************ 00:13:40.624 START TEST raid_state_function_test 00:13:40.624 ************************************ 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:40.624 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=50605 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50605' 00:13:40.625 Process raid pid: 50605 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 50605 /var/tmp/spdk-raid.sock 00:13:40.625 
02:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 50605 ']' 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:40.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:40.625 02:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.625 [2024-05-15 02:14:28.462591] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:40.625 [2024-05-15 02:14:28.462788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:41.192 EAL: TSC is not safe to use in SMP mode 00:13:41.192 EAL: TSC is not invariant 00:13:41.192 [2024-05-15 02:14:28.930480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.192 [2024-05-15 02:14:29.017725] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:41.192 [2024-05-15 02:14:29.020113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.192 [2024-05-15 02:14:29.021095] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.192 [2024-05-15 02:14:29.021116] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.760 02:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:41.760 02:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:41.760 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:42.021 [2024-05-15 02:14:29.773192] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.021 [2024-05-15 02:14:29.773248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.021 [2024-05-15 02:14:29.773253] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.021 [2024-05-15 02:14:29.773271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.021 02:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.282 02:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:42.282 "name": "Existed_Raid", 00:13:42.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.282 "strip_size_kb": 0, 00:13:42.282 "state": "configuring", 00:13:42.282 "raid_level": "raid1", 00:13:42.282 "superblock": false, 00:13:42.282 "num_base_bdevs": 2, 00:13:42.282 "num_base_bdevs_discovered": 0, 00:13:42.282 "num_base_bdevs_operational": 2, 00:13:42.282 "base_bdevs_list": [ 00:13:42.282 { 00:13:42.282 "name": "BaseBdev1", 00:13:42.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.282 "is_configured": false, 00:13:42.282 "data_offset": 0, 00:13:42.282 "data_size": 0 00:13:42.282 }, 00:13:42.282 { 00:13:42.282 "name": "BaseBdev2", 00:13:42.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.282 "is_configured": false, 00:13:42.282 "data_offset": 0, 00:13:42.282 "data_size": 0 00:13:42.282 } 00:13:42.282 ] 00:13:42.282 }' 00:13:42.282 02:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:42.282 02:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.540 02:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:42.798 [2024-05-15 02:14:30.745459] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.798 [2024-05-15 02:14:30.745507] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cad8500 name Existed_Raid, state configuring 00:13:42.798 02:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:43.056 [2024-05-15 02:14:31.001559] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.056 [2024-05-15 02:14:31.001641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.056 [2024-05-15 02:14:31.001646] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.056 [2024-05-15 02:14:31.001655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.056 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:43.315 [2024-05-15 02:14:31.254572] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.315 BaseBdev1 00:13:43.315 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 
00:13:43.315 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:43.315 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:43.315 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:43.315 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:43.315 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:43.315 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:43.575 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.153 [ 00:13:44.153 { 00:13:44.153 "name": "BaseBdev1", 00:13:44.153 "aliases": [ 00:13:44.153 "dd6850ff-1260-11ef-99fd-bfc7c66e2865" 00:13:44.154 ], 00:13:44.154 "product_name": "Malloc disk", 00:13:44.154 "block_size": 512, 00:13:44.154 "num_blocks": 65536, 00:13:44.154 "uuid": "dd6850ff-1260-11ef-99fd-bfc7c66e2865", 00:13:44.154 "assigned_rate_limits": { 00:13:44.154 "rw_ios_per_sec": 0, 00:13:44.154 "rw_mbytes_per_sec": 0, 00:13:44.154 "r_mbytes_per_sec": 0, 00:13:44.154 "w_mbytes_per_sec": 0 00:13:44.154 }, 00:13:44.154 "claimed": true, 00:13:44.154 "claim_type": "exclusive_write", 00:13:44.154 "zoned": false, 00:13:44.154 "supported_io_types": { 00:13:44.154 "read": true, 00:13:44.154 "write": true, 00:13:44.154 "unmap": true, 00:13:44.154 "write_zeroes": true, 00:13:44.154 "flush": true, 00:13:44.154 "reset": true, 00:13:44.154 "compare": false, 00:13:44.154 "compare_and_write": false, 00:13:44.154 "abort": true, 00:13:44.154 "nvme_admin": false, 00:13:44.154 "nvme_io": false 00:13:44.154 }, 00:13:44.154 "memory_domains": [ 00:13:44.154 { 00:13:44.154 "dma_device_id": "system", 00:13:44.154 "dma_device_type": 1 00:13:44.154 }, 00:13:44.154 { 00:13:44.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.154 "dma_device_type": 2 00:13:44.154 } 00:13:44.154 ], 00:13:44.154 "driver_specific": {} 00:13:44.154 } 00:13:44.154 ] 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.154 02:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.421 02:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:44.421 "name": "Existed_Raid", 00:13:44.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.421 "strip_size_kb": 0, 00:13:44.421 "state": "configuring", 00:13:44.421 "raid_level": "raid1", 00:13:44.421 "superblock": false, 00:13:44.421 "num_base_bdevs": 2, 00:13:44.421 "num_base_bdevs_discovered": 1, 00:13:44.421 "num_base_bdevs_operational": 2, 00:13:44.421 "base_bdevs_list": [ 00:13:44.421 { 00:13:44.421 "name": "BaseBdev1", 00:13:44.421 "uuid": "dd6850ff-1260-11ef-99fd-bfc7c66e2865", 00:13:44.421 "is_configured": true, 00:13:44.421 "data_offset": 0, 00:13:44.421 "data_size": 65536 00:13:44.421 }, 00:13:44.421 { 00:13:44.421 "name": "BaseBdev2", 00:13:44.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.421 "is_configured": false, 00:13:44.421 "data_offset": 0, 00:13:44.421 "data_size": 0 00:13:44.421 } 00:13:44.421 ] 00:13:44.421 }' 00:13:44.421 02:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:44.421 02:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.690 02:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:44.961 [2024-05-15 02:14:32.778055] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.961 [2024-05-15 02:14:32.778096] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cad8500 name Existed_Raid, state configuring 00:13:44.961 02:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:45.234 [2024-05-15 02:14:33.030134] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.234 [2024-05-15 02:14:33.030865] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:45.234 [2024-05-15 02:14:33.030913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.234 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.495 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:45.495 "name": "Existed_Raid", 00:13:45.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.495 "strip_size_kb": 0, 00:13:45.495 "state": "configuring", 00:13:45.495 "raid_level": "raid1", 00:13:45.495 "superblock": false, 00:13:45.495 "num_base_bdevs": 2, 00:13:45.495 "num_base_bdevs_discovered": 1, 00:13:45.495 "num_base_bdevs_operational": 2, 00:13:45.495 "base_bdevs_list": [ 00:13:45.495 { 00:13:45.495 "name": "BaseBdev1", 00:13:45.495 "uuid": "dd6850ff-1260-11ef-99fd-bfc7c66e2865", 00:13:45.495 "is_configured": true, 00:13:45.495 "data_offset": 0, 00:13:45.495 "data_size": 65536 00:13:45.495 }, 00:13:45.495 { 00:13:45.495 "name": "BaseBdev2", 00:13:45.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.495 "is_configured": false, 00:13:45.495 "data_offset": 0, 00:13:45.495 "data_size": 0 00:13:45.495 } 00:13:45.495 ] 00:13:45.495 }' 00:13:45.495 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:45.495 02:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.752 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:46.077 [2024-05-15 02:14:33.906490] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.077 [2024-05-15 02:14:33.906524] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cad8a00 00:13:46.077 [2024-05-15 02:14:33.906529] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:46.077 [2024-05-15 02:14:33.906550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cb3bec0 00:13:46.077 [2024-05-15 02:14:33.906640] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cad8a00 00:13:46.077 [2024-05-15 02:14:33.906643] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cad8a00 00:13:46.077 [2024-05-15 02:14:33.906675] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.077 BaseBdev2 00:13:46.077 02:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:46.077 02:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:46.077 02:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:46.077 02:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:46.077 02:14:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:46.077 02:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:46.077 02:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:46.334 02:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:46.590 [ 00:13:46.590 { 00:13:46.591 "name": "BaseBdev2", 00:13:46.591 "aliases": [ 00:13:46.591 "defd1768-1260-11ef-99fd-bfc7c66e2865" 00:13:46.591 ], 00:13:46.591 "product_name": "Malloc disk", 00:13:46.591 "block_size": 512, 00:13:46.591 "num_blocks": 65536, 00:13:46.591 "uuid": "defd1768-1260-11ef-99fd-bfc7c66e2865", 00:13:46.591 "assigned_rate_limits": { 00:13:46.591 "rw_ios_per_sec": 0, 00:13:46.591 "rw_mbytes_per_sec": 0, 00:13:46.591 "r_mbytes_per_sec": 0, 00:13:46.591 "w_mbytes_per_sec": 0 00:13:46.591 }, 00:13:46.591 "claimed": true, 00:13:46.591 "claim_type": "exclusive_write", 00:13:46.591 "zoned": false, 00:13:46.591 "supported_io_types": { 00:13:46.591 "read": true, 00:13:46.591 "write": true, 00:13:46.591 "unmap": true, 00:13:46.591 "write_zeroes": true, 00:13:46.591 "flush": true, 00:13:46.591 "reset": true, 00:13:46.591 "compare": false, 00:13:46.591 "compare_and_write": false, 00:13:46.591 "abort": true, 00:13:46.591 "nvme_admin": false, 00:13:46.591 "nvme_io": false 00:13:46.591 }, 00:13:46.591 "memory_domains": [ 00:13:46.591 { 00:13:46.591 "dma_device_id": "system", 00:13:46.591 "dma_device_type": 1 00:13:46.591 }, 00:13:46.591 { 00:13:46.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.591 "dma_device_type": 2 00:13:46.591 } 00:13:46.591 ], 00:13:46.591 "driver_specific": {} 00:13:46.591 } 00:13:46.591 ] 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:46.591 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.848 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:46.848 "name": "Existed_Raid", 00:13:46.848 "uuid": "defd1dfe-1260-11ef-99fd-bfc7c66e2865", 00:13:46.848 "strip_size_kb": 0, 00:13:46.848 "state": "online", 00:13:46.848 "raid_level": "raid1", 00:13:46.848 "superblock": false, 00:13:46.848 "num_base_bdevs": 2, 00:13:46.848 "num_base_bdevs_discovered": 2, 00:13:46.848 "num_base_bdevs_operational": 2, 00:13:46.848 "base_bdevs_list": [ 00:13:46.848 { 00:13:46.848 "name": "BaseBdev1", 00:13:46.848 "uuid": "dd6850ff-1260-11ef-99fd-bfc7c66e2865", 00:13:46.848 "is_configured": true, 00:13:46.848 "data_offset": 0, 00:13:46.848 "data_size": 65536 00:13:46.848 }, 00:13:46.848 { 00:13:46.848 "name": "BaseBdev2", 00:13:46.848 "uuid": "defd1768-1260-11ef-99fd-bfc7c66e2865", 00:13:46.848 "is_configured": true, 00:13:46.848 "data_offset": 0, 00:13:46.848 "data_size": 65536 00:13:46.848 } 00:13:46.848 ] 00:13:46.848 }' 00:13:46.848 02:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.848 02:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:47.125 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:47.388 [2024-05-15 02:14:35.230732] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.388 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:47.388 "name": "Existed_Raid", 00:13:47.388 "aliases": [ 00:13:47.388 "defd1dfe-1260-11ef-99fd-bfc7c66e2865" 00:13:47.388 ], 00:13:47.388 "product_name": "Raid Volume", 00:13:47.388 "block_size": 512, 00:13:47.388 "num_blocks": 65536, 00:13:47.388 "uuid": "defd1dfe-1260-11ef-99fd-bfc7c66e2865", 00:13:47.388 "assigned_rate_limits": { 00:13:47.388 "rw_ios_per_sec": 0, 00:13:47.388 "rw_mbytes_per_sec": 0, 00:13:47.388 "r_mbytes_per_sec": 0, 00:13:47.388 "w_mbytes_per_sec": 0 00:13:47.388 }, 00:13:47.388 "claimed": false, 00:13:47.388 "zoned": false, 00:13:47.388 "supported_io_types": { 00:13:47.388 "read": true, 00:13:47.388 "write": true, 00:13:47.388 "unmap": false, 00:13:47.388 "write_zeroes": true, 00:13:47.388 "flush": false, 00:13:47.388 "reset": true, 00:13:47.388 "compare": false, 00:13:47.388 "compare_and_write": false, 00:13:47.388 "abort": false, 00:13:47.388 "nvme_admin": false, 00:13:47.388 "nvme_io": false 00:13:47.388 }, 00:13:47.388 "memory_domains": [ 00:13:47.388 { 00:13:47.388 "dma_device_id": 
"system", 00:13:47.388 "dma_device_type": 1 00:13:47.388 }, 00:13:47.388 { 00:13:47.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.388 "dma_device_type": 2 00:13:47.388 }, 00:13:47.388 { 00:13:47.388 "dma_device_id": "system", 00:13:47.388 "dma_device_type": 1 00:13:47.388 }, 00:13:47.388 { 00:13:47.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.388 "dma_device_type": 2 00:13:47.388 } 00:13:47.388 ], 00:13:47.388 "driver_specific": { 00:13:47.388 "raid": { 00:13:47.388 "uuid": "defd1dfe-1260-11ef-99fd-bfc7c66e2865", 00:13:47.388 "strip_size_kb": 0, 00:13:47.388 "state": "online", 00:13:47.388 "raid_level": "raid1", 00:13:47.388 "superblock": false, 00:13:47.388 "num_base_bdevs": 2, 00:13:47.388 "num_base_bdevs_discovered": 2, 00:13:47.388 "num_base_bdevs_operational": 2, 00:13:47.388 "base_bdevs_list": [ 00:13:47.388 { 00:13:47.388 "name": "BaseBdev1", 00:13:47.388 "uuid": "dd6850ff-1260-11ef-99fd-bfc7c66e2865", 00:13:47.388 "is_configured": true, 00:13:47.388 "data_offset": 0, 00:13:47.388 "data_size": 65536 00:13:47.388 }, 00:13:47.388 { 00:13:47.388 "name": "BaseBdev2", 00:13:47.388 "uuid": "defd1768-1260-11ef-99fd-bfc7c66e2865", 00:13:47.388 "is_configured": true, 00:13:47.388 "data_offset": 0, 00:13:47.388 "data_size": 65536 00:13:47.388 } 00:13:47.388 ] 00:13:47.388 } 00:13:47.388 } 00:13:47.388 }' 00:13:47.388 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:47.388 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:47.388 BaseBdev2' 00:13:47.388 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:47.388 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:47.388 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:47.647 "name": "BaseBdev1", 00:13:47.647 "aliases": [ 00:13:47.647 "dd6850ff-1260-11ef-99fd-bfc7c66e2865" 00:13:47.647 ], 00:13:47.647 "product_name": "Malloc disk", 00:13:47.647 "block_size": 512, 00:13:47.647 "num_blocks": 65536, 00:13:47.647 "uuid": "dd6850ff-1260-11ef-99fd-bfc7c66e2865", 00:13:47.647 "assigned_rate_limits": { 00:13:47.647 "rw_ios_per_sec": 0, 00:13:47.647 "rw_mbytes_per_sec": 0, 00:13:47.647 "r_mbytes_per_sec": 0, 00:13:47.647 "w_mbytes_per_sec": 0 00:13:47.647 }, 00:13:47.647 "claimed": true, 00:13:47.647 "claim_type": "exclusive_write", 00:13:47.647 "zoned": false, 00:13:47.647 "supported_io_types": { 00:13:47.647 "read": true, 00:13:47.647 "write": true, 00:13:47.647 "unmap": true, 00:13:47.647 "write_zeroes": true, 00:13:47.647 "flush": true, 00:13:47.647 "reset": true, 00:13:47.647 "compare": false, 00:13:47.647 "compare_and_write": false, 00:13:47.647 "abort": true, 00:13:47.647 "nvme_admin": false, 00:13:47.647 "nvme_io": false 00:13:47.647 }, 00:13:47.647 "memory_domains": [ 00:13:47.647 { 00:13:47.647 "dma_device_id": "system", 00:13:47.647 "dma_device_type": 1 00:13:47.647 }, 00:13:47.647 { 00:13:47.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.647 "dma_device_type": 2 00:13:47.647 } 00:13:47.647 ], 00:13:47.647 "driver_specific": {} 00:13:47.647 }' 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:47.647 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:47.905 "name": "BaseBdev2", 00:13:47.905 "aliases": [ 00:13:47.905 "defd1768-1260-11ef-99fd-bfc7c66e2865" 00:13:47.905 ], 00:13:47.905 "product_name": "Malloc disk", 00:13:47.905 "block_size": 512, 00:13:47.905 "num_blocks": 65536, 00:13:47.905 "uuid": "defd1768-1260-11ef-99fd-bfc7c66e2865", 00:13:47.905 "assigned_rate_limits": { 00:13:47.905 "rw_ios_per_sec": 0, 00:13:47.905 "rw_mbytes_per_sec": 0, 00:13:47.905 "r_mbytes_per_sec": 0, 00:13:47.905 "w_mbytes_per_sec": 0 00:13:47.905 }, 00:13:47.905 "claimed": true, 00:13:47.905 "claim_type": "exclusive_write", 00:13:47.905 "zoned": false, 00:13:47.905 "supported_io_types": { 00:13:47.905 "read": true, 00:13:47.905 "write": true, 00:13:47.905 "unmap": true, 00:13:47.905 "write_zeroes": true, 00:13:47.905 "flush": true, 00:13:47.905 "reset": true, 00:13:47.905 "compare": false, 00:13:47.905 "compare_and_write": false, 00:13:47.905 "abort": true, 00:13:47.905 "nvme_admin": false, 00:13:47.905 "nvme_io": false 00:13:47.905 }, 00:13:47.905 "memory_domains": [ 00:13:47.905 { 00:13:47.905 "dma_device_id": "system", 00:13:47.905 "dma_device_type": 1 00:13:47.905 }, 00:13:47.905 { 00:13:47.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.905 "dma_device_type": 2 00:13:47.905 } 00:13:47.905 ], 00:13:47.905 "driver_specific": {} 00:13:47.905 }' 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:47.905 02:14:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:47.905 02:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:48.171 [2024-05-15 02:14:36.114924] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.171 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.436 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.436 "name": "Existed_Raid", 00:13:48.436 "uuid": "defd1dfe-1260-11ef-99fd-bfc7c66e2865", 00:13:48.436 "strip_size_kb": 0, 00:13:48.436 "state": "online", 00:13:48.436 "raid_level": "raid1", 00:13:48.436 "superblock": false, 00:13:48.436 "num_base_bdevs": 2, 00:13:48.436 "num_base_bdevs_discovered": 1, 00:13:48.436 "num_base_bdevs_operational": 1, 00:13:48.436 "base_bdevs_list": [ 00:13:48.436 { 00:13:48.436 "name": null, 00:13:48.436 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:48.436 "is_configured": false, 00:13:48.436 "data_offset": 0, 00:13:48.436 "data_size": 65536 00:13:48.436 }, 00:13:48.436 { 00:13:48.436 "name": "BaseBdev2", 00:13:48.436 "uuid": "defd1768-1260-11ef-99fd-bfc7c66e2865", 00:13:48.436 "is_configured": true, 00:13:48.436 "data_offset": 0, 00:13:48.436 "data_size": 65536 00:13:48.436 } 00:13:48.436 ] 00:13:48.436 }' 00:13:48.436 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.436 02:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.002 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:49.002 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:49.002 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.002 02:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:49.260 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:49.260 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.260 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:49.260 [2024-05-15 02:14:37.232049] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.260 [2024-05-15 02:14:37.232094] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.260 [2024-05-15 02:14:37.237041] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.260 [2024-05-15 02:14:37.237057] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.260 [2024-05-15 02:14:37.237061] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cad8a00 name Existed_Raid, state offline 00:13:49.260 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:49.260 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:49.260 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.260 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 50605 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 50605 ']' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 50605 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' 
FreeBSD = Linux ']' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 50605 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:49.826 killing process with pid 50605 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50605' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 50605 00:13:49.826 [2024-05-15 02:14:37.544454] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.826 [2024-05-15 02:14:37.544501] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 50605 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:13:49.826 00:13:49.826 real 0m9.245s 00:13:49.826 user 0m16.211s 00:13:49.826 sys 0m1.562s 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.826 ************************************ 00:13:49.826 END TEST raid_state_function_test 00:13:49.826 ************************************ 00:13:49.826 02:14:37 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:49.826 02:14:37 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:49.826 02:14:37 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.826 02:14:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.826 ************************************ 00:13:49.826 START TEST raid_state_function_test_sb 00:13:49.826 ************************************ 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:13:49.826 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=50880 00:13:49.827 Process raid pid: 50880 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50880' 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 50880 /var/tmp/spdk-raid.sock 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 50880 ']' 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.827 02:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.827 [2024-05-15 02:14:37.756096] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:49.827 [2024-05-15 02:14:37.756338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:50.392 EAL: TSC is not safe to use in SMP mode 00:13:50.392 EAL: TSC is not invariant 00:13:50.392 [2024-05-15 02:14:38.232801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.392 [2024-05-15 02:14:38.333673] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:50.392 [2024-05-15 02:14:38.336372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.392 [2024-05-15 02:14:38.337322] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.392 [2024-05-15 02:14:38.337338] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.987 02:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.987 02:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:13:50.987 02:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:51.246 [2024-05-15 02:14:39.074345] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.246 [2024-05-15 02:14:39.074410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.246 [2024-05-15 02:14:39.074415] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.246 [2024-05-15 02:14:39.074424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.246 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.504 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:51.504 "name": "Existed_Raid", 00:13:51.504 "uuid": "e211a957-1260-11ef-99fd-bfc7c66e2865", 00:13:51.504 "strip_size_kb": 0, 00:13:51.504 "state": "configuring", 00:13:51.504 "raid_level": "raid1", 00:13:51.504 "superblock": true, 00:13:51.504 "num_base_bdevs": 2, 00:13:51.504 "num_base_bdevs_discovered": 0, 00:13:51.504 "num_base_bdevs_operational": 2, 00:13:51.504 "base_bdevs_list": [ 00:13:51.504 { 00:13:51.504 "name": "BaseBdev1", 00:13:51.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.504 "is_configured": false, 00:13:51.504 "data_offset": 0, 00:13:51.504 "data_size": 0 00:13:51.504 }, 
00:13:51.504 { 00:13:51.504 "name": "BaseBdev2", 00:13:51.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.504 "is_configured": false, 00:13:51.504 "data_offset": 0, 00:13:51.504 "data_size": 0 00:13:51.504 } 00:13:51.504 ] 00:13:51.504 }' 00:13:51.504 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:51.504 02:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.763 02:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:52.329 [2024-05-15 02:14:40.070538] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.329 [2024-05-15 02:14:40.070571] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bfb3500 name Existed_Raid, state configuring 00:13:52.329 02:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:52.329 [2024-05-15 02:14:40.298605] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.329 [2024-05-15 02:14:40.298674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.329 [2024-05-15 02:14:40.298679] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.329 [2024-05-15 02:14:40.298688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.329 02:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:52.592 [2024-05-15 02:14:40.571592] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.592 BaseBdev1 00:13:52.592 02:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:52.592 02:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:52.592 02:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:52.592 02:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:52.592 02:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:52.592 02:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:52.592 02:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:53.161 02:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:53.161 [ 00:13:53.161 { 00:13:53.161 "name": "BaseBdev1", 00:13:53.161 "aliases": [ 00:13:53.161 "e2f5fb17-1260-11ef-99fd-bfc7c66e2865" 00:13:53.161 ], 00:13:53.161 "product_name": "Malloc disk", 00:13:53.161 "block_size": 512, 00:13:53.161 "num_blocks": 65536, 00:13:53.161 "uuid": "e2f5fb17-1260-11ef-99fd-bfc7c66e2865", 00:13:53.161 "assigned_rate_limits": { 00:13:53.161 "rw_ios_per_sec": 0, 00:13:53.161 
"rw_mbytes_per_sec": 0, 00:13:53.161 "r_mbytes_per_sec": 0, 00:13:53.161 "w_mbytes_per_sec": 0 00:13:53.161 }, 00:13:53.161 "claimed": true, 00:13:53.161 "claim_type": "exclusive_write", 00:13:53.161 "zoned": false, 00:13:53.161 "supported_io_types": { 00:13:53.161 "read": true, 00:13:53.161 "write": true, 00:13:53.161 "unmap": true, 00:13:53.161 "write_zeroes": true, 00:13:53.161 "flush": true, 00:13:53.161 "reset": true, 00:13:53.161 "compare": false, 00:13:53.161 "compare_and_write": false, 00:13:53.161 "abort": true, 00:13:53.161 "nvme_admin": false, 00:13:53.161 "nvme_io": false 00:13:53.161 }, 00:13:53.161 "memory_domains": [ 00:13:53.161 { 00:13:53.161 "dma_device_id": "system", 00:13:53.161 "dma_device_type": 1 00:13:53.161 }, 00:13:53.161 { 00:13:53.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.161 "dma_device_type": 2 00:13:53.161 } 00:13:53.161 ], 00:13:53.161 "driver_specific": {} 00:13:53.161 } 00:13:53.161 ] 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.161 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.750 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:53.750 "name": "Existed_Raid", 00:13:53.750 "uuid": "e2cc77fc-1260-11ef-99fd-bfc7c66e2865", 00:13:53.750 "strip_size_kb": 0, 00:13:53.750 "state": "configuring", 00:13:53.750 "raid_level": "raid1", 00:13:53.750 "superblock": true, 00:13:53.750 "num_base_bdevs": 2, 00:13:53.750 "num_base_bdevs_discovered": 1, 00:13:53.750 "num_base_bdevs_operational": 2, 00:13:53.750 "base_bdevs_list": [ 00:13:53.750 { 00:13:53.750 "name": "BaseBdev1", 00:13:53.750 "uuid": "e2f5fb17-1260-11ef-99fd-bfc7c66e2865", 00:13:53.750 "is_configured": true, 00:13:53.750 "data_offset": 2048, 00:13:53.750 "data_size": 63488 00:13:53.750 }, 00:13:53.750 { 00:13:53.750 "name": "BaseBdev2", 00:13:53.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.750 "is_configured": false, 00:13:53.750 "data_offset": 0, 00:13:53.750 "data_size": 0 00:13:53.750 } 00:13:53.750 ] 00:13:53.750 }' 
00:13:53.750 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:53.750 02:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.008 02:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:54.266 [2024-05-15 02:14:42.126987] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.266 [2024-05-15 02:14:42.127030] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bfb3500 name Existed_Raid, state configuring 00:13:54.266 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:54.523 [2024-05-15 02:14:42.367051] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.523 [2024-05-15 02:14:42.367765] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.523 [2024-05-15 02:14:42.367813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.523 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.781 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:54.781 "name": "Existed_Raid", 00:13:54.781 "uuid": "e408166e-1260-11ef-99fd-bfc7c66e2865", 00:13:54.781 "strip_size_kb": 0, 00:13:54.781 "state": "configuring", 00:13:54.781 "raid_level": "raid1", 00:13:54.781 "superblock": true, 00:13:54.781 "num_base_bdevs": 2, 00:13:54.781 "num_base_bdevs_discovered": 1, 00:13:54.781 "num_base_bdevs_operational": 2, 00:13:54.781 "base_bdevs_list": [ 00:13:54.781 { 
00:13:54.781 "name": "BaseBdev1", 00:13:54.781 "uuid": "e2f5fb17-1260-11ef-99fd-bfc7c66e2865", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 }, 00:13:54.781 { 00:13:54.781 "name": "BaseBdev2", 00:13:54.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.781 "is_configured": false, 00:13:54.781 "data_offset": 0, 00:13:54.781 "data_size": 0 00:13:54.781 } 00:13:54.781 ] 00:13:54.781 }' 00:13:54.781 02:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:54.781 02:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.038 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.295 [2024-05-15 02:14:43.299360] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.295 [2024-05-15 02:14:43.299426] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bfb3a00 00:13:55.295 [2024-05-15 02:14:43.299432] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:55.295 [2024-05-15 02:14:43.299451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c016ec0 00:13:55.295 [2024-05-15 02:14:43.299488] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bfb3a00 00:13:55.295 [2024-05-15 02:14:43.299508] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bfb3a00 00:13:55.295 [2024-05-15 02:14:43.299529] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.295 BaseBdev2 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:55.552 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.810 [ 00:13:55.810 { 00:13:55.810 "name": "BaseBdev2", 00:13:55.810 "aliases": [ 00:13:55.810 "e4965492-1260-11ef-99fd-bfc7c66e2865" 00:13:55.810 ], 00:13:55.810 "product_name": "Malloc disk", 00:13:55.810 "block_size": 512, 00:13:55.810 "num_blocks": 65536, 00:13:55.810 "uuid": "e4965492-1260-11ef-99fd-bfc7c66e2865", 00:13:55.810 "assigned_rate_limits": { 00:13:55.810 "rw_ios_per_sec": 0, 00:13:55.810 "rw_mbytes_per_sec": 0, 00:13:55.810 "r_mbytes_per_sec": 0, 00:13:55.810 "w_mbytes_per_sec": 0 00:13:55.810 }, 00:13:55.810 "claimed": true, 00:13:55.810 "claim_type": "exclusive_write", 00:13:55.810 "zoned": false, 00:13:55.810 "supported_io_types": { 
00:13:55.810 "read": true, 00:13:55.810 "write": true, 00:13:55.810 "unmap": true, 00:13:55.810 "write_zeroes": true, 00:13:55.810 "flush": true, 00:13:55.810 "reset": true, 00:13:55.810 "compare": false, 00:13:55.810 "compare_and_write": false, 00:13:55.810 "abort": true, 00:13:55.810 "nvme_admin": false, 00:13:55.810 "nvme_io": false 00:13:55.810 }, 00:13:55.810 "memory_domains": [ 00:13:55.810 { 00:13:55.810 "dma_device_id": "system", 00:13:55.810 "dma_device_type": 1 00:13:55.810 }, 00:13:55.810 { 00:13:55.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.810 "dma_device_type": 2 00:13:55.810 } 00:13:55.810 ], 00:13:55.810 "driver_specific": {} 00:13:55.810 } 00:13:55.810 ] 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.067 02:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.324 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.324 "name": "Existed_Raid", 00:13:56.324 "uuid": "e408166e-1260-11ef-99fd-bfc7c66e2865", 00:13:56.324 "strip_size_kb": 0, 00:13:56.324 "state": "online", 00:13:56.324 "raid_level": "raid1", 00:13:56.324 "superblock": true, 00:13:56.324 "num_base_bdevs": 2, 00:13:56.324 "num_base_bdevs_discovered": 2, 00:13:56.324 "num_base_bdevs_operational": 2, 00:13:56.324 "base_bdevs_list": [ 00:13:56.324 { 00:13:56.324 "name": "BaseBdev1", 00:13:56.324 "uuid": "e2f5fb17-1260-11ef-99fd-bfc7c66e2865", 00:13:56.324 "is_configured": true, 00:13:56.324 "data_offset": 2048, 00:13:56.324 "data_size": 63488 00:13:56.324 }, 00:13:56.324 { 00:13:56.324 "name": "BaseBdev2", 00:13:56.324 "uuid": "e4965492-1260-11ef-99fd-bfc7c66e2865", 00:13:56.324 "is_configured": true, 00:13:56.324 "data_offset": 2048, 00:13:56.324 "data_size": 63488 00:13:56.324 } 00:13:56.324 ] 00:13:56.324 }' 00:13:56.324 02:14:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.324 02:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:56.581 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:56.839 [2024-05-15 02:14:44.675556] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.839 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:56.839 "name": "Existed_Raid", 00:13:56.839 "aliases": [ 00:13:56.839 "e408166e-1260-11ef-99fd-bfc7c66e2865" 00:13:56.839 ], 00:13:56.839 "product_name": "Raid Volume", 00:13:56.839 "block_size": 512, 00:13:56.839 "num_blocks": 63488, 00:13:56.839 "uuid": "e408166e-1260-11ef-99fd-bfc7c66e2865", 00:13:56.839 "assigned_rate_limits": { 00:13:56.839 "rw_ios_per_sec": 0, 00:13:56.839 "rw_mbytes_per_sec": 0, 00:13:56.839 "r_mbytes_per_sec": 0, 00:13:56.839 "w_mbytes_per_sec": 0 00:13:56.839 }, 00:13:56.839 "claimed": false, 00:13:56.839 "zoned": false, 00:13:56.839 "supported_io_types": { 00:13:56.839 "read": true, 00:13:56.839 "write": true, 00:13:56.839 "unmap": false, 00:13:56.839 "write_zeroes": true, 00:13:56.839 "flush": false, 00:13:56.839 "reset": true, 00:13:56.839 "compare": false, 00:13:56.839 "compare_and_write": false, 00:13:56.839 "abort": false, 00:13:56.839 "nvme_admin": false, 00:13:56.839 "nvme_io": false 00:13:56.839 }, 00:13:56.839 "memory_domains": [ 00:13:56.839 { 00:13:56.839 "dma_device_id": "system", 00:13:56.839 "dma_device_type": 1 00:13:56.839 }, 00:13:56.839 { 00:13:56.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.839 "dma_device_type": 2 00:13:56.839 }, 00:13:56.839 { 00:13:56.839 "dma_device_id": "system", 00:13:56.839 "dma_device_type": 1 00:13:56.839 }, 00:13:56.839 { 00:13:56.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.839 "dma_device_type": 2 00:13:56.839 } 00:13:56.839 ], 00:13:56.839 "driver_specific": { 00:13:56.839 "raid": { 00:13:56.839 "uuid": "e408166e-1260-11ef-99fd-bfc7c66e2865", 00:13:56.839 "strip_size_kb": 0, 00:13:56.839 "state": "online", 00:13:56.839 "raid_level": "raid1", 00:13:56.839 "superblock": true, 00:13:56.839 "num_base_bdevs": 2, 00:13:56.839 "num_base_bdevs_discovered": 2, 00:13:56.839 "num_base_bdevs_operational": 2, 00:13:56.839 "base_bdevs_list": [ 00:13:56.839 { 00:13:56.839 "name": "BaseBdev1", 00:13:56.839 "uuid": "e2f5fb17-1260-11ef-99fd-bfc7c66e2865", 00:13:56.839 "is_configured": true, 00:13:56.839 "data_offset": 2048, 00:13:56.839 "data_size": 63488 00:13:56.839 }, 00:13:56.839 { 00:13:56.839 "name": "BaseBdev2", 00:13:56.839 
"uuid": "e4965492-1260-11ef-99fd-bfc7c66e2865", 00:13:56.839 "is_configured": true, 00:13:56.839 "data_offset": 2048, 00:13:56.839 "data_size": 63488 00:13:56.839 } 00:13:56.839 ] 00:13:56.839 } 00:13:56.839 } 00:13:56.839 }' 00:13:56.839 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:56.839 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:56.839 BaseBdev2' 00:13:56.839 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:56.839 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:56.839 02:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:57.096 "name": "BaseBdev1", 00:13:57.096 "aliases": [ 00:13:57.096 "e2f5fb17-1260-11ef-99fd-bfc7c66e2865" 00:13:57.096 ], 00:13:57.096 "product_name": "Malloc disk", 00:13:57.096 "block_size": 512, 00:13:57.096 "num_blocks": 65536, 00:13:57.096 "uuid": "e2f5fb17-1260-11ef-99fd-bfc7c66e2865", 00:13:57.096 "assigned_rate_limits": { 00:13:57.096 "rw_ios_per_sec": 0, 00:13:57.096 "rw_mbytes_per_sec": 0, 00:13:57.096 "r_mbytes_per_sec": 0, 00:13:57.096 "w_mbytes_per_sec": 0 00:13:57.096 }, 00:13:57.096 "claimed": true, 00:13:57.096 "claim_type": "exclusive_write", 00:13:57.096 "zoned": false, 00:13:57.096 "supported_io_types": { 00:13:57.096 "read": true, 00:13:57.096 "write": true, 00:13:57.096 "unmap": true, 00:13:57.096 "write_zeroes": true, 00:13:57.096 "flush": true, 00:13:57.096 "reset": true, 00:13:57.096 "compare": false, 00:13:57.096 "compare_and_write": false, 00:13:57.096 "abort": true, 00:13:57.096 "nvme_admin": false, 00:13:57.096 "nvme_io": false 00:13:57.096 }, 00:13:57.096 "memory_domains": [ 00:13:57.096 { 00:13:57.096 "dma_device_id": "system", 00:13:57.096 "dma_device_type": 1 00:13:57.096 }, 00:13:57.096 { 00:13:57.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.096 "dma_device_type": 2 00:13:57.096 } 00:13:57.096 ], 00:13:57.096 "driver_specific": {} 00:13:57.096 }' 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:57.096 
02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:57.096 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:57.661 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:57.661 "name": "BaseBdev2", 00:13:57.661 "aliases": [ 00:13:57.661 "e4965492-1260-11ef-99fd-bfc7c66e2865" 00:13:57.661 ], 00:13:57.661 "product_name": "Malloc disk", 00:13:57.661 "block_size": 512, 00:13:57.661 "num_blocks": 65536, 00:13:57.661 "uuid": "e4965492-1260-11ef-99fd-bfc7c66e2865", 00:13:57.661 "assigned_rate_limits": { 00:13:57.661 "rw_ios_per_sec": 0, 00:13:57.661 "rw_mbytes_per_sec": 0, 00:13:57.661 "r_mbytes_per_sec": 0, 00:13:57.661 "w_mbytes_per_sec": 0 00:13:57.661 }, 00:13:57.661 "claimed": true, 00:13:57.661 "claim_type": "exclusive_write", 00:13:57.661 "zoned": false, 00:13:57.661 "supported_io_types": { 00:13:57.661 "read": true, 00:13:57.661 "write": true, 00:13:57.661 "unmap": true, 00:13:57.661 "write_zeroes": true, 00:13:57.661 "flush": true, 00:13:57.661 "reset": true, 00:13:57.661 "compare": false, 00:13:57.661 "compare_and_write": false, 00:13:57.661 "abort": true, 00:13:57.661 "nvme_admin": false, 00:13:57.661 "nvme_io": false 00:13:57.661 }, 00:13:57.662 "memory_domains": [ 00:13:57.662 { 00:13:57.662 "dma_device_id": "system", 00:13:57.662 "dma_device_type": 1 00:13:57.662 }, 00:13:57.662 { 00:13:57.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.662 "dma_device_type": 2 00:13:57.662 } 00:13:57.662 ], 00:13:57.662 "driver_specific": {} 00:13:57.662 }' 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:57.662 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:57.662 [2024-05-15 02:14:45.663729] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.919 02:14:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.919 02:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.178 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.178 "name": "Existed_Raid", 00:13:58.178 "uuid": "e408166e-1260-11ef-99fd-bfc7c66e2865", 00:13:58.178 "strip_size_kb": 0, 00:13:58.178 "state": "online", 00:13:58.178 "raid_level": "raid1", 00:13:58.178 "superblock": true, 00:13:58.178 "num_base_bdevs": 2, 00:13:58.178 "num_base_bdevs_discovered": 1, 00:13:58.178 "num_base_bdevs_operational": 1, 00:13:58.178 "base_bdevs_list": [ 00:13:58.178 { 00:13:58.178 "name": null, 00:13:58.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.178 "is_configured": false, 00:13:58.178 "data_offset": 2048, 00:13:58.178 "data_size": 63488 00:13:58.178 }, 00:13:58.178 { 00:13:58.178 "name": "BaseBdev2", 00:13:58.178 "uuid": "e4965492-1260-11ef-99fd-bfc7c66e2865", 00:13:58.178 "is_configured": true, 00:13:58.178 "data_offset": 2048, 00:13:58.178 "data_size": 63488 00:13:58.178 } 00:13:58.178 ] 00:13:58.178 }' 00:13:58.178 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.178 02:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.743 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:58.743 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.743 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.743 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:59.000 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:59.000 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.000 02:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:59.258 [2024-05-15 02:14:47.261854] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.258 [2024-05-15 02:14:47.261903] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.258 [2024-05-15 02:14:47.266801] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.258 [2024-05-15 02:14:47.266820] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.258 [2024-05-15 02:14:47.266826] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bfb3a00 name Existed_Raid, state offline 00:13:59.514 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:59.514 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.514 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.514 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:59.514 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:59.514 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 50880 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 50880 ']' 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 50880 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 50880 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:59.771 killing process with pid 50880 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50880' 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 50880 00:13:59.771 [2024-05-15 02:14:47.535643] bdev_raid.c:1375:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 50880 00:13:59.771 [2024-05-15 02:14:47.535692] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:13:59.771 00:13:59.771 real 0m9.948s 00:13:59.771 user 0m17.511s 00:13:59.771 sys 0m1.641s 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.771 02:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.771 ************************************ 00:13:59.771 END TEST raid_state_function_test_sb 00:13:59.771 ************************************ 00:13:59.771 02:14:47 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:59.771 02:14:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:59.771 02:14:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:59.771 02:14:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.771 ************************************ 00:13:59.771 START TEST raid_superblock_test 00:13:59.771 ************************************ 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:59.771 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=51154 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 51154 /var/tmp/spdk-raid.sock 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 51154 ']' 00:13:59.772 02:14:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.772 02:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.772 [2024-05-15 02:14:47.731156] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:59.772 [2024-05-15 02:14:47.731331] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:14:00.336 EAL: TSC is not safe to use in SMP mode 00:14:00.336 EAL: TSC is not invariant 00:14:00.336 [2024-05-15 02:14:48.207503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.336 [2024-05-15 02:14:48.294357] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:00.336 [2024-05-15 02:14:48.296557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.336 [2024-05-15 02:14:48.297292] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.336 [2024-05-15 02:14:48.297304] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.899 02:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:01.155 malloc1 00:14:01.155 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:01.718 [2024-05-15 02:14:49.520910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:01.718 [2024-05-15 02:14:49.520998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.718 [2024-05-15 02:14:49.521601] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b212780 00:14:01.718 [2024-05-15 02:14:49.521626] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.718 [2024-05-15 02:14:49.522385] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.718 [2024-05-15 02:14:49.522415] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:01.718 pt1 00:14:01.718 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:01.718 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.718 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:01.718 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:01.718 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:01.719 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.719 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.719 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.719 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:01.975 malloc2 00:14:01.975 02:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.232 [2024-05-15 02:14:50.064999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:02.232 [2024-05-15 02:14:50.065076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.232 [2024-05-15 02:14:50.065112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b212c80 00:14:02.232 [2024-05-15 02:14:50.065123] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.232 [2024-05-15 02:14:50.065779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.232 [2024-05-15 02:14:50.065823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.232 pt2 00:14:02.232 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:02.232 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:02.232 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:02.490 [2024-05-15 02:14:50.333055] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:02.490 [2024-05-15 02:14:50.333537] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.490 [2024-05-15 02:14:50.333600] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b212f00 00:14:02.490 [2024-05-15 02:14:50.333606] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.490 [2024-05-15 02:14:50.333643] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82b275e20 00:14:02.490 [2024-05-15 02:14:50.333701] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b212f00 00:14:02.490 [2024-05-15 02:14:50.333705] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b212f00 00:14:02.490 [2024-05-15 02:14:50.333729] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.491 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.748 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:02.748 "name": "raid_bdev1", 00:14:02.748 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:02.748 "strip_size_kb": 0, 00:14:02.748 "state": "online", 00:14:02.748 "raid_level": "raid1", 00:14:02.748 "superblock": true, 00:14:02.748 "num_base_bdevs": 2, 00:14:02.748 "num_base_bdevs_discovered": 2, 00:14:02.748 "num_base_bdevs_operational": 2, 00:14:02.748 "base_bdevs_list": [ 00:14:02.748 { 00:14:02.748 "name": "pt1", 00:14:02.748 "uuid": "32d2ddff-1e4b-bb57-a84f-c33b808bcce3", 00:14:02.748 "is_configured": true, 00:14:02.748 "data_offset": 2048, 00:14:02.748 "data_size": 63488 00:14:02.748 }, 00:14:02.748 { 00:14:02.748 "name": "pt2", 00:14:02.748 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:02.748 "is_configured": true, 00:14:02.748 "data_offset": 2048, 00:14:02.748 "data_size": 63488 00:14:02.748 } 00:14:02.748 ] 00:14:02.748 }' 00:14:02.748 02:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:02.748 02:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.312 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:03.312 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:14:03.312 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:03.312 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:03.312 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:03.312 02:14:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:03.312 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:03.312 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:03.570 [2024-05-15 02:14:51.369265] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.570 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:03.570 "name": "raid_bdev1", 00:14:03.570 "aliases": [ 00:14:03.570 "e8c79a6f-1260-11ef-99fd-bfc7c66e2865" 00:14:03.570 ], 00:14:03.570 "product_name": "Raid Volume", 00:14:03.570 "block_size": 512, 00:14:03.570 "num_blocks": 63488, 00:14:03.570 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:03.570 "assigned_rate_limits": { 00:14:03.570 "rw_ios_per_sec": 0, 00:14:03.570 "rw_mbytes_per_sec": 0, 00:14:03.570 "r_mbytes_per_sec": 0, 00:14:03.570 "w_mbytes_per_sec": 0 00:14:03.570 }, 00:14:03.570 "claimed": false, 00:14:03.570 "zoned": false, 00:14:03.570 "supported_io_types": { 00:14:03.570 "read": true, 00:14:03.570 "write": true, 00:14:03.570 "unmap": false, 00:14:03.570 "write_zeroes": true, 00:14:03.570 "flush": false, 00:14:03.570 "reset": true, 00:14:03.570 "compare": false, 00:14:03.570 "compare_and_write": false, 00:14:03.570 "abort": false, 00:14:03.570 "nvme_admin": false, 00:14:03.570 "nvme_io": false 00:14:03.570 }, 00:14:03.570 "memory_domains": [ 00:14:03.570 { 00:14:03.570 "dma_device_id": "system", 00:14:03.570 "dma_device_type": 1 00:14:03.570 }, 00:14:03.570 { 00:14:03.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.570 "dma_device_type": 2 00:14:03.570 }, 00:14:03.570 { 00:14:03.570 "dma_device_id": "system", 00:14:03.570 "dma_device_type": 1 00:14:03.570 }, 00:14:03.570 { 00:14:03.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.570 "dma_device_type": 2 00:14:03.570 } 00:14:03.570 ], 00:14:03.570 "driver_specific": { 00:14:03.570 "raid": { 00:14:03.570 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:03.570 "strip_size_kb": 0, 00:14:03.570 "state": "online", 00:14:03.570 "raid_level": "raid1", 00:14:03.570 "superblock": true, 00:14:03.570 "num_base_bdevs": 2, 00:14:03.570 "num_base_bdevs_discovered": 2, 00:14:03.570 "num_base_bdevs_operational": 2, 00:14:03.570 "base_bdevs_list": [ 00:14:03.570 { 00:14:03.570 "name": "pt1", 00:14:03.570 "uuid": "32d2ddff-1e4b-bb57-a84f-c33b808bcce3", 00:14:03.570 "is_configured": true, 00:14:03.570 "data_offset": 2048, 00:14:03.570 "data_size": 63488 00:14:03.570 }, 00:14:03.570 { 00:14:03.570 "name": "pt2", 00:14:03.570 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 2048, 00:14:03.571 "data_size": 63488 00:14:03.571 } 00:14:03.571 ] 00:14:03.571 } 00:14:03.571 } 00:14:03.571 }' 00:14:03.571 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.571 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:14:03.571 pt2' 00:14:03.571 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:03.571 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:03.571 02:14:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:03.829 "name": "pt1", 00:14:03.829 "aliases": [ 00:14:03.829 "32d2ddff-1e4b-bb57-a84f-c33b808bcce3" 00:14:03.829 ], 00:14:03.829 "product_name": "passthru", 00:14:03.829 "block_size": 512, 00:14:03.829 "num_blocks": 65536, 00:14:03.829 "uuid": "32d2ddff-1e4b-bb57-a84f-c33b808bcce3", 00:14:03.829 "assigned_rate_limits": { 00:14:03.829 "rw_ios_per_sec": 0, 00:14:03.829 "rw_mbytes_per_sec": 0, 00:14:03.829 "r_mbytes_per_sec": 0, 00:14:03.829 "w_mbytes_per_sec": 0 00:14:03.829 }, 00:14:03.829 "claimed": true, 00:14:03.829 "claim_type": "exclusive_write", 00:14:03.829 "zoned": false, 00:14:03.829 "supported_io_types": { 00:14:03.829 "read": true, 00:14:03.829 "write": true, 00:14:03.829 "unmap": true, 00:14:03.829 "write_zeroes": true, 00:14:03.829 "flush": true, 00:14:03.829 "reset": true, 00:14:03.829 "compare": false, 00:14:03.829 "compare_and_write": false, 00:14:03.829 "abort": true, 00:14:03.829 "nvme_admin": false, 00:14:03.829 "nvme_io": false 00:14:03.829 }, 00:14:03.829 "memory_domains": [ 00:14:03.829 { 00:14:03.829 "dma_device_id": "system", 00:14:03.829 "dma_device_type": 1 00:14:03.829 }, 00:14:03.829 { 00:14:03.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.829 "dma_device_type": 2 00:14:03.829 } 00:14:03.829 ], 00:14:03.829 "driver_specific": { 00:14:03.829 "passthru": { 00:14:03.829 "name": "pt1", 00:14:03.829 "base_bdev_name": "malloc1" 00:14:03.829 } 00:14:03.829 } 00:14:03.829 }' 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:03.829 02:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:04.397 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:04.398 "name": "pt2", 00:14:04.398 "aliases": [ 00:14:04.398 "a7bb1bb2-ac31-c650-882f-2412f38860cd" 00:14:04.398 ], 00:14:04.398 "product_name": "passthru", 00:14:04.398 "block_size": 512, 00:14:04.398 "num_blocks": 65536, 00:14:04.398 "uuid": 
"a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:04.398 "assigned_rate_limits": { 00:14:04.398 "rw_ios_per_sec": 0, 00:14:04.398 "rw_mbytes_per_sec": 0, 00:14:04.398 "r_mbytes_per_sec": 0, 00:14:04.398 "w_mbytes_per_sec": 0 00:14:04.398 }, 00:14:04.398 "claimed": true, 00:14:04.398 "claim_type": "exclusive_write", 00:14:04.398 "zoned": false, 00:14:04.398 "supported_io_types": { 00:14:04.398 "read": true, 00:14:04.398 "write": true, 00:14:04.398 "unmap": true, 00:14:04.398 "write_zeroes": true, 00:14:04.398 "flush": true, 00:14:04.398 "reset": true, 00:14:04.398 "compare": false, 00:14:04.398 "compare_and_write": false, 00:14:04.398 "abort": true, 00:14:04.398 "nvme_admin": false, 00:14:04.398 "nvme_io": false 00:14:04.398 }, 00:14:04.398 "memory_domains": [ 00:14:04.398 { 00:14:04.398 "dma_device_id": "system", 00:14:04.398 "dma_device_type": 1 00:14:04.398 }, 00:14:04.398 { 00:14:04.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.398 "dma_device_type": 2 00:14:04.398 } 00:14:04.398 ], 00:14:04.398 "driver_specific": { 00:14:04.398 "passthru": { 00:14:04.398 "name": "pt2", 00:14:04.398 "base_bdev_name": "malloc2" 00:14:04.398 } 00:14:04.398 } 00:14:04.398 }' 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:04.398 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:04.656 [2024-05-15 02:14:52.453468] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.656 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e8c79a6f-1260-11ef-99fd-bfc7c66e2865 00:14:04.656 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e8c79a6f-1260-11ef-99fd-bfc7c66e2865 ']' 00:14:04.656 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:04.916 [2024-05-15 02:14:52.737466] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.916 [2024-05-15 02:14:52.737496] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.916 [2024-05-15 02:14:52.737532] bdev_raid.c: 
453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.916 [2024-05-15 02:14:52.737547] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.916 [2024-05-15 02:14:52.737551] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b212f00 name raid_bdev1, state offline 00:14:04.916 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.916 02:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:05.174 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:05.174 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:05.174 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.174 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:05.432 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.432 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:06.000 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:06.000 02:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:06.258 02:14:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:06.516 [2024-05-15 02:14:54.325763] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:06.516 [2024-05-15 02:14:54.326243] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:06.516 [2024-05-15 02:14:54.326263] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:06.516 [2024-05-15 02:14:54.326310] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:06.516 [2024-05-15 02:14:54.326321] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.516 [2024-05-15 02:14:54.326325] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b212c80 name raid_bdev1, state configuring 00:14:06.516 request: 00:14:06.516 { 00:14:06.516 "name": "raid_bdev1", 00:14:06.516 "raid_level": "raid1", 00:14:06.516 "base_bdevs": [ 00:14:06.516 "malloc1", 00:14:06.516 "malloc2" 00:14:06.516 ], 00:14:06.516 "superblock": false, 00:14:06.516 "method": "bdev_raid_create", 00:14:06.516 "req_id": 1 00:14:06.516 } 00:14:06.516 Got JSON-RPC error response 00:14:06.516 response: 00:14:06.516 { 00:14:06.516 "code": -17, 00:14:06.516 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:06.516 } 00:14:06.516 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:06.516 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.516 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.516 02:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.516 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.516 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:06.774 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:06.774 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:06.774 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:07.032 [2024-05-15 02:14:54.929837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:07.032 [2024-05-15 02:14:54.929903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.032 [2024-05-15 02:14:54.929934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b212780 00:14:07.032 [2024-05-15 02:14:54.929942] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.032 [2024-05-15 02:14:54.930479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.032 [2024-05-15 02:14:54.930523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:07.032 [2024-05-15 02:14:54.930554] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:07.032 [2024-05-15 02:14:54.930566] 
bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:07.032 pt1 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.033 02:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.291 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:07.291 "name": "raid_bdev1", 00:14:07.291 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:07.291 "strip_size_kb": 0, 00:14:07.291 "state": "configuring", 00:14:07.291 "raid_level": "raid1", 00:14:07.291 "superblock": true, 00:14:07.291 "num_base_bdevs": 2, 00:14:07.291 "num_base_bdevs_discovered": 1, 00:14:07.291 "num_base_bdevs_operational": 2, 00:14:07.291 "base_bdevs_list": [ 00:14:07.291 { 00:14:07.291 "name": "pt1", 00:14:07.291 "uuid": "32d2ddff-1e4b-bb57-a84f-c33b808bcce3", 00:14:07.291 "is_configured": true, 00:14:07.291 "data_offset": 2048, 00:14:07.291 "data_size": 63488 00:14:07.291 }, 00:14:07.291 { 00:14:07.291 "name": null, 00:14:07.291 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:07.291 "is_configured": false, 00:14:07.291 "data_offset": 2048, 00:14:07.291 "data_size": 63488 00:14:07.291 } 00:14:07.291 ] 00:14:07.291 }' 00:14:07.291 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:07.291 02:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:07.858 [2024-05-15 02:14:55.830002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:07.858 [2024-05-15 02:14:55.830083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.858 [2024-05-15 02:14:55.830114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x82b212f00 00:14:07.858 [2024-05-15 02:14:55.830123] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.858 [2024-05-15 02:14:55.830224] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.858 [2024-05-15 02:14:55.830233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:07.858 [2024-05-15 02:14:55.830255] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:07.858 [2024-05-15 02:14:55.830263] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:07.858 [2024-05-15 02:14:55.830289] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b213180 00:14:07.858 [2024-05-15 02:14:55.830294] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:07.858 [2024-05-15 02:14:55.830312] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b275e20 00:14:07.858 [2024-05-15 02:14:55.830359] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b213180 00:14:07.858 [2024-05-15 02:14:55.830362] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b213180 00:14:07.858 [2024-05-15 02:14:55.830381] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.858 pt2 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.858 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.859 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.859 02:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.424 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.424 "name": "raid_bdev1", 00:14:08.424 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:08.424 "strip_size_kb": 0, 00:14:08.424 "state": "online", 00:14:08.424 "raid_level": "raid1", 00:14:08.424 "superblock": true, 00:14:08.424 "num_base_bdevs": 2, 00:14:08.424 "num_base_bdevs_discovered": 2, 00:14:08.424 "num_base_bdevs_operational": 2, 00:14:08.424 "base_bdevs_list": [ 00:14:08.424 { 00:14:08.424 "name": 
"pt1", 00:14:08.424 "uuid": "32d2ddff-1e4b-bb57-a84f-c33b808bcce3", 00:14:08.424 "is_configured": true, 00:14:08.424 "data_offset": 2048, 00:14:08.424 "data_size": 63488 00:14:08.424 }, 00:14:08.424 { 00:14:08.424 "name": "pt2", 00:14:08.424 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:08.424 "is_configured": true, 00:14:08.424 "data_offset": 2048, 00:14:08.424 "data_size": 63488 00:14:08.424 } 00:14:08.424 ] 00:14:08.424 }' 00:14:08.424 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.424 02:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:08.682 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:08.941 [2024-05-15 02:14:56.758212] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.941 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:08.941 "name": "raid_bdev1", 00:14:08.941 "aliases": [ 00:14:08.941 "e8c79a6f-1260-11ef-99fd-bfc7c66e2865" 00:14:08.941 ], 00:14:08.941 "product_name": "Raid Volume", 00:14:08.941 "block_size": 512, 00:14:08.941 "num_blocks": 63488, 00:14:08.941 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:08.941 "assigned_rate_limits": { 00:14:08.941 "rw_ios_per_sec": 0, 00:14:08.941 "rw_mbytes_per_sec": 0, 00:14:08.941 "r_mbytes_per_sec": 0, 00:14:08.941 "w_mbytes_per_sec": 0 00:14:08.941 }, 00:14:08.941 "claimed": false, 00:14:08.941 "zoned": false, 00:14:08.941 "supported_io_types": { 00:14:08.941 "read": true, 00:14:08.941 "write": true, 00:14:08.941 "unmap": false, 00:14:08.941 "write_zeroes": true, 00:14:08.941 "flush": false, 00:14:08.941 "reset": true, 00:14:08.941 "compare": false, 00:14:08.941 "compare_and_write": false, 00:14:08.941 "abort": false, 00:14:08.941 "nvme_admin": false, 00:14:08.941 "nvme_io": false 00:14:08.941 }, 00:14:08.941 "memory_domains": [ 00:14:08.941 { 00:14:08.941 "dma_device_id": "system", 00:14:08.941 "dma_device_type": 1 00:14:08.941 }, 00:14:08.941 { 00:14:08.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.941 "dma_device_type": 2 00:14:08.941 }, 00:14:08.941 { 00:14:08.941 "dma_device_id": "system", 00:14:08.941 "dma_device_type": 1 00:14:08.941 }, 00:14:08.941 { 00:14:08.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.941 "dma_device_type": 2 00:14:08.941 } 00:14:08.941 ], 00:14:08.941 "driver_specific": { 00:14:08.941 "raid": { 00:14:08.941 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:08.941 "strip_size_kb": 0, 00:14:08.941 "state": "online", 00:14:08.941 "raid_level": "raid1", 00:14:08.941 "superblock": true, 00:14:08.941 "num_base_bdevs": 2, 00:14:08.941 "num_base_bdevs_discovered": 
2, 00:14:08.941 "num_base_bdevs_operational": 2, 00:14:08.941 "base_bdevs_list": [ 00:14:08.941 { 00:14:08.941 "name": "pt1", 00:14:08.941 "uuid": "32d2ddff-1e4b-bb57-a84f-c33b808bcce3", 00:14:08.941 "is_configured": true, 00:14:08.941 "data_offset": 2048, 00:14:08.941 "data_size": 63488 00:14:08.941 }, 00:14:08.941 { 00:14:08.941 "name": "pt2", 00:14:08.941 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:08.941 "is_configured": true, 00:14:08.941 "data_offset": 2048, 00:14:08.941 "data_size": 63488 00:14:08.941 } 00:14:08.941 ] 00:14:08.941 } 00:14:08.941 } 00:14:08.941 }' 00:14:08.941 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.941 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:14:08.941 pt2' 00:14:08.941 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:08.941 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:08.941 02:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:09.200 "name": "pt1", 00:14:09.200 "aliases": [ 00:14:09.200 "32d2ddff-1e4b-bb57-a84f-c33b808bcce3" 00:14:09.200 ], 00:14:09.200 "product_name": "passthru", 00:14:09.200 "block_size": 512, 00:14:09.200 "num_blocks": 65536, 00:14:09.200 "uuid": "32d2ddff-1e4b-bb57-a84f-c33b808bcce3", 00:14:09.200 "assigned_rate_limits": { 00:14:09.200 "rw_ios_per_sec": 0, 00:14:09.200 "rw_mbytes_per_sec": 0, 00:14:09.200 "r_mbytes_per_sec": 0, 00:14:09.200 "w_mbytes_per_sec": 0 00:14:09.200 }, 00:14:09.200 "claimed": true, 00:14:09.200 "claim_type": "exclusive_write", 00:14:09.200 "zoned": false, 00:14:09.200 "supported_io_types": { 00:14:09.200 "read": true, 00:14:09.200 "write": true, 00:14:09.200 "unmap": true, 00:14:09.200 "write_zeroes": true, 00:14:09.200 "flush": true, 00:14:09.200 "reset": true, 00:14:09.200 "compare": false, 00:14:09.200 "compare_and_write": false, 00:14:09.200 "abort": true, 00:14:09.200 "nvme_admin": false, 00:14:09.200 "nvme_io": false 00:14:09.200 }, 00:14:09.200 "memory_domains": [ 00:14:09.200 { 00:14:09.200 "dma_device_id": "system", 00:14:09.200 "dma_device_type": 1 00:14:09.200 }, 00:14:09.200 { 00:14:09.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.200 "dma_device_type": 2 00:14:09.200 } 00:14:09.200 ], 00:14:09.200 "driver_specific": { 00:14:09.200 "passthru": { 00:14:09.200 "name": "pt1", 00:14:09.200 "base_bdev_name": "malloc1" 00:14:09.200 } 00:14:09.200 } 00:14:09.200 }' 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:09.200 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:09.458 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:09.458 "name": "pt2", 00:14:09.458 "aliases": [ 00:14:09.458 "a7bb1bb2-ac31-c650-882f-2412f38860cd" 00:14:09.458 ], 00:14:09.458 "product_name": "passthru", 00:14:09.458 "block_size": 512, 00:14:09.458 "num_blocks": 65536, 00:14:09.458 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:09.458 "assigned_rate_limits": { 00:14:09.458 "rw_ios_per_sec": 0, 00:14:09.458 "rw_mbytes_per_sec": 0, 00:14:09.458 "r_mbytes_per_sec": 0, 00:14:09.458 "w_mbytes_per_sec": 0 00:14:09.458 }, 00:14:09.458 "claimed": true, 00:14:09.458 "claim_type": "exclusive_write", 00:14:09.458 "zoned": false, 00:14:09.458 "supported_io_types": { 00:14:09.458 "read": true, 00:14:09.458 "write": true, 00:14:09.458 "unmap": true, 00:14:09.458 "write_zeroes": true, 00:14:09.458 "flush": true, 00:14:09.458 "reset": true, 00:14:09.458 "compare": false, 00:14:09.458 "compare_and_write": false, 00:14:09.458 "abort": true, 00:14:09.458 "nvme_admin": false, 00:14:09.458 "nvme_io": false 00:14:09.458 }, 00:14:09.458 "memory_domains": [ 00:14:09.458 { 00:14:09.458 "dma_device_id": "system", 00:14:09.458 "dma_device_type": 1 00:14:09.458 }, 00:14:09.458 { 00:14:09.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.459 "dma_device_type": 2 00:14:09.459 } 00:14:09.459 ], 00:14:09.459 "driver_specific": { 00:14:09.459 "passthru": { 00:14:09.459 "name": "pt2", 00:14:09.459 "base_bdev_name": "malloc2" 00:14:09.459 } 00:14:09.459 } 00:14:09.459 }' 00:14:09.459 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.459 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:09.459 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:09.459 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 
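
For reference, the xtrace above can be condensed into the following shell sketch of the RPC sequence raid_superblock_test has exercised so far (commands, sizes, UUIDs and the socket path are copied from the trace; the RPC shorthand, the comments and the `|| true` guard are illustrative additions and not part of the test script):

  # Minimal sketch, assuming a bdev_svc instance started as in the trace:
  #   test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
  RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Two 32 MiB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev with a fixed UUID.
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

  # raid1 volume over the passthru bdevs; -s enables the on-disk superblock.
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

  # Tear down the raid volume and the passthru bdevs; the superblock stays on the malloc bdevs.
  $RPC bdev_raid_delete raid_bdev1
  $RPC bdev_passthru_delete pt1
  $RPC bdev_passthru_delete pt2

  # Expected to fail with "File exists": malloc1/malloc2 still carry raid_bdev1's superblock.
  $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 || true

  # Re-creating the passthru bdevs lets examine find the superblocks and re-assemble
  # raid_bdev1 automatically (configuring after pt1, online after pt2).
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
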
00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:09.717 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:09.975 [2024-05-15 02:14:57.842356] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:09.975 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e8c79a6f-1260-11ef-99fd-bfc7c66e2865 '!=' e8c79a6f-1260-11ef-99fd-bfc7c66e2865 ']' 00:14:09.975 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:09.975 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:09.975 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:14:09.975 02:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:10.233 [2024-05-15 02:14:58.078386] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.233 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.491 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.491 "name": "raid_bdev1", 00:14:10.491 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:10.491 "strip_size_kb": 0, 00:14:10.491 "state": "online", 00:14:10.491 "raid_level": "raid1", 00:14:10.491 "superblock": true, 00:14:10.491 "num_base_bdevs": 2, 00:14:10.491 "num_base_bdevs_discovered": 1, 00:14:10.491 "num_base_bdevs_operational": 1, 00:14:10.491 "base_bdevs_list": [ 00:14:10.492 { 00:14:10.492 "name": null, 00:14:10.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.492 "is_configured": false, 00:14:10.492 "data_offset": 2048, 00:14:10.492 "data_size": 63488 00:14:10.492 }, 00:14:10.492 { 00:14:10.492 "name": "pt2", 00:14:10.492 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:10.492 "is_configured": true, 00:14:10.492 "data_offset": 2048, 00:14:10.492 "data_size": 63488 00:14:10.492 } 00:14:10.492 ] 00:14:10.492 }' 00:14:10.492 02:14:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.492 02:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.750 02:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:11.008 [2024-05-15 02:14:59.014486] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.008 [2024-05-15 02:14:59.014519] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.008 [2024-05-15 02:14:59.014544] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.008 [2024-05-15 02:14:59.014556] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.008 [2024-05-15 02:14:59.014560] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b213180 name raid_bdev1, state offline 00:14:11.265 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.266 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:11.524 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:11.524 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:11.524 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:11.524 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:11.524 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:11.782 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:11.782 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:11.782 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:11.782 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:11.782 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:14:11.782 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:12.040 [2024-05-15 02:14:59.906623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:12.040 [2024-05-15 02:14:59.906689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.040 [2024-05-15 02:14:59.906718] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b212f00 00:14:12.040 [2024-05-15 02:14:59.906727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.040 [2024-05-15 02:14:59.907255] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.040 [2024-05-15 02:14:59.907289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:12.040 [2024-05-15 02:14:59.907314] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:12.040 [2024-05-15 02:14:59.907325] bdev_raid.c:3138:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:14:12.040 [2024-05-15 02:14:59.907350] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b213180 00:14:12.040 [2024-05-15 02:14:59.907354] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.040 [2024-05-15 02:14:59.907375] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b275e20 00:14:12.040 [2024-05-15 02:14:59.907410] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b213180 00:14:12.040 [2024-05-15 02:14:59.907415] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b213180 00:14:12.040 [2024-05-15 02:14:59.907433] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.040 pt2 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.040 02:14:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.298 02:15:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.298 "name": "raid_bdev1", 00:14:12.298 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:12.298 "strip_size_kb": 0, 00:14:12.298 "state": "online", 00:14:12.298 "raid_level": "raid1", 00:14:12.298 "superblock": true, 00:14:12.298 "num_base_bdevs": 2, 00:14:12.298 "num_base_bdevs_discovered": 1, 00:14:12.298 "num_base_bdevs_operational": 1, 00:14:12.298 "base_bdevs_list": [ 00:14:12.298 { 00:14:12.298 "name": null, 00:14:12.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.298 "is_configured": false, 00:14:12.298 "data_offset": 2048, 00:14:12.298 "data_size": 63488 00:14:12.298 }, 00:14:12.298 { 00:14:12.298 "name": "pt2", 00:14:12.298 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:12.298 "is_configured": true, 00:14:12.298 "data_offset": 2048, 00:14:12.298 "data_size": 63488 00:14:12.298 } 00:14:12.298 ] 00:14:12.298 }' 00:14:12.298 02:15:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.298 02:15:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.864 02:15:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
raid_bdev1 00:14:12.864 [2024-05-15 02:15:00.858747] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.864 [2024-05-15 02:15:00.858779] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.864 [2024-05-15 02:15:00.858801] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.864 [2024-05-15 02:15:00.858814] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.864 [2024-05-15 02:15:00.858818] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b213180 name raid_bdev1, state offline 00:14:13.122 02:15:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.122 02:15:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:13.392 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:13.392 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:13.392 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:13.392 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:13.654 [2024-05-15 02:15:01.454845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:13.654 [2024-05-15 02:15:01.454908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.654 [2024-05-15 02:15:01.454937] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b212c80 00:14:13.654 [2024-05-15 02:15:01.454946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.654 [2024-05-15 02:15:01.455484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.654 [2024-05-15 02:15:01.455523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:13.654 [2024-05-15 02:15:01.455550] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:13.654 [2024-05-15 02:15:01.455561] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:13.654 [2024-05-15 02:15:01.455588] bdev_raid.c:3489:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:13.654 [2024-05-15 02:15:01.455593] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.654 [2024-05-15 02:15:01.455598] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b212780 name raid_bdev1, state configuring 00:14:13.654 [2024-05-15 02:15:01.455605] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:13.654 [2024-05-15 02:15:01.455618] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b212780 00:14:13.654 [2024-05-15 02:15:01.455622] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:13.654 [2024-05-15 02:15:01.455642] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b275e20 00:14:13.654 [2024-05-15 02:15:01.455680] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b212780 00:14:13.654 [2024-05-15 02:15:01.455684] 
bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b212780 00:14:13.654 [2024-05-15 02:15:01.455701] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.654 pt1 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.654 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.912 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.912 "name": "raid_bdev1", 00:14:13.912 "uuid": "e8c79a6f-1260-11ef-99fd-bfc7c66e2865", 00:14:13.912 "strip_size_kb": 0, 00:14:13.912 "state": "online", 00:14:13.912 "raid_level": "raid1", 00:14:13.912 "superblock": true, 00:14:13.912 "num_base_bdevs": 2, 00:14:13.912 "num_base_bdevs_discovered": 1, 00:14:13.912 "num_base_bdevs_operational": 1, 00:14:13.912 "base_bdevs_list": [ 00:14:13.912 { 00:14:13.912 "name": null, 00:14:13.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.912 "is_configured": false, 00:14:13.912 "data_offset": 2048, 00:14:13.912 "data_size": 63488 00:14:13.912 }, 00:14:13.912 { 00:14:13.912 "name": "pt2", 00:14:13.912 "uuid": "a7bb1bb2-ac31-c650-882f-2412f38860cd", 00:14:13.912 "is_configured": true, 00:14:13.912 "data_offset": 2048, 00:14:13.912 "data_size": 63488 00:14:13.912 } 00:14:13.912 ] 00:14:13.912 }' 00:14:13.912 02:15:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.912 02:15:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.169 02:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:14.169 02:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:14.426 02:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:14.426 02:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:14.426 02:15:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:14.684 [2024-05-15 02:15:02.591035] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e8c79a6f-1260-11ef-99fd-bfc7c66e2865 '!=' e8c79a6f-1260-11ef-99fd-bfc7c66e2865 ']' 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 51154 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 51154 ']' 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 51154 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 51154 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:14:14.684 killing process with pid 51154 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51154' 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 51154 00:14:14.684 [2024-05-15 02:15:02.633465] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.684 [2024-05-15 02:15:02.633513] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.684 [2024-05-15 02:15:02.633527] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.684 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 51154 00:14:14.684 [2024-05-15 02:15:02.633533] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b212780 name raid_bdev1, state offline 00:14:14.684 [2024-05-15 02:15:02.643532] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.941 02:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:14.941 00:14:14.941 real 0m15.079s 00:14:14.941 user 0m27.078s 00:14:14.941 sys 0m2.381s 00:14:14.941 ************************************ 00:14:14.941 END TEST raid_superblock_test 00:14:14.941 ************************************ 00:14:14.941 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:14.941 02:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.941 02:15:02 bdev_raid -- bdev/bdev_raid.sh@801 -- # for n in {2..4} 00:14:14.941 02:15:02 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:14:14.942 02:15:02 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:14.942 02:15:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:14.942 02:15:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:14.942 02:15:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 ************************************ 00:14:14.942 START TEST raid_state_function_test 00:14:14.942 
************************************ 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=51553 00:14:14.942 Process raid pid: 51553 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 51553' 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 51553 
/var/tmp/spdk-raid.sock 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 51553 ']' 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.942 02:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 [2024-05-15 02:15:02.866401] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:14:14.942 [2024-05-15 02:15:02.866693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:14:15.509 EAL: TSC is not safe to use in SMP mode 00:14:15.509 EAL: TSC is not invariant 00:14:15.509 [2024-05-15 02:15:03.355924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.509 [2024-05-15 02:15:03.467919] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:15.509 [2024-05-15 02:15:03.470669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.509 [2024-05-15 02:15:03.471751] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.509 [2024-05-15 02:15:03.471780] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.075 02:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:16.075 02:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:14:16.075 02:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:16.334 [2024-05-15 02:15:04.179174] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.334 [2024-05-15 02:15:04.179228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.334 [2024-05-15 02:15:04.179234] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.334 [2024-05-15 02:15:04.179242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.334 [2024-05-15 02:15:04.179245] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:16.334 [2024-05-15 02:15:04.179252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:16.334 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:16.334 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid0 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.335 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.592 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:16.592 "name": "Existed_Raid", 00:14:16.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.592 "strip_size_kb": 64, 00:14:16.592 "state": "configuring", 00:14:16.592 "raid_level": "raid0", 00:14:16.592 "superblock": false, 00:14:16.592 "num_base_bdevs": 3, 00:14:16.592 "num_base_bdevs_discovered": 0, 00:14:16.592 "num_base_bdevs_operational": 3, 00:14:16.592 "base_bdevs_list": [ 00:14:16.592 { 00:14:16.592 "name": "BaseBdev1", 00:14:16.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.593 "is_configured": false, 00:14:16.593 "data_offset": 0, 00:14:16.593 "data_size": 0 00:14:16.593 }, 00:14:16.593 { 00:14:16.593 "name": "BaseBdev2", 00:14:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.593 "is_configured": false, 00:14:16.593 "data_offset": 0, 00:14:16.593 "data_size": 0 00:14:16.593 }, 00:14:16.593 { 00:14:16.593 "name": "BaseBdev3", 00:14:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.593 "is_configured": false, 00:14:16.593 "data_offset": 0, 00:14:16.593 "data_size": 0 00:14:16.593 } 00:14:16.593 ] 00:14:16.593 }' 00:14:16.593 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:16.593 02:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.851 02:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.109 [2024-05-15 02:15:05.091324] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.109 [2024-05-15 02:15:05.091374] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceeb500 name Existed_Raid, state configuring 00:14:17.109 02:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:17.368 [2024-05-15 02:15:05.319342] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.368 [2024-05-15 02:15:05.319403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.368 [2024-05-15 02:15:05.319408] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.368 [2024-05-15 02:15:05.319417] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.368 [2024-05-15 02:15:05.319420] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:17.368 [2024-05-15 02:15:05.319428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:17.368 02:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.626 [2024-05-15 02:15:05.552385] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.626 BaseBdev1 00:14:17.626 02:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:14:17.626 02:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:17.626 02:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:17.626 02:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:17.626 02:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:17.626 02:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:17.626 02:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.884 02:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.142 [ 00:14:18.142 { 00:14:18.142 "name": "BaseBdev1", 00:14:18.142 "aliases": [ 00:14:18.142 "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865" 00:14:18.142 ], 00:14:18.142 "product_name": "Malloc disk", 00:14:18.142 "block_size": 512, 00:14:18.142 "num_blocks": 65536, 00:14:18.142 "uuid": "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865", 00:14:18.142 "assigned_rate_limits": { 00:14:18.142 "rw_ios_per_sec": 0, 00:14:18.142 "rw_mbytes_per_sec": 0, 00:14:18.142 "r_mbytes_per_sec": 0, 00:14:18.142 "w_mbytes_per_sec": 0 00:14:18.142 }, 00:14:18.142 "claimed": true, 00:14:18.142 "claim_type": "exclusive_write", 00:14:18.142 "zoned": false, 00:14:18.142 "supported_io_types": { 00:14:18.142 "read": true, 00:14:18.142 "write": true, 00:14:18.142 "unmap": true, 00:14:18.142 "write_zeroes": true, 00:14:18.142 "flush": true, 00:14:18.142 "reset": true, 00:14:18.142 "compare": false, 00:14:18.142 "compare_and_write": false, 00:14:18.142 "abort": true, 00:14:18.142 "nvme_admin": false, 00:14:18.142 "nvme_io": false 00:14:18.142 }, 00:14:18.142 "memory_domains": [ 00:14:18.142 { 00:14:18.142 "dma_device_id": "system", 00:14:18.142 "dma_device_type": 1 00:14:18.142 }, 00:14:18.142 { 00:14:18.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.142 "dma_device_type": 2 00:14:18.142 } 00:14:18.142 ], 00:14:18.142 "driver_specific": {} 00:14:18.142 } 00:14:18.142 ] 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.142 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.399 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.399 "name": "Existed_Raid", 00:14:18.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.399 "strip_size_kb": 64, 00:14:18.399 "state": "configuring", 00:14:18.399 "raid_level": "raid0", 00:14:18.399 "superblock": false, 00:14:18.399 "num_base_bdevs": 3, 00:14:18.399 "num_base_bdevs_discovered": 1, 00:14:18.399 "num_base_bdevs_operational": 3, 00:14:18.399 "base_bdevs_list": [ 00:14:18.399 { 00:14:18.399 "name": "BaseBdev1", 00:14:18.399 "uuid": "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865", 00:14:18.399 "is_configured": true, 00:14:18.399 "data_offset": 0, 00:14:18.399 "data_size": 65536 00:14:18.399 }, 00:14:18.399 { 00:14:18.399 "name": "BaseBdev2", 00:14:18.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.399 "is_configured": false, 00:14:18.399 "data_offset": 0, 00:14:18.399 "data_size": 0 00:14:18.399 }, 00:14:18.399 { 00:14:18.399 "name": "BaseBdev3", 00:14:18.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.399 "is_configured": false, 00:14:18.399 "data_offset": 0, 00:14:18.399 "data_size": 0 00:14:18.399 } 00:14:18.399 ] 00:14:18.399 }' 00:14:18.399 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.399 02:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.965 02:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:19.224 [2024-05-15 02:15:07.023585] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.224 [2024-05-15 02:15:07.023623] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceeb500 name Existed_Raid, state configuring 00:14:19.224 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:19.482 [2024-05-15 02:15:07.339666] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.482 [2024-05-15 02:15:07.340432] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.482 [2024-05-15 
02:15:07.340504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.482 [2024-05-15 02:15:07.340509] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.482 [2024-05-15 02:15:07.340519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.483 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.740 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.740 "name": "Existed_Raid", 00:14:19.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.740 "strip_size_kb": 64, 00:14:19.740 "state": "configuring", 00:14:19.740 "raid_level": "raid0", 00:14:19.740 "superblock": false, 00:14:19.740 "num_base_bdevs": 3, 00:14:19.740 "num_base_bdevs_discovered": 1, 00:14:19.740 "num_base_bdevs_operational": 3, 00:14:19.740 "base_bdevs_list": [ 00:14:19.740 { 00:14:19.740 "name": "BaseBdev1", 00:14:19.740 "uuid": "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865", 00:14:19.740 "is_configured": true, 00:14:19.740 "data_offset": 0, 00:14:19.740 "data_size": 65536 00:14:19.740 }, 00:14:19.740 { 00:14:19.740 "name": "BaseBdev2", 00:14:19.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.740 "is_configured": false, 00:14:19.740 "data_offset": 0, 00:14:19.740 "data_size": 0 00:14:19.740 }, 00:14:19.740 { 00:14:19.740 "name": "BaseBdev3", 00:14:19.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.740 "is_configured": false, 00:14:19.740 "data_offset": 0, 00:14:19.740 "data_size": 0 00:14:19.740 } 00:14:19.740 ] 00:14:19.740 }' 00:14:19.740 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.740 02:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.998 02:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.256 [2024-05-15 02:15:08.195895] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.256 BaseBdev2 00:14:20.256 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:14:20.256 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:20.256 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:20.256 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:20.256 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:20.256 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:20.256 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.514 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.771 [ 00:14:20.771 { 00:14:20.771 "name": "BaseBdev2", 00:14:20.771 "aliases": [ 00:14:20.771 "f36d3cf7-1260-11ef-99fd-bfc7c66e2865" 00:14:20.771 ], 00:14:20.771 "product_name": "Malloc disk", 00:14:20.771 "block_size": 512, 00:14:20.771 "num_blocks": 65536, 00:14:20.771 "uuid": "f36d3cf7-1260-11ef-99fd-bfc7c66e2865", 00:14:20.771 "assigned_rate_limits": { 00:14:20.771 "rw_ios_per_sec": 0, 00:14:20.771 "rw_mbytes_per_sec": 0, 00:14:20.771 "r_mbytes_per_sec": 0, 00:14:20.771 "w_mbytes_per_sec": 0 00:14:20.771 }, 00:14:20.771 "claimed": true, 00:14:20.771 "claim_type": "exclusive_write", 00:14:20.771 "zoned": false, 00:14:20.771 "supported_io_types": { 00:14:20.771 "read": true, 00:14:20.771 "write": true, 00:14:20.771 "unmap": true, 00:14:20.771 "write_zeroes": true, 00:14:20.771 "flush": true, 00:14:20.771 "reset": true, 00:14:20.771 "compare": false, 00:14:20.771 "compare_and_write": false, 00:14:20.771 "abort": true, 00:14:20.771 "nvme_admin": false, 00:14:20.771 "nvme_io": false 00:14:20.771 }, 00:14:20.771 "memory_domains": [ 00:14:20.771 { 00:14:20.771 "dma_device_id": "system", 00:14:20.771 "dma_device_type": 1 00:14:20.771 }, 00:14:20.771 { 00:14:20.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.771 "dma_device_type": 2 00:14:20.771 } 00:14:20.771 ], 00:14:20.771 "driver_specific": {} 00:14:20.771 } 00:14:20.772 ] 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.772 02:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.339 02:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.339 "name": "Existed_Raid", 00:14:21.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.339 "strip_size_kb": 64, 00:14:21.339 "state": "configuring", 00:14:21.339 "raid_level": "raid0", 00:14:21.339 "superblock": false, 00:14:21.339 "num_base_bdevs": 3, 00:14:21.339 "num_base_bdevs_discovered": 2, 00:14:21.339 "num_base_bdevs_operational": 3, 00:14:21.339 "base_bdevs_list": [ 00:14:21.339 { 00:14:21.339 "name": "BaseBdev1", 00:14:21.339 "uuid": "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865", 00:14:21.339 "is_configured": true, 00:14:21.339 "data_offset": 0, 00:14:21.339 "data_size": 65536 00:14:21.339 }, 00:14:21.339 { 00:14:21.339 "name": "BaseBdev2", 00:14:21.339 "uuid": "f36d3cf7-1260-11ef-99fd-bfc7c66e2865", 00:14:21.339 "is_configured": true, 00:14:21.339 "data_offset": 0, 00:14:21.339 "data_size": 65536 00:14:21.339 }, 00:14:21.339 { 00:14:21.339 "name": "BaseBdev3", 00:14:21.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.339 "is_configured": false, 00:14:21.339 "data_offset": 0, 00:14:21.339 "data_size": 0 00:14:21.339 } 00:14:21.339 ] 00:14:21.339 }' 00:14:21.339 02:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.339 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.597 02:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.856 [2024-05-15 02:15:09.660059] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.856 [2024-05-15 02:15:09.660098] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ceeba00 00:14:21.856 [2024-05-15 02:15:09.660102] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:21.856 [2024-05-15 02:15:09.660124] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cf4eec0 00:14:21.856 [2024-05-15 02:15:09.660220] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ceeba00 00:14:21.856 [2024-05-15 02:15:09.660225] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ceeba00 00:14:21.856 [2024-05-15 02:15:09.660255] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.856 BaseBdev3 00:14:21.856 02:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev 
BaseBdev3 00:14:21.856 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:21.856 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:21.856 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:21.856 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:21.856 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:21.856 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.114 02:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.372 [ 00:14:22.372 { 00:14:22.372 "name": "BaseBdev3", 00:14:22.372 "aliases": [ 00:14:22.372 "f44ca7d1-1260-11ef-99fd-bfc7c66e2865" 00:14:22.372 ], 00:14:22.372 "product_name": "Malloc disk", 00:14:22.372 "block_size": 512, 00:14:22.372 "num_blocks": 65536, 00:14:22.372 "uuid": "f44ca7d1-1260-11ef-99fd-bfc7c66e2865", 00:14:22.372 "assigned_rate_limits": { 00:14:22.372 "rw_ios_per_sec": 0, 00:14:22.372 "rw_mbytes_per_sec": 0, 00:14:22.372 "r_mbytes_per_sec": 0, 00:14:22.372 "w_mbytes_per_sec": 0 00:14:22.372 }, 00:14:22.372 "claimed": true, 00:14:22.372 "claim_type": "exclusive_write", 00:14:22.372 "zoned": false, 00:14:22.372 "supported_io_types": { 00:14:22.372 "read": true, 00:14:22.372 "write": true, 00:14:22.372 "unmap": true, 00:14:22.372 "write_zeroes": true, 00:14:22.372 "flush": true, 00:14:22.372 "reset": true, 00:14:22.372 "compare": false, 00:14:22.372 "compare_and_write": false, 00:14:22.372 "abort": true, 00:14:22.372 "nvme_admin": false, 00:14:22.372 "nvme_io": false 00:14:22.372 }, 00:14:22.372 "memory_domains": [ 00:14:22.372 { 00:14:22.372 "dma_device_id": "system", 00:14:22.372 "dma_device_type": 1 00:14:22.372 }, 00:14:22.372 { 00:14:22.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.372 "dma_device_type": 2 00:14:22.372 } 00:14:22.372 ], 00:14:22.372 "driver_specific": {} 00:14:22.372 } 00:14:22.372 ] 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.372 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.630 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.630 "name": "Existed_Raid", 00:14:22.630 "uuid": "f44cadf5-1260-11ef-99fd-bfc7c66e2865", 00:14:22.630 "strip_size_kb": 64, 00:14:22.630 "state": "online", 00:14:22.630 "raid_level": "raid0", 00:14:22.630 "superblock": false, 00:14:22.630 "num_base_bdevs": 3, 00:14:22.630 "num_base_bdevs_discovered": 3, 00:14:22.630 "num_base_bdevs_operational": 3, 00:14:22.630 "base_bdevs_list": [ 00:14:22.630 { 00:14:22.630 "name": "BaseBdev1", 00:14:22.630 "uuid": "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865", 00:14:22.630 "is_configured": true, 00:14:22.630 "data_offset": 0, 00:14:22.630 "data_size": 65536 00:14:22.630 }, 00:14:22.630 { 00:14:22.630 "name": "BaseBdev2", 00:14:22.630 "uuid": "f36d3cf7-1260-11ef-99fd-bfc7c66e2865", 00:14:22.630 "is_configured": true, 00:14:22.630 "data_offset": 0, 00:14:22.630 "data_size": 65536 00:14:22.630 }, 00:14:22.630 { 00:14:22.630 "name": "BaseBdev3", 00:14:22.630 "uuid": "f44ca7d1-1260-11ef-99fd-bfc7c66e2865", 00:14:22.630 "is_configured": true, 00:14:22.630 "data_offset": 0, 00:14:22.630 "data_size": 65536 00:14:22.630 } 00:14:22.630 ] 00:14:22.630 }' 00:14:22.630 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.630 02:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:22.888 02:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:23.189 [2024-05-15 02:15:11.160144] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.189 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:23.189 "name": "Existed_Raid", 00:14:23.189 "aliases": [ 00:14:23.189 "f44cadf5-1260-11ef-99fd-bfc7c66e2865" 00:14:23.189 ], 00:14:23.189 "product_name": "Raid Volume", 00:14:23.189 "block_size": 512, 00:14:23.189 "num_blocks": 196608, 00:14:23.189 "uuid": "f44cadf5-1260-11ef-99fd-bfc7c66e2865", 00:14:23.189 "assigned_rate_limits": { 00:14:23.189 "rw_ios_per_sec": 0, 00:14:23.189 
"rw_mbytes_per_sec": 0, 00:14:23.189 "r_mbytes_per_sec": 0, 00:14:23.189 "w_mbytes_per_sec": 0 00:14:23.189 }, 00:14:23.189 "claimed": false, 00:14:23.189 "zoned": false, 00:14:23.189 "supported_io_types": { 00:14:23.189 "read": true, 00:14:23.189 "write": true, 00:14:23.189 "unmap": true, 00:14:23.189 "write_zeroes": true, 00:14:23.189 "flush": true, 00:14:23.189 "reset": true, 00:14:23.189 "compare": false, 00:14:23.189 "compare_and_write": false, 00:14:23.189 "abort": false, 00:14:23.189 "nvme_admin": false, 00:14:23.189 "nvme_io": false 00:14:23.189 }, 00:14:23.189 "memory_domains": [ 00:14:23.189 { 00:14:23.189 "dma_device_id": "system", 00:14:23.189 "dma_device_type": 1 00:14:23.189 }, 00:14:23.189 { 00:14:23.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.189 "dma_device_type": 2 00:14:23.189 }, 00:14:23.189 { 00:14:23.189 "dma_device_id": "system", 00:14:23.189 "dma_device_type": 1 00:14:23.189 }, 00:14:23.189 { 00:14:23.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.189 "dma_device_type": 2 00:14:23.189 }, 00:14:23.189 { 00:14:23.189 "dma_device_id": "system", 00:14:23.189 "dma_device_type": 1 00:14:23.189 }, 00:14:23.189 { 00:14:23.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.189 "dma_device_type": 2 00:14:23.189 } 00:14:23.189 ], 00:14:23.189 "driver_specific": { 00:14:23.189 "raid": { 00:14:23.189 "uuid": "f44cadf5-1260-11ef-99fd-bfc7c66e2865", 00:14:23.189 "strip_size_kb": 64, 00:14:23.189 "state": "online", 00:14:23.189 "raid_level": "raid0", 00:14:23.189 "superblock": false, 00:14:23.189 "num_base_bdevs": 3, 00:14:23.189 "num_base_bdevs_discovered": 3, 00:14:23.189 "num_base_bdevs_operational": 3, 00:14:23.189 "base_bdevs_list": [ 00:14:23.189 { 00:14:23.189 "name": "BaseBdev1", 00:14:23.189 "uuid": "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865", 00:14:23.189 "is_configured": true, 00:14:23.189 "data_offset": 0, 00:14:23.189 "data_size": 65536 00:14:23.189 }, 00:14:23.189 { 00:14:23.189 "name": "BaseBdev2", 00:14:23.189 "uuid": "f36d3cf7-1260-11ef-99fd-bfc7c66e2865", 00:14:23.189 "is_configured": true, 00:14:23.189 "data_offset": 0, 00:14:23.189 "data_size": 65536 00:14:23.189 }, 00:14:23.189 { 00:14:23.189 "name": "BaseBdev3", 00:14:23.189 "uuid": "f44ca7d1-1260-11ef-99fd-bfc7c66e2865", 00:14:23.189 "is_configured": true, 00:14:23.189 "data_offset": 0, 00:14:23.189 "data_size": 65536 00:14:23.189 } 00:14:23.189 ] 00:14:23.189 } 00:14:23.189 } 00:14:23.189 }' 00:14:23.189 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.451 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:14:23.451 BaseBdev2 00:14:23.451 BaseBdev3' 00:14:23.451 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:23.452 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:23.452 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:23.709 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:23.709 "name": "BaseBdev1", 00:14:23.709 "aliases": [ 00:14:23.709 "f1d9bcf8-1260-11ef-99fd-bfc7c66e2865" 00:14:23.710 ], 00:14:23.710 "product_name": "Malloc disk", 00:14:23.710 "block_size": 512, 00:14:23.710 "num_blocks": 65536, 00:14:23.710 "uuid": 
"f1d9bcf8-1260-11ef-99fd-bfc7c66e2865", 00:14:23.710 "assigned_rate_limits": { 00:14:23.710 "rw_ios_per_sec": 0, 00:14:23.710 "rw_mbytes_per_sec": 0, 00:14:23.710 "r_mbytes_per_sec": 0, 00:14:23.710 "w_mbytes_per_sec": 0 00:14:23.710 }, 00:14:23.710 "claimed": true, 00:14:23.710 "claim_type": "exclusive_write", 00:14:23.710 "zoned": false, 00:14:23.710 "supported_io_types": { 00:14:23.710 "read": true, 00:14:23.710 "write": true, 00:14:23.710 "unmap": true, 00:14:23.710 "write_zeroes": true, 00:14:23.710 "flush": true, 00:14:23.710 "reset": true, 00:14:23.710 "compare": false, 00:14:23.710 "compare_and_write": false, 00:14:23.710 "abort": true, 00:14:23.710 "nvme_admin": false, 00:14:23.710 "nvme_io": false 00:14:23.710 }, 00:14:23.710 "memory_domains": [ 00:14:23.710 { 00:14:23.710 "dma_device_id": "system", 00:14:23.710 "dma_device_type": 1 00:14:23.710 }, 00:14:23.710 { 00:14:23.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.710 "dma_device_type": 2 00:14:23.710 } 00:14:23.710 ], 00:14:23.710 "driver_specific": {} 00:14:23.710 }' 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:23.710 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:23.967 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:23.967 "name": "BaseBdev2", 00:14:23.967 "aliases": [ 00:14:23.967 "f36d3cf7-1260-11ef-99fd-bfc7c66e2865" 00:14:23.967 ], 00:14:23.967 "product_name": "Malloc disk", 00:14:23.967 "block_size": 512, 00:14:23.967 "num_blocks": 65536, 00:14:23.967 "uuid": "f36d3cf7-1260-11ef-99fd-bfc7c66e2865", 00:14:23.967 "assigned_rate_limits": { 00:14:23.967 "rw_ios_per_sec": 0, 00:14:23.967 "rw_mbytes_per_sec": 0, 00:14:23.967 "r_mbytes_per_sec": 0, 00:14:23.967 "w_mbytes_per_sec": 0 00:14:23.967 }, 00:14:23.967 "claimed": true, 00:14:23.967 "claim_type": "exclusive_write", 00:14:23.967 "zoned": false, 00:14:23.967 "supported_io_types": { 00:14:23.967 "read": true, 00:14:23.967 "write": true, 00:14:23.967 "unmap": true, 00:14:23.968 "write_zeroes": true, 
00:14:23.968 "flush": true, 00:14:23.968 "reset": true, 00:14:23.968 "compare": false, 00:14:23.968 "compare_and_write": false, 00:14:23.968 "abort": true, 00:14:23.968 "nvme_admin": false, 00:14:23.968 "nvme_io": false 00:14:23.968 }, 00:14:23.968 "memory_domains": [ 00:14:23.968 { 00:14:23.968 "dma_device_id": "system", 00:14:23.968 "dma_device_type": 1 00:14:23.968 }, 00:14:23.968 { 00:14:23.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.968 "dma_device_type": 2 00:14:23.968 } 00:14:23.968 ], 00:14:23.968 "driver_specific": {} 00:14:23.968 }' 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:23.968 02:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:24.225 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:24.225 "name": "BaseBdev3", 00:14:24.225 "aliases": [ 00:14:24.225 "f44ca7d1-1260-11ef-99fd-bfc7c66e2865" 00:14:24.225 ], 00:14:24.225 "product_name": "Malloc disk", 00:14:24.225 "block_size": 512, 00:14:24.225 "num_blocks": 65536, 00:14:24.225 "uuid": "f44ca7d1-1260-11ef-99fd-bfc7c66e2865", 00:14:24.226 "assigned_rate_limits": { 00:14:24.226 "rw_ios_per_sec": 0, 00:14:24.226 "rw_mbytes_per_sec": 0, 00:14:24.226 "r_mbytes_per_sec": 0, 00:14:24.226 "w_mbytes_per_sec": 0 00:14:24.226 }, 00:14:24.226 "claimed": true, 00:14:24.226 "claim_type": "exclusive_write", 00:14:24.226 "zoned": false, 00:14:24.226 "supported_io_types": { 00:14:24.226 "read": true, 00:14:24.226 "write": true, 00:14:24.226 "unmap": true, 00:14:24.226 "write_zeroes": true, 00:14:24.226 "flush": true, 00:14:24.226 "reset": true, 00:14:24.226 "compare": false, 00:14:24.226 "compare_and_write": false, 00:14:24.226 "abort": true, 00:14:24.226 "nvme_admin": false, 00:14:24.226 "nvme_io": false 00:14:24.226 }, 00:14:24.226 "memory_domains": [ 00:14:24.226 { 00:14:24.226 "dma_device_id": "system", 00:14:24.226 "dma_device_type": 1 00:14:24.226 }, 00:14:24.226 { 00:14:24.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.226 "dma_device_type": 2 00:14:24.226 } 
00:14:24.226 ], 00:14:24.226 "driver_specific": {} 00:14:24.226 }' 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:24.226 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:24.791 [2024-05-15 02:15:12.524263] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.791 [2024-05-15 02:15:12.524291] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.791 [2024-05-15 02:15:12.524305] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.791 02:15:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.791 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.049 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.049 "name": "Existed_Raid", 00:14:25.049 "uuid": "f44cadf5-1260-11ef-99fd-bfc7c66e2865", 00:14:25.049 "strip_size_kb": 64, 00:14:25.049 "state": "offline", 00:14:25.049 "raid_level": "raid0", 00:14:25.049 "superblock": false, 00:14:25.049 "num_base_bdevs": 3, 00:14:25.049 "num_base_bdevs_discovered": 2, 00:14:25.049 "num_base_bdevs_operational": 2, 00:14:25.049 "base_bdevs_list": [ 00:14:25.049 { 00:14:25.049 "name": null, 00:14:25.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.049 "is_configured": false, 00:14:25.049 "data_offset": 0, 00:14:25.049 "data_size": 65536 00:14:25.049 }, 00:14:25.049 { 00:14:25.049 "name": "BaseBdev2", 00:14:25.049 "uuid": "f36d3cf7-1260-11ef-99fd-bfc7c66e2865", 00:14:25.049 "is_configured": true, 00:14:25.049 "data_offset": 0, 00:14:25.049 "data_size": 65536 00:14:25.049 }, 00:14:25.049 { 00:14:25.049 "name": "BaseBdev3", 00:14:25.049 "uuid": "f44ca7d1-1260-11ef-99fd-bfc7c66e2865", 00:14:25.049 "is_configured": true, 00:14:25.049 "data_offset": 0, 00:14:25.049 "data_size": 65536 00:14:25.049 } 00:14:25.049 ] 00:14:25.049 }' 00:14:25.049 02:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.049 02:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.307 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:25.307 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.308 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.308 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:25.565 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:25.565 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:25.565 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:25.822 [2024-05-15 02:15:13.677219] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.822 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:25.822 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.822 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:25.822 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.080 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:26.080 02:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.080 02:15:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:26.339 [2024-05-15 02:15:14.274093] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:26.339 [2024-05-15 02:15:14.274130] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceeba00 name Existed_Raid, state offline 00:14:26.339 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.339 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.339 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:26.339 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.599 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:26.599 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:26.599 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:14:26.599 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:26.599 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:26.599 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.165 BaseBdev2 00:14:27.165 02:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:27.165 02:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:27.165 02:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:27.165 02:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:27.165 02:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:27.165 02:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:27.165 02:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:27.423 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.681 [ 00:14:27.681 { 00:14:27.681 "name": "BaseBdev2", 00:14:27.681 "aliases": [ 00:14:27.681 "f769a9de-1260-11ef-99fd-bfc7c66e2865" 00:14:27.681 ], 00:14:27.681 "product_name": "Malloc disk", 00:14:27.681 "block_size": 512, 00:14:27.681 "num_blocks": 65536, 00:14:27.681 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:27.681 "assigned_rate_limits": { 00:14:27.681 "rw_ios_per_sec": 0, 00:14:27.681 "rw_mbytes_per_sec": 0, 00:14:27.681 "r_mbytes_per_sec": 0, 00:14:27.681 "w_mbytes_per_sec": 0 00:14:27.681 }, 00:14:27.681 "claimed": false, 00:14:27.681 "zoned": false, 00:14:27.681 "supported_io_types": { 00:14:27.681 "read": true, 00:14:27.681 "write": true, 00:14:27.681 "unmap": true, 00:14:27.681 "write_zeroes": true, 00:14:27.681 "flush": true, 00:14:27.681 "reset": true, 00:14:27.681 "compare": false, 00:14:27.681 
"compare_and_write": false, 00:14:27.681 "abort": true, 00:14:27.681 "nvme_admin": false, 00:14:27.681 "nvme_io": false 00:14:27.681 }, 00:14:27.681 "memory_domains": [ 00:14:27.681 { 00:14:27.681 "dma_device_id": "system", 00:14:27.681 "dma_device_type": 1 00:14:27.681 }, 00:14:27.681 { 00:14:27.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.681 "dma_device_type": 2 00:14:27.681 } 00:14:27.681 ], 00:14:27.681 "driver_specific": {} 00:14:27.681 } 00:14:27.681 ] 00:14:27.681 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:27.681 02:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:27.681 02:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:27.681 02:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:27.987 BaseBdev3 00:14:27.987 02:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:27.987 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:27.987 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:27.987 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:27.987 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:27.987 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:27.987 02:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.270 02:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:28.526 [ 00:14:28.526 { 00:14:28.526 "name": "BaseBdev3", 00:14:28.526 "aliases": [ 00:14:28.526 "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865" 00:14:28.526 ], 00:14:28.526 "product_name": "Malloc disk", 00:14:28.526 "block_size": 512, 00:14:28.526 "num_blocks": 65536, 00:14:28.526 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:28.526 "assigned_rate_limits": { 00:14:28.526 "rw_ios_per_sec": 0, 00:14:28.526 "rw_mbytes_per_sec": 0, 00:14:28.527 "r_mbytes_per_sec": 0, 00:14:28.527 "w_mbytes_per_sec": 0 00:14:28.527 }, 00:14:28.527 "claimed": false, 00:14:28.527 "zoned": false, 00:14:28.527 "supported_io_types": { 00:14:28.527 "read": true, 00:14:28.527 "write": true, 00:14:28.527 "unmap": true, 00:14:28.527 "write_zeroes": true, 00:14:28.527 "flush": true, 00:14:28.527 "reset": true, 00:14:28.527 "compare": false, 00:14:28.527 "compare_and_write": false, 00:14:28.527 "abort": true, 00:14:28.527 "nvme_admin": false, 00:14:28.527 "nvme_io": false 00:14:28.527 }, 00:14:28.527 "memory_domains": [ 00:14:28.527 { 00:14:28.527 "dma_device_id": "system", 00:14:28.527 "dma_device_type": 1 00:14:28.527 }, 00:14:28.527 { 00:14:28.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.527 "dma_device_type": 2 00:14:28.527 } 00:14:28.527 ], 00:14:28.527 "driver_specific": {} 00:14:28.527 } 00:14:28.527 ] 00:14:28.527 02:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:28.527 02:15:16 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:28.527 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:28.527 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:28.784 [2024-05-15 02:15:16.679424] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:28.784 [2024-05-15 02:15:16.679481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:28.784 [2024-05-15 02:15:16.679490] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.784 [2024-05-15 02:15:16.679924] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.784 02:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.352 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.352 "name": "Existed_Raid", 00:14:29.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.352 "strip_size_kb": 64, 00:14:29.353 "state": "configuring", 00:14:29.353 "raid_level": "raid0", 00:14:29.353 "superblock": false, 00:14:29.353 "num_base_bdevs": 3, 00:14:29.353 "num_base_bdevs_discovered": 2, 00:14:29.353 "num_base_bdevs_operational": 3, 00:14:29.353 "base_bdevs_list": [ 00:14:29.353 { 00:14:29.353 "name": "BaseBdev1", 00:14:29.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.353 "is_configured": false, 00:14:29.353 "data_offset": 0, 00:14:29.353 "data_size": 0 00:14:29.353 }, 00:14:29.353 { 00:14:29.353 "name": "BaseBdev2", 00:14:29.353 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:29.353 "is_configured": true, 00:14:29.353 "data_offset": 0, 00:14:29.353 "data_size": 65536 00:14:29.353 }, 00:14:29.353 { 00:14:29.353 "name": "BaseBdev3", 00:14:29.353 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:29.353 "is_configured": true, 00:14:29.353 "data_offset": 0, 00:14:29.353 "data_size": 65536 
00:14:29.353 } 00:14:29.353 ] 00:14:29.353 }' 00:14:29.353 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.353 02:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.615 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:29.881 [2024-05-15 02:15:17.767554] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.881 02:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.151 02:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.151 "name": "Existed_Raid", 00:14:30.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.151 "strip_size_kb": 64, 00:14:30.151 "state": "configuring", 00:14:30.151 "raid_level": "raid0", 00:14:30.151 "superblock": false, 00:14:30.151 "num_base_bdevs": 3, 00:14:30.151 "num_base_bdevs_discovered": 1, 00:14:30.151 "num_base_bdevs_operational": 3, 00:14:30.151 "base_bdevs_list": [ 00:14:30.151 { 00:14:30.151 "name": "BaseBdev1", 00:14:30.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.151 "is_configured": false, 00:14:30.151 "data_offset": 0, 00:14:30.151 "data_size": 0 00:14:30.151 }, 00:14:30.151 { 00:14:30.151 "name": null, 00:14:30.151 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:30.151 "is_configured": false, 00:14:30.151 "data_offset": 0, 00:14:30.151 "data_size": 65536 00:14:30.151 }, 00:14:30.151 { 00:14:30.151 "name": "BaseBdev3", 00:14:30.151 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:30.151 "is_configured": true, 00:14:30.151 "data_offset": 0, 00:14:30.151 "data_size": 65536 00:14:30.151 } 00:14:30.151 ] 00:14:30.151 }' 00:14:30.151 02:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.151 02:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.422 02:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.422 02:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:30.740 02:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:30.740 02:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.015 [2024-05-15 02:15:18.875791] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.015 BaseBdev1 00:14:31.015 02:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:31.015 02:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:31.015 02:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:31.015 02:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:31.015 02:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:31.015 02:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:31.015 02:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.278 02:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.536 [ 00:14:31.536 { 00:14:31.536 "name": "BaseBdev1", 00:14:31.536 "aliases": [ 00:14:31.536 "f9cadd18-1260-11ef-99fd-bfc7c66e2865" 00:14:31.536 ], 00:14:31.536 "product_name": "Malloc disk", 00:14:31.536 "block_size": 512, 00:14:31.536 "num_blocks": 65536, 00:14:31.536 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:31.536 "assigned_rate_limits": { 00:14:31.536 "rw_ios_per_sec": 0, 00:14:31.536 "rw_mbytes_per_sec": 0, 00:14:31.536 "r_mbytes_per_sec": 0, 00:14:31.536 "w_mbytes_per_sec": 0 00:14:31.536 }, 00:14:31.536 "claimed": true, 00:14:31.536 "claim_type": "exclusive_write", 00:14:31.536 "zoned": false, 00:14:31.536 "supported_io_types": { 00:14:31.536 "read": true, 00:14:31.536 "write": true, 00:14:31.536 "unmap": true, 00:14:31.536 "write_zeroes": true, 00:14:31.536 "flush": true, 00:14:31.536 "reset": true, 00:14:31.536 "compare": false, 00:14:31.536 "compare_and_write": false, 00:14:31.536 "abort": true, 00:14:31.536 "nvme_admin": false, 00:14:31.536 "nvme_io": false 00:14:31.536 }, 00:14:31.536 "memory_domains": [ 00:14:31.536 { 00:14:31.536 "dma_device_id": "system", 00:14:31.536 "dma_device_type": 1 00:14:31.536 }, 00:14:31.536 { 00:14:31.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.536 "dma_device_type": 2 00:14:31.536 } 00:14:31.536 ], 00:14:31.536 "driver_specific": {} 00:14:31.536 } 00:14:31.536 ] 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.536 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.793 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:31.793 "name": "Existed_Raid", 00:14:31.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.793 "strip_size_kb": 64, 00:14:31.793 "state": "configuring", 00:14:31.793 "raid_level": "raid0", 00:14:31.793 "superblock": false, 00:14:31.793 "num_base_bdevs": 3, 00:14:31.793 "num_base_bdevs_discovered": 2, 00:14:31.793 "num_base_bdevs_operational": 3, 00:14:31.793 "base_bdevs_list": [ 00:14:31.793 { 00:14:31.793 "name": "BaseBdev1", 00:14:31.793 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:31.793 "is_configured": true, 00:14:31.793 "data_offset": 0, 00:14:31.793 "data_size": 65536 00:14:31.793 }, 00:14:31.793 { 00:14:31.793 "name": null, 00:14:31.793 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:31.793 "is_configured": false, 00:14:31.793 "data_offset": 0, 00:14:31.793 "data_size": 65536 00:14:31.793 }, 00:14:31.793 { 00:14:31.793 "name": "BaseBdev3", 00:14:31.793 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:31.793 "is_configured": true, 00:14:31.793 "data_offset": 0, 00:14:31.793 "data_size": 65536 00:14:31.793 } 00:14:31.793 ] 00:14:31.793 }' 00:14:31.793 02:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:31.793 02:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.357 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.357 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:32.615 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:32.615 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:32.873 [2024-05-15 02:15:20.656278] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.873 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.131 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:33.131 "name": "Existed_Raid", 00:14:33.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.131 "strip_size_kb": 64, 00:14:33.131 "state": "configuring", 00:14:33.131 "raid_level": "raid0", 00:14:33.131 "superblock": false, 00:14:33.131 "num_base_bdevs": 3, 00:14:33.131 "num_base_bdevs_discovered": 1, 00:14:33.131 "num_base_bdevs_operational": 3, 00:14:33.131 "base_bdevs_list": [ 00:14:33.131 { 00:14:33.131 "name": "BaseBdev1", 00:14:33.131 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:33.131 "is_configured": true, 00:14:33.131 "data_offset": 0, 00:14:33.131 "data_size": 65536 00:14:33.131 }, 00:14:33.131 { 00:14:33.131 "name": null, 00:14:33.131 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:33.131 "is_configured": false, 00:14:33.131 "data_offset": 0, 00:14:33.131 "data_size": 65536 00:14:33.131 }, 00:14:33.131 { 00:14:33.131 "name": null, 00:14:33.131 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:33.131 "is_configured": false, 00:14:33.131 "data_offset": 0, 00:14:33.131 "data_size": 65536 00:14:33.131 } 00:14:33.131 ] 00:14:33.131 }' 00:14:33.131 02:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:33.131 02:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.389 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.389 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:33.647 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:33.647 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:33.905 [2024-05-15 02:15:21.836671] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:33.905 02:15:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.905 02:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.162 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.163 "name": "Existed_Raid", 00:14:34.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.163 "strip_size_kb": 64, 00:14:34.163 "state": "configuring", 00:14:34.163 "raid_level": "raid0", 00:14:34.163 "superblock": false, 00:14:34.163 "num_base_bdevs": 3, 00:14:34.163 "num_base_bdevs_discovered": 2, 00:14:34.163 "num_base_bdevs_operational": 3, 00:14:34.163 "base_bdevs_list": [ 00:14:34.163 { 00:14:34.163 "name": "BaseBdev1", 00:14:34.163 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:34.163 "is_configured": true, 00:14:34.163 "data_offset": 0, 00:14:34.163 "data_size": 65536 00:14:34.163 }, 00:14:34.163 { 00:14:34.163 "name": null, 00:14:34.163 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:34.163 "is_configured": false, 00:14:34.163 "data_offset": 0, 00:14:34.163 "data_size": 65536 00:14:34.163 }, 00:14:34.163 { 00:14:34.163 "name": "BaseBdev3", 00:14:34.163 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:34.163 "is_configured": true, 00:14:34.163 "data_offset": 0, 00:14:34.163 "data_size": 65536 00:14:34.163 } 00:14:34.163 ] 00:14:34.163 }' 00:14:34.163 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.163 02:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.728 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.728 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:34.728 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:14:34.728 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:34.986 [2024-05-15 02:15:22.945043] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 3 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.986 02:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.553 02:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.553 "name": "Existed_Raid", 00:14:35.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.553 "strip_size_kb": 64, 00:14:35.553 "state": "configuring", 00:14:35.553 "raid_level": "raid0", 00:14:35.553 "superblock": false, 00:14:35.553 "num_base_bdevs": 3, 00:14:35.553 "num_base_bdevs_discovered": 1, 00:14:35.553 "num_base_bdevs_operational": 3, 00:14:35.553 "base_bdevs_list": [ 00:14:35.553 { 00:14:35.553 "name": null, 00:14:35.553 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:35.553 "is_configured": false, 00:14:35.553 "data_offset": 0, 00:14:35.553 "data_size": 65536 00:14:35.553 }, 00:14:35.553 { 00:14:35.553 "name": null, 00:14:35.553 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:35.553 "is_configured": false, 00:14:35.553 "data_offset": 0, 00:14:35.553 "data_size": 65536 00:14:35.553 }, 00:14:35.553 { 00:14:35.553 "name": "BaseBdev3", 00:14:35.553 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:35.553 "is_configured": true, 00:14:35.553 "data_offset": 0, 00:14:35.553 "data_size": 65536 00:14:35.553 } 00:14:35.553 ] 00:14:35.553 }' 00:14:35.553 02:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.553 02:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.811 02:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.811 02:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.074 02:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:14:36.074 02:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:36.332 [2024-05-15 02:15:24.162254] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.332 02:15:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.332 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.333 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.333 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.333 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.591 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.591 "name": "Existed_Raid", 00:14:36.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.591 "strip_size_kb": 64, 00:14:36.591 "state": "configuring", 00:14:36.591 "raid_level": "raid0", 00:14:36.591 "superblock": false, 00:14:36.591 "num_base_bdevs": 3, 00:14:36.591 "num_base_bdevs_discovered": 2, 00:14:36.591 "num_base_bdevs_operational": 3, 00:14:36.591 "base_bdevs_list": [ 00:14:36.591 { 00:14:36.591 "name": null, 00:14:36.591 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:36.591 "is_configured": false, 00:14:36.591 "data_offset": 0, 00:14:36.591 "data_size": 65536 00:14:36.591 }, 00:14:36.591 { 00:14:36.591 "name": "BaseBdev2", 00:14:36.591 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:36.591 "is_configured": true, 00:14:36.591 "data_offset": 0, 00:14:36.591 "data_size": 65536 00:14:36.591 }, 00:14:36.591 { 00:14:36.591 "name": "BaseBdev3", 00:14:36.591 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:36.591 "is_configured": true, 00:14:36.591 "data_offset": 0, 00:14:36.591 "data_size": 65536 00:14:36.591 } 00:14:36.591 ] 00:14:36.591 }' 00:14:36.591 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.591 02:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.849 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.850 02:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:37.108 02:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:14:37.108 02:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.108 02:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 
-- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:37.367 02:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f9cadd18-1260-11ef-99fd-bfc7c66e2865 00:14:37.626 [2024-05-15 02:15:25.586810] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:37.626 [2024-05-15 02:15:25.586839] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ceeba00 00:14:37.626 [2024-05-15 02:15:25.586844] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:37.626 [2024-05-15 02:15:25.586867] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cf4ee20 00:14:37.626 [2024-05-15 02:15:25.586928] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ceeba00 00:14:37.626 [2024-05-15 02:15:25.586932] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ceeba00 00:14:37.626 [2024-05-15 02:15:25.586963] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.626 NewBaseBdev 00:14:37.626 02:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:14:37.626 02:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:37.626 02:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:37.626 02:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:37.626 02:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:37.626 02:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:37.626 02:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.885 02:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:38.143 [ 00:14:38.143 { 00:14:38.143 "name": "NewBaseBdev", 00:14:38.143 "aliases": [ 00:14:38.143 "f9cadd18-1260-11ef-99fd-bfc7c66e2865" 00:14:38.143 ], 00:14:38.143 "product_name": "Malloc disk", 00:14:38.143 "block_size": 512, 00:14:38.143 "num_blocks": 65536, 00:14:38.143 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:38.143 "assigned_rate_limits": { 00:14:38.143 "rw_ios_per_sec": 0, 00:14:38.143 "rw_mbytes_per_sec": 0, 00:14:38.143 "r_mbytes_per_sec": 0, 00:14:38.143 "w_mbytes_per_sec": 0 00:14:38.143 }, 00:14:38.143 "claimed": true, 00:14:38.143 "claim_type": "exclusive_write", 00:14:38.143 "zoned": false, 00:14:38.143 "supported_io_types": { 00:14:38.143 "read": true, 00:14:38.143 "write": true, 00:14:38.143 "unmap": true, 00:14:38.143 "write_zeroes": true, 00:14:38.143 "flush": true, 00:14:38.143 "reset": true, 00:14:38.143 "compare": false, 00:14:38.143 "compare_and_write": false, 00:14:38.143 "abort": true, 00:14:38.143 "nvme_admin": false, 00:14:38.143 "nvme_io": false 00:14:38.143 }, 00:14:38.143 "memory_domains": [ 00:14:38.143 { 00:14:38.143 "dma_device_id": "system", 00:14:38.143 "dma_device_type": 1 00:14:38.143 }, 00:14:38.143 { 00:14:38.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.143 "dma_device_type": 2 00:14:38.143 
} 00:14:38.143 ], 00:14:38.144 "driver_specific": {} 00:14:38.144 } 00:14:38.144 ] 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.144 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.403 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.403 "name": "Existed_Raid", 00:14:38.403 "uuid": "fdcae858-1260-11ef-99fd-bfc7c66e2865", 00:14:38.403 "strip_size_kb": 64, 00:14:38.403 "state": "online", 00:14:38.403 "raid_level": "raid0", 00:14:38.403 "superblock": false, 00:14:38.403 "num_base_bdevs": 3, 00:14:38.403 "num_base_bdevs_discovered": 3, 00:14:38.403 "num_base_bdevs_operational": 3, 00:14:38.403 "base_bdevs_list": [ 00:14:38.403 { 00:14:38.403 "name": "NewBaseBdev", 00:14:38.403 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:38.403 "is_configured": true, 00:14:38.403 "data_offset": 0, 00:14:38.403 "data_size": 65536 00:14:38.403 }, 00:14:38.403 { 00:14:38.403 "name": "BaseBdev2", 00:14:38.403 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:38.403 "is_configured": true, 00:14:38.403 "data_offset": 0, 00:14:38.403 "data_size": 65536 00:14:38.403 }, 00:14:38.403 { 00:14:38.403 "name": "BaseBdev3", 00:14:38.403 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:38.403 "is_configured": true, 00:14:38.403 "data_offset": 0, 00:14:38.403 "data_size": 65536 00:14:38.403 } 00:14:38.403 ] 00:14:38.403 }' 00:14:38.403 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.403 02:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.970 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:14:38.970 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:38.970 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:38.970 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:38.970 02:15:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:38.970 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:38.970 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:38.970 02:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:38.970 [2024-05-15 02:15:26.987129] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.229 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:39.229 "name": "Existed_Raid", 00:14:39.229 "aliases": [ 00:14:39.229 "fdcae858-1260-11ef-99fd-bfc7c66e2865" 00:14:39.229 ], 00:14:39.229 "product_name": "Raid Volume", 00:14:39.229 "block_size": 512, 00:14:39.229 "num_blocks": 196608, 00:14:39.229 "uuid": "fdcae858-1260-11ef-99fd-bfc7c66e2865", 00:14:39.229 "assigned_rate_limits": { 00:14:39.229 "rw_ios_per_sec": 0, 00:14:39.229 "rw_mbytes_per_sec": 0, 00:14:39.229 "r_mbytes_per_sec": 0, 00:14:39.229 "w_mbytes_per_sec": 0 00:14:39.229 }, 00:14:39.229 "claimed": false, 00:14:39.229 "zoned": false, 00:14:39.229 "supported_io_types": { 00:14:39.229 "read": true, 00:14:39.229 "write": true, 00:14:39.229 "unmap": true, 00:14:39.229 "write_zeroes": true, 00:14:39.229 "flush": true, 00:14:39.229 "reset": true, 00:14:39.229 "compare": false, 00:14:39.229 "compare_and_write": false, 00:14:39.229 "abort": false, 00:14:39.229 "nvme_admin": false, 00:14:39.229 "nvme_io": false 00:14:39.229 }, 00:14:39.229 "memory_domains": [ 00:14:39.229 { 00:14:39.229 "dma_device_id": "system", 00:14:39.229 "dma_device_type": 1 00:14:39.229 }, 00:14:39.229 { 00:14:39.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.229 "dma_device_type": 2 00:14:39.229 }, 00:14:39.229 { 00:14:39.229 "dma_device_id": "system", 00:14:39.229 "dma_device_type": 1 00:14:39.229 }, 00:14:39.229 { 00:14:39.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.229 "dma_device_type": 2 00:14:39.229 }, 00:14:39.229 { 00:14:39.229 "dma_device_id": "system", 00:14:39.229 "dma_device_type": 1 00:14:39.229 }, 00:14:39.229 { 00:14:39.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.229 "dma_device_type": 2 00:14:39.229 } 00:14:39.229 ], 00:14:39.229 "driver_specific": { 00:14:39.229 "raid": { 00:14:39.229 "uuid": "fdcae858-1260-11ef-99fd-bfc7c66e2865", 00:14:39.229 "strip_size_kb": 64, 00:14:39.229 "state": "online", 00:14:39.229 "raid_level": "raid0", 00:14:39.229 "superblock": false, 00:14:39.229 "num_base_bdevs": 3, 00:14:39.229 "num_base_bdevs_discovered": 3, 00:14:39.229 "num_base_bdevs_operational": 3, 00:14:39.229 "base_bdevs_list": [ 00:14:39.229 { 00:14:39.229 "name": "NewBaseBdev", 00:14:39.229 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:39.229 "is_configured": true, 00:14:39.229 "data_offset": 0, 00:14:39.229 "data_size": 65536 00:14:39.229 }, 00:14:39.229 { 00:14:39.229 "name": "BaseBdev2", 00:14:39.229 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:39.229 "is_configured": true, 00:14:39.229 "data_offset": 0, 00:14:39.229 "data_size": 65536 00:14:39.229 }, 00:14:39.229 { 00:14:39.229 "name": "BaseBdev3", 00:14:39.229 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:39.229 "is_configured": true, 00:14:39.229 "data_offset": 0, 00:14:39.229 "data_size": 65536 00:14:39.229 } 00:14:39.229 ] 00:14:39.229 } 00:14:39.229 } 00:14:39.229 }' 00:14:39.229 02:15:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.229 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:14:39.229 BaseBdev2 00:14:39.229 BaseBdev3' 00:14:39.229 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:39.229 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:39.229 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:39.491 "name": "NewBaseBdev", 00:14:39.491 "aliases": [ 00:14:39.491 "f9cadd18-1260-11ef-99fd-bfc7c66e2865" 00:14:39.491 ], 00:14:39.491 "product_name": "Malloc disk", 00:14:39.491 "block_size": 512, 00:14:39.491 "num_blocks": 65536, 00:14:39.491 "uuid": "f9cadd18-1260-11ef-99fd-bfc7c66e2865", 00:14:39.491 "assigned_rate_limits": { 00:14:39.491 "rw_ios_per_sec": 0, 00:14:39.491 "rw_mbytes_per_sec": 0, 00:14:39.491 "r_mbytes_per_sec": 0, 00:14:39.491 "w_mbytes_per_sec": 0 00:14:39.491 }, 00:14:39.491 "claimed": true, 00:14:39.491 "claim_type": "exclusive_write", 00:14:39.491 "zoned": false, 00:14:39.491 "supported_io_types": { 00:14:39.491 "read": true, 00:14:39.491 "write": true, 00:14:39.491 "unmap": true, 00:14:39.491 "write_zeroes": true, 00:14:39.491 "flush": true, 00:14:39.491 "reset": true, 00:14:39.491 "compare": false, 00:14:39.491 "compare_and_write": false, 00:14:39.491 "abort": true, 00:14:39.491 "nvme_admin": false, 00:14:39.491 "nvme_io": false 00:14:39.491 }, 00:14:39.491 "memory_domains": [ 00:14:39.491 { 00:14:39.491 "dma_device_id": "system", 00:14:39.491 "dma_device_type": 1 00:14:39.491 }, 00:14:39.491 { 00:14:39.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.491 "dma_device_type": 2 00:14:39.491 } 00:14:39.491 ], 00:14:39.491 "driver_specific": {} 00:14:39.491 }' 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:39.491 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:39.776 "name": "BaseBdev2", 00:14:39.776 "aliases": [ 00:14:39.776 "f769a9de-1260-11ef-99fd-bfc7c66e2865" 00:14:39.776 ], 00:14:39.776 "product_name": "Malloc disk", 00:14:39.776 "block_size": 512, 00:14:39.776 "num_blocks": 65536, 00:14:39.776 "uuid": "f769a9de-1260-11ef-99fd-bfc7c66e2865", 00:14:39.776 "assigned_rate_limits": { 00:14:39.776 "rw_ios_per_sec": 0, 00:14:39.776 "rw_mbytes_per_sec": 0, 00:14:39.776 "r_mbytes_per_sec": 0, 00:14:39.776 "w_mbytes_per_sec": 0 00:14:39.776 }, 00:14:39.776 "claimed": true, 00:14:39.776 "claim_type": "exclusive_write", 00:14:39.776 "zoned": false, 00:14:39.776 "supported_io_types": { 00:14:39.776 "read": true, 00:14:39.776 "write": true, 00:14:39.776 "unmap": true, 00:14:39.776 "write_zeroes": true, 00:14:39.776 "flush": true, 00:14:39.776 "reset": true, 00:14:39.776 "compare": false, 00:14:39.776 "compare_and_write": false, 00:14:39.776 "abort": true, 00:14:39.776 "nvme_admin": false, 00:14:39.776 "nvme_io": false 00:14:39.776 }, 00:14:39.776 "memory_domains": [ 00:14:39.776 { 00:14:39.776 "dma_device_id": "system", 00:14:39.776 "dma_device_type": 1 00:14:39.776 }, 00:14:39.776 { 00:14:39.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.776 "dma_device_type": 2 00:14:39.776 } 00:14:39.776 ], 00:14:39.776 "driver_specific": {} 00:14:39.776 }' 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:39.776 02:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:40.042 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:40.042 "name": "BaseBdev3", 00:14:40.042 "aliases": [ 00:14:40.042 "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865" 00:14:40.042 ], 00:14:40.042 "product_name": "Malloc 
disk", 00:14:40.042 "block_size": 512, 00:14:40.042 "num_blocks": 65536, 00:14:40.042 "uuid": "f7ec4bdc-1260-11ef-99fd-bfc7c66e2865", 00:14:40.042 "assigned_rate_limits": { 00:14:40.042 "rw_ios_per_sec": 0, 00:14:40.042 "rw_mbytes_per_sec": 0, 00:14:40.042 "r_mbytes_per_sec": 0, 00:14:40.042 "w_mbytes_per_sec": 0 00:14:40.042 }, 00:14:40.042 "claimed": true, 00:14:40.042 "claim_type": "exclusive_write", 00:14:40.042 "zoned": false, 00:14:40.042 "supported_io_types": { 00:14:40.042 "read": true, 00:14:40.042 "write": true, 00:14:40.042 "unmap": true, 00:14:40.042 "write_zeroes": true, 00:14:40.042 "flush": true, 00:14:40.042 "reset": true, 00:14:40.042 "compare": false, 00:14:40.042 "compare_and_write": false, 00:14:40.042 "abort": true, 00:14:40.042 "nvme_admin": false, 00:14:40.042 "nvme_io": false 00:14:40.042 }, 00:14:40.042 "memory_domains": [ 00:14:40.042 { 00:14:40.042 "dma_device_id": "system", 00:14:40.042 "dma_device_type": 1 00:14:40.042 }, 00:14:40.042 { 00:14:40.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.042 "dma_device_type": 2 00:14:40.042 } 00:14:40.042 ], 00:14:40.042 "driver_specific": {} 00:14:40.042 }' 00:14:40.042 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:40.042 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:40.042 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:40.042 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:40.042 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:40.300 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:40.300 [2024-05-15 02:15:28.303441] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.300 [2024-05-15 02:15:28.303473] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.300 [2024-05-15 02:15:28.303493] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.300 [2024-05-15 02:15:28.303509] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.300 [2024-05-15 02:15:28.303513] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceeba00 name Existed_Raid, state offline 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 51553 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 51553 ']' 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # kill -0 51553 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 51553 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51553' 00:14:40.558 killing process with pid 51553 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 51553 00:14:40.558 [2024-05-15 02:15:28.337437] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 51553 00:14:40.558 [2024-05-15 02:15:28.351712] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:14:40.558 00:14:40.558 real 0m25.652s 00:14:40.558 user 0m46.755s 00:14:40.558 sys 0m3.842s 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.558 ************************************ 00:14:40.558 END TEST raid_state_function_test 00:14:40.558 ************************************ 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.558 02:15:28 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:40.558 02:15:28 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:40.558 02:15:28 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:40.558 02:15:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.558 ************************************ 00:14:40.558 START TEST raid_state_function_test_sb 00:14:40.558 ************************************ 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:40.558 02:15:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=52286 00:14:40.558 Process raid pid: 52286 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 52286' 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 52286 /var/tmp/spdk-raid.sock 00:14:40.558 02:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:40.559 02:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 52286 ']' 00:14:40.559 02:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.559 02:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:40.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:40.559 02:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.559 02:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:40.559 02:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.816 [2024-05-15 02:15:28.575704] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:14:40.816 [2024-05-15 02:15:28.575943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:14:41.075 EAL: TSC is not safe to use in SMP mode 00:14:41.075 EAL: TSC is not invariant 00:14:41.075 [2024-05-15 02:15:29.072711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.333 [2024-05-15 02:15:29.161063] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:41.333 [2024-05-15 02:15:29.163324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.333 [2024-05-15 02:15:29.164075] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.333 [2024-05-15 02:15:29.164089] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.898 02:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:41.898 02:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:14:41.898 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:41.898 [2024-05-15 02:15:29.903766] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.898 [2024-05-15 02:15:29.903837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.898 [2024-05-15 02:15:29.903842] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.898 [2024-05-15 02:15:29.903851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.898 [2024-05-15 02:15:29.903854] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.898 [2024-05-15 02:15:29.903861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.156 02:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:42.415 02:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.415 "name": "Existed_Raid", 00:14:42.415 "uuid": "005d9d9f-1261-11ef-99fd-bfc7c66e2865", 00:14:42.415 "strip_size_kb": 64, 00:14:42.415 "state": "configuring", 00:14:42.415 "raid_level": "raid0", 00:14:42.415 "superblock": true, 00:14:42.415 "num_base_bdevs": 3, 00:14:42.415 "num_base_bdevs_discovered": 0, 00:14:42.415 "num_base_bdevs_operational": 3, 00:14:42.415 "base_bdevs_list": [ 00:14:42.415 { 00:14:42.415 "name": "BaseBdev1", 00:14:42.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.415 "is_configured": false, 00:14:42.415 "data_offset": 0, 00:14:42.415 "data_size": 0 00:14:42.415 }, 00:14:42.415 { 00:14:42.415 "name": "BaseBdev2", 00:14:42.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.415 "is_configured": false, 00:14:42.415 "data_offset": 0, 00:14:42.415 "data_size": 0 00:14:42.415 }, 00:14:42.415 { 00:14:42.415 "name": "BaseBdev3", 00:14:42.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.415 "is_configured": false, 00:14:42.415 "data_offset": 0, 00:14:42.415 "data_size": 0 00:14:42.415 } 00:14:42.415 ] 00:14:42.415 }' 00:14:42.416 02:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.416 02:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.675 02:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.958 [2024-05-15 02:15:30.956963] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.958 [2024-05-15 02:15:30.957003] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be07500 name Existed_Raid, state configuring 00:14:43.216 02:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:43.474 [2024-05-15 02:15:31.261024] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.474 [2024-05-15 02:15:31.261098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.474 [2024-05-15 02:15:31.261104] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.474 [2024-05-15 02:15:31.261113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.474 [2024-05-15 02:15:31.261117] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.474 [2024-05-15 02:15:31.261124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.474 02:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.731 [2024-05-15 02:15:31.522051] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.731 BaseBdev1 00:14:43.731 02:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:14:43.731 02:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:43.731 
02:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:43.731 02:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:43.731 02:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:43.731 02:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:43.732 02:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.989 02:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.248 [ 00:14:44.248 { 00:14:44.248 "name": "BaseBdev1", 00:14:44.248 "aliases": [ 00:14:44.248 "0154648d-1261-11ef-99fd-bfc7c66e2865" 00:14:44.248 ], 00:14:44.248 "product_name": "Malloc disk", 00:14:44.248 "block_size": 512, 00:14:44.248 "num_blocks": 65536, 00:14:44.248 "uuid": "0154648d-1261-11ef-99fd-bfc7c66e2865", 00:14:44.248 "assigned_rate_limits": { 00:14:44.248 "rw_ios_per_sec": 0, 00:14:44.248 "rw_mbytes_per_sec": 0, 00:14:44.248 "r_mbytes_per_sec": 0, 00:14:44.248 "w_mbytes_per_sec": 0 00:14:44.248 }, 00:14:44.248 "claimed": true, 00:14:44.248 "claim_type": "exclusive_write", 00:14:44.248 "zoned": false, 00:14:44.248 "supported_io_types": { 00:14:44.248 "read": true, 00:14:44.248 "write": true, 00:14:44.248 "unmap": true, 00:14:44.248 "write_zeroes": true, 00:14:44.248 "flush": true, 00:14:44.248 "reset": true, 00:14:44.248 "compare": false, 00:14:44.248 "compare_and_write": false, 00:14:44.248 "abort": true, 00:14:44.248 "nvme_admin": false, 00:14:44.248 "nvme_io": false 00:14:44.248 }, 00:14:44.248 "memory_domains": [ 00:14:44.248 { 00:14:44.248 "dma_device_id": "system", 00:14:44.248 "dma_device_type": 1 00:14:44.248 }, 00:14:44.248 { 00:14:44.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.248 "dma_device_type": 2 00:14:44.248 } 00:14:44.248 ], 00:14:44.248 "driver_specific": {} 00:14:44.248 } 00:14:44.248 ] 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.248 02:15:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.248 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.506 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.506 "name": "Existed_Raid", 00:14:44.506 "uuid": "012cb769-1261-11ef-99fd-bfc7c66e2865", 00:14:44.506 "strip_size_kb": 64, 00:14:44.506 "state": "configuring", 00:14:44.506 "raid_level": "raid0", 00:14:44.506 "superblock": true, 00:14:44.506 "num_base_bdevs": 3, 00:14:44.506 "num_base_bdevs_discovered": 1, 00:14:44.506 "num_base_bdevs_operational": 3, 00:14:44.506 "base_bdevs_list": [ 00:14:44.506 { 00:14:44.506 "name": "BaseBdev1", 00:14:44.506 "uuid": "0154648d-1261-11ef-99fd-bfc7c66e2865", 00:14:44.506 "is_configured": true, 00:14:44.506 "data_offset": 2048, 00:14:44.506 "data_size": 63488 00:14:44.506 }, 00:14:44.506 { 00:14:44.506 "name": "BaseBdev2", 00:14:44.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.506 "is_configured": false, 00:14:44.506 "data_offset": 0, 00:14:44.506 "data_size": 0 00:14:44.506 }, 00:14:44.506 { 00:14:44.506 "name": "BaseBdev3", 00:14:44.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.506 "is_configured": false, 00:14:44.506 "data_offset": 0, 00:14:44.506 "data_size": 0 00:14:44.506 } 00:14:44.506 ] 00:14:44.506 }' 00:14:44.506 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.506 02:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.072 02:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:45.330 [2024-05-15 02:15:33.177177] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.330 [2024-05-15 02:15:33.177217] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be07500 name Existed_Raid, state configuring 00:14:45.330 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:45.590 [2024-05-15 02:15:33.433205] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.590 [2024-05-15 02:15:33.433919] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.590 [2024-05-15 02:15:33.433971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.590 [2024-05-15 02:15:33.433975] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.590 [2024-05-15 02:15:33.433984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.590 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.848 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.848 "name": "Existed_Raid", 00:14:45.848 "uuid": "02782a29-1261-11ef-99fd-bfc7c66e2865", 00:14:45.848 "strip_size_kb": 64, 00:14:45.848 "state": "configuring", 00:14:45.848 "raid_level": "raid0", 00:14:45.848 "superblock": true, 00:14:45.848 "num_base_bdevs": 3, 00:14:45.848 "num_base_bdevs_discovered": 1, 00:14:45.848 "num_base_bdevs_operational": 3, 00:14:45.848 "base_bdevs_list": [ 00:14:45.848 { 00:14:45.848 "name": "BaseBdev1", 00:14:45.848 "uuid": "0154648d-1261-11ef-99fd-bfc7c66e2865", 00:14:45.848 "is_configured": true, 00:14:45.848 "data_offset": 2048, 00:14:45.848 "data_size": 63488 00:14:45.848 }, 00:14:45.848 { 00:14:45.848 "name": "BaseBdev2", 00:14:45.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.848 "is_configured": false, 00:14:45.848 "data_offset": 0, 00:14:45.848 "data_size": 0 00:14:45.848 }, 00:14:45.848 { 00:14:45.848 "name": "BaseBdev3", 00:14:45.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.848 "is_configured": false, 00:14:45.848 "data_offset": 0, 00:14:45.848 "data_size": 0 00:14:45.848 } 00:14:45.848 ] 00:14:45.848 }' 00:14:45.848 02:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.848 02:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.107 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.379 [2024-05-15 02:15:34.353405] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.379 BaseBdev2 00:14:46.379 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:14:46.379 02:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:46.379 02:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:46.379 02:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:46.379 02:15:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:46.379 02:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:46.379 02:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.638 02:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.895 [ 00:14:46.895 { 00:14:46.895 "name": "BaseBdev2", 00:14:46.895 "aliases": [ 00:14:46.895 "03048f4a-1261-11ef-99fd-bfc7c66e2865" 00:14:46.895 ], 00:14:46.895 "product_name": "Malloc disk", 00:14:46.895 "block_size": 512, 00:14:46.895 "num_blocks": 65536, 00:14:46.895 "uuid": "03048f4a-1261-11ef-99fd-bfc7c66e2865", 00:14:46.895 "assigned_rate_limits": { 00:14:46.895 "rw_ios_per_sec": 0, 00:14:46.895 "rw_mbytes_per_sec": 0, 00:14:46.895 "r_mbytes_per_sec": 0, 00:14:46.895 "w_mbytes_per_sec": 0 00:14:46.895 }, 00:14:46.895 "claimed": true, 00:14:46.895 "claim_type": "exclusive_write", 00:14:46.895 "zoned": false, 00:14:46.895 "supported_io_types": { 00:14:46.895 "read": true, 00:14:46.896 "write": true, 00:14:46.896 "unmap": true, 00:14:46.896 "write_zeroes": true, 00:14:46.896 "flush": true, 00:14:46.896 "reset": true, 00:14:46.896 "compare": false, 00:14:46.896 "compare_and_write": false, 00:14:46.896 "abort": true, 00:14:46.896 "nvme_admin": false, 00:14:46.896 "nvme_io": false 00:14:46.896 }, 00:14:46.896 "memory_domains": [ 00:14:46.896 { 00:14:46.896 "dma_device_id": "system", 00:14:46.896 "dma_device_type": 1 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.896 "dma_device_type": 2 00:14:46.896 } 00:14:46.896 ], 00:14:46.896 "driver_specific": {} 00:14:46.896 } 00:14:46.896 ] 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.896 02:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.153 02:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.154 "name": "Existed_Raid", 00:14:47.154 "uuid": "02782a29-1261-11ef-99fd-bfc7c66e2865", 00:14:47.154 "strip_size_kb": 64, 00:14:47.154 "state": "configuring", 00:14:47.154 "raid_level": "raid0", 00:14:47.154 "superblock": true, 00:14:47.154 "num_base_bdevs": 3, 00:14:47.154 "num_base_bdevs_discovered": 2, 00:14:47.154 "num_base_bdevs_operational": 3, 00:14:47.154 "base_bdevs_list": [ 00:14:47.154 { 00:14:47.154 "name": "BaseBdev1", 00:14:47.154 "uuid": "0154648d-1261-11ef-99fd-bfc7c66e2865", 00:14:47.154 "is_configured": true, 00:14:47.154 "data_offset": 2048, 00:14:47.154 "data_size": 63488 00:14:47.154 }, 00:14:47.154 { 00:14:47.154 "name": "BaseBdev2", 00:14:47.154 "uuid": "03048f4a-1261-11ef-99fd-bfc7c66e2865", 00:14:47.154 "is_configured": true, 00:14:47.154 "data_offset": 2048, 00:14:47.154 "data_size": 63488 00:14:47.154 }, 00:14:47.154 { 00:14:47.154 "name": "BaseBdev3", 00:14:47.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.154 "is_configured": false, 00:14:47.154 "data_offset": 0, 00:14:47.154 "data_size": 0 00:14:47.154 } 00:14:47.154 ] 00:14:47.154 }' 00:14:47.154 02:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.154 02:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.410 02:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.667 [2024-05-15 02:15:35.653511] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.667 [2024-05-15 02:15:35.653582] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82be07a00 00:14:47.667 [2024-05-15 02:15:35.653588] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:47.667 [2024-05-15 02:15:35.653608] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be6aec0 00:14:47.667 [2024-05-15 02:15:35.653648] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82be07a00 00:14:47.667 [2024-05-15 02:15:35.653652] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82be07a00 00:14:47.667 [2024-05-15 02:15:35.653669] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.667 BaseBdev3 00:14:47.667 02:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:14:47.667 02:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:47.667 02:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:47.667 02:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:47.667 02:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:47.667 02:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:47.667 02:15:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.925 02:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.184 [ 00:14:48.184 { 00:14:48.184 "name": "BaseBdev3", 00:14:48.184 "aliases": [ 00:14:48.184 "03caf130-1261-11ef-99fd-bfc7c66e2865" 00:14:48.184 ], 00:14:48.184 "product_name": "Malloc disk", 00:14:48.184 "block_size": 512, 00:14:48.184 "num_blocks": 65536, 00:14:48.184 "uuid": "03caf130-1261-11ef-99fd-bfc7c66e2865", 00:14:48.184 "assigned_rate_limits": { 00:14:48.184 "rw_ios_per_sec": 0, 00:14:48.184 "rw_mbytes_per_sec": 0, 00:14:48.184 "r_mbytes_per_sec": 0, 00:14:48.184 "w_mbytes_per_sec": 0 00:14:48.184 }, 00:14:48.184 "claimed": true, 00:14:48.184 "claim_type": "exclusive_write", 00:14:48.184 "zoned": false, 00:14:48.184 "supported_io_types": { 00:14:48.184 "read": true, 00:14:48.184 "write": true, 00:14:48.184 "unmap": true, 00:14:48.184 "write_zeroes": true, 00:14:48.184 "flush": true, 00:14:48.184 "reset": true, 00:14:48.184 "compare": false, 00:14:48.184 "compare_and_write": false, 00:14:48.184 "abort": true, 00:14:48.184 "nvme_admin": false, 00:14:48.184 "nvme_io": false 00:14:48.184 }, 00:14:48.184 "memory_domains": [ 00:14:48.184 { 00:14:48.184 "dma_device_id": "system", 00:14:48.184 "dma_device_type": 1 00:14:48.184 }, 00:14:48.184 { 00:14:48.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.184 "dma_device_type": 2 00:14:48.184 } 00:14:48.184 ], 00:14:48.184 "driver_specific": {} 00:14:48.184 } 00:14:48.184 ] 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.443 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.701 02:15:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.701 "name": "Existed_Raid", 00:14:48.701 "uuid": "02782a29-1261-11ef-99fd-bfc7c66e2865", 00:14:48.701 "strip_size_kb": 64, 00:14:48.701 "state": "online", 00:14:48.701 "raid_level": "raid0", 00:14:48.701 "superblock": true, 00:14:48.701 "num_base_bdevs": 3, 00:14:48.701 "num_base_bdevs_discovered": 3, 00:14:48.701 "num_base_bdevs_operational": 3, 00:14:48.701 "base_bdevs_list": [ 00:14:48.701 { 00:14:48.701 "name": "BaseBdev1", 00:14:48.701 "uuid": "0154648d-1261-11ef-99fd-bfc7c66e2865", 00:14:48.701 "is_configured": true, 00:14:48.701 "data_offset": 2048, 00:14:48.701 "data_size": 63488 00:14:48.701 }, 00:14:48.701 { 00:14:48.701 "name": "BaseBdev2", 00:14:48.701 "uuid": "03048f4a-1261-11ef-99fd-bfc7c66e2865", 00:14:48.701 "is_configured": true, 00:14:48.701 "data_offset": 2048, 00:14:48.701 "data_size": 63488 00:14:48.701 }, 00:14:48.701 { 00:14:48.701 "name": "BaseBdev3", 00:14:48.701 "uuid": "03caf130-1261-11ef-99fd-bfc7c66e2865", 00:14:48.701 "is_configured": true, 00:14:48.701 "data_offset": 2048, 00:14:48.701 "data_size": 63488 00:14:48.701 } 00:14:48.701 ] 00:14:48.701 }' 00:14:48.701 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.701 02:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:48.959 02:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:49.218 [2024-05-15 02:15:37.049545] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.218 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:49.218 "name": "Existed_Raid", 00:14:49.218 "aliases": [ 00:14:49.218 "02782a29-1261-11ef-99fd-bfc7c66e2865" 00:14:49.218 ], 00:14:49.218 "product_name": "Raid Volume", 00:14:49.218 "block_size": 512, 00:14:49.218 "num_blocks": 190464, 00:14:49.218 "uuid": "02782a29-1261-11ef-99fd-bfc7c66e2865", 00:14:49.218 "assigned_rate_limits": { 00:14:49.218 "rw_ios_per_sec": 0, 00:14:49.218 "rw_mbytes_per_sec": 0, 00:14:49.218 "r_mbytes_per_sec": 0, 00:14:49.218 "w_mbytes_per_sec": 0 00:14:49.218 }, 00:14:49.218 "claimed": false, 00:14:49.218 "zoned": false, 00:14:49.218 "supported_io_types": { 00:14:49.218 "read": true, 00:14:49.218 "write": true, 00:14:49.218 "unmap": true, 00:14:49.218 "write_zeroes": true, 00:14:49.218 "flush": true, 00:14:49.218 "reset": true, 00:14:49.218 "compare": false, 00:14:49.218 "compare_and_write": false, 00:14:49.218 "abort": false, 00:14:49.218 "nvme_admin": false, 00:14:49.218 "nvme_io": false 
00:14:49.218 }, 00:14:49.218 "memory_domains": [ 00:14:49.218 { 00:14:49.218 "dma_device_id": "system", 00:14:49.218 "dma_device_type": 1 00:14:49.218 }, 00:14:49.218 { 00:14:49.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.218 "dma_device_type": 2 00:14:49.218 }, 00:14:49.218 { 00:14:49.218 "dma_device_id": "system", 00:14:49.218 "dma_device_type": 1 00:14:49.218 }, 00:14:49.218 { 00:14:49.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.218 "dma_device_type": 2 00:14:49.218 }, 00:14:49.218 { 00:14:49.218 "dma_device_id": "system", 00:14:49.218 "dma_device_type": 1 00:14:49.218 }, 00:14:49.218 { 00:14:49.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.218 "dma_device_type": 2 00:14:49.218 } 00:14:49.218 ], 00:14:49.218 "driver_specific": { 00:14:49.218 "raid": { 00:14:49.218 "uuid": "02782a29-1261-11ef-99fd-bfc7c66e2865", 00:14:49.218 "strip_size_kb": 64, 00:14:49.218 "state": "online", 00:14:49.218 "raid_level": "raid0", 00:14:49.218 "superblock": true, 00:14:49.218 "num_base_bdevs": 3, 00:14:49.218 "num_base_bdevs_discovered": 3, 00:14:49.218 "num_base_bdevs_operational": 3, 00:14:49.218 "base_bdevs_list": [ 00:14:49.218 { 00:14:49.218 "name": "BaseBdev1", 00:14:49.218 "uuid": "0154648d-1261-11ef-99fd-bfc7c66e2865", 00:14:49.218 "is_configured": true, 00:14:49.218 "data_offset": 2048, 00:14:49.218 "data_size": 63488 00:14:49.218 }, 00:14:49.218 { 00:14:49.218 "name": "BaseBdev2", 00:14:49.218 "uuid": "03048f4a-1261-11ef-99fd-bfc7c66e2865", 00:14:49.218 "is_configured": true, 00:14:49.218 "data_offset": 2048, 00:14:49.218 "data_size": 63488 00:14:49.218 }, 00:14:49.218 { 00:14:49.218 "name": "BaseBdev3", 00:14:49.218 "uuid": "03caf130-1261-11ef-99fd-bfc7c66e2865", 00:14:49.218 "is_configured": true, 00:14:49.218 "data_offset": 2048, 00:14:49.218 "data_size": 63488 00:14:49.218 } 00:14:49.218 ] 00:14:49.218 } 00:14:49.218 } 00:14:49.218 }' 00:14:49.218 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.218 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:14:49.218 BaseBdev2 00:14:49.218 BaseBdev3' 00:14:49.218 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:49.218 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:49.218 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:49.507 "name": "BaseBdev1", 00:14:49.507 "aliases": [ 00:14:49.507 "0154648d-1261-11ef-99fd-bfc7c66e2865" 00:14:49.507 ], 00:14:49.507 "product_name": "Malloc disk", 00:14:49.507 "block_size": 512, 00:14:49.507 "num_blocks": 65536, 00:14:49.507 "uuid": "0154648d-1261-11ef-99fd-bfc7c66e2865", 00:14:49.507 "assigned_rate_limits": { 00:14:49.507 "rw_ios_per_sec": 0, 00:14:49.507 "rw_mbytes_per_sec": 0, 00:14:49.507 "r_mbytes_per_sec": 0, 00:14:49.507 "w_mbytes_per_sec": 0 00:14:49.507 }, 00:14:49.507 "claimed": true, 00:14:49.507 "claim_type": "exclusive_write", 00:14:49.507 "zoned": false, 00:14:49.507 "supported_io_types": { 00:14:49.507 "read": true, 00:14:49.507 "write": true, 00:14:49.507 "unmap": true, 00:14:49.507 "write_zeroes": true, 00:14:49.507 "flush": true, 00:14:49.507 
"reset": true, 00:14:49.507 "compare": false, 00:14:49.507 "compare_and_write": false, 00:14:49.507 "abort": true, 00:14:49.507 "nvme_admin": false, 00:14:49.507 "nvme_io": false 00:14:49.507 }, 00:14:49.507 "memory_domains": [ 00:14:49.507 { 00:14:49.507 "dma_device_id": "system", 00:14:49.507 "dma_device_type": 1 00:14:49.507 }, 00:14:49.507 { 00:14:49.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.507 "dma_device_type": 2 00:14:49.507 } 00:14:49.507 ], 00:14:49.507 "driver_specific": {} 00:14:49.507 }' 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:49.507 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:49.508 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:49.508 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:49.508 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:49.777 "name": "BaseBdev2", 00:14:49.777 "aliases": [ 00:14:49.777 "03048f4a-1261-11ef-99fd-bfc7c66e2865" 00:14:49.777 ], 00:14:49.777 "product_name": "Malloc disk", 00:14:49.777 "block_size": 512, 00:14:49.777 "num_blocks": 65536, 00:14:49.777 "uuid": "03048f4a-1261-11ef-99fd-bfc7c66e2865", 00:14:49.777 "assigned_rate_limits": { 00:14:49.777 "rw_ios_per_sec": 0, 00:14:49.777 "rw_mbytes_per_sec": 0, 00:14:49.777 "r_mbytes_per_sec": 0, 00:14:49.777 "w_mbytes_per_sec": 0 00:14:49.777 }, 00:14:49.777 "claimed": true, 00:14:49.777 "claim_type": "exclusive_write", 00:14:49.777 "zoned": false, 00:14:49.777 "supported_io_types": { 00:14:49.777 "read": true, 00:14:49.777 "write": true, 00:14:49.777 "unmap": true, 00:14:49.777 "write_zeroes": true, 00:14:49.777 "flush": true, 00:14:49.777 "reset": true, 00:14:49.777 "compare": false, 00:14:49.777 "compare_and_write": false, 00:14:49.777 "abort": true, 00:14:49.777 "nvme_admin": false, 00:14:49.777 "nvme_io": false 00:14:49.777 }, 00:14:49.777 "memory_domains": [ 00:14:49.777 { 00:14:49.777 "dma_device_id": "system", 00:14:49.777 "dma_device_type": 1 00:14:49.777 }, 00:14:49.777 { 00:14:49.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.777 "dma_device_type": 2 00:14:49.777 
} 00:14:49.777 ], 00:14:49.777 "driver_specific": {} 00:14:49.777 }' 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:49.777 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:50.036 02:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:50.294 "name": "BaseBdev3", 00:14:50.294 "aliases": [ 00:14:50.294 "03caf130-1261-11ef-99fd-bfc7c66e2865" 00:14:50.294 ], 00:14:50.294 "product_name": "Malloc disk", 00:14:50.294 "block_size": 512, 00:14:50.294 "num_blocks": 65536, 00:14:50.294 "uuid": "03caf130-1261-11ef-99fd-bfc7c66e2865", 00:14:50.294 "assigned_rate_limits": { 00:14:50.294 "rw_ios_per_sec": 0, 00:14:50.294 "rw_mbytes_per_sec": 0, 00:14:50.294 "r_mbytes_per_sec": 0, 00:14:50.294 "w_mbytes_per_sec": 0 00:14:50.294 }, 00:14:50.294 "claimed": true, 00:14:50.294 "claim_type": "exclusive_write", 00:14:50.294 "zoned": false, 00:14:50.294 "supported_io_types": { 00:14:50.294 "read": true, 00:14:50.294 "write": true, 00:14:50.294 "unmap": true, 00:14:50.294 "write_zeroes": true, 00:14:50.294 "flush": true, 00:14:50.294 "reset": true, 00:14:50.294 "compare": false, 00:14:50.294 "compare_and_write": false, 00:14:50.294 "abort": true, 00:14:50.294 "nvme_admin": false, 00:14:50.294 "nvme_io": false 00:14:50.294 }, 00:14:50.294 "memory_domains": [ 00:14:50.294 { 00:14:50.294 "dma_device_id": "system", 00:14:50.294 "dma_device_type": 1 00:14:50.294 }, 00:14:50.294 { 00:14:50.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.294 "dma_device_type": 2 00:14:50.294 } 00:14:50.294 ], 00:14:50.294 "driver_specific": {} 00:14:50.294 }' 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:50.294 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.551 [2024-05-15 02:15:38.349631] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.551 [2024-05-15 02:15:38.349663] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.551 [2024-05-15 02:15:38.349678] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.551 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.552 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.552 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.552 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.552 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.810 02:15:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.810 "name": "Existed_Raid", 00:14:50.810 "uuid": "02782a29-1261-11ef-99fd-bfc7c66e2865", 00:14:50.810 "strip_size_kb": 64, 00:14:50.810 "state": "offline", 00:14:50.810 "raid_level": "raid0", 00:14:50.810 "superblock": true, 00:14:50.810 "num_base_bdevs": 3, 00:14:50.810 "num_base_bdevs_discovered": 2, 00:14:50.810 "num_base_bdevs_operational": 2, 00:14:50.810 "base_bdevs_list": [ 00:14:50.810 { 00:14:50.810 "name": null, 00:14:50.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.810 "is_configured": false, 00:14:50.810 "data_offset": 2048, 00:14:50.810 "data_size": 63488 00:14:50.810 }, 00:14:50.810 { 00:14:50.810 "name": "BaseBdev2", 00:14:50.810 "uuid": "03048f4a-1261-11ef-99fd-bfc7c66e2865", 00:14:50.810 "is_configured": true, 00:14:50.810 "data_offset": 2048, 00:14:50.810 "data_size": 63488 00:14:50.810 }, 00:14:50.810 { 00:14:50.810 "name": "BaseBdev3", 00:14:50.810 "uuid": "03caf130-1261-11ef-99fd-bfc7c66e2865", 00:14:50.810 "is_configured": true, 00:14:50.810 "data_offset": 2048, 00:14:50.810 "data_size": 63488 00:14:50.810 } 00:14:50.810 ] 00:14:50.810 }' 00:14:50.810 02:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.810 02:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.069 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:51.069 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.070 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.070 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:51.636 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:51.636 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.636 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:51.636 [2024-05-15 02:15:39.638607] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.894 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.894 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.894 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:51.894 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.894 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:51.894 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.894 02:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:52.152 [2024-05-15 02:15:40.144722] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.152 [2024-05-15 02:15:40.144761] bdev_raid.c: 348:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x82be07a00 name Existed_Raid, state offline 00:14:52.152 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.152 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.152 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:52.152 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.717 BaseBdev2 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:52.717 02:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.974 02:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.232 [ 00:14:53.232 { 00:14:53.232 "name": "BaseBdev2", 00:14:53.232 "aliases": [ 00:14:53.232 "06ce767f-1261-11ef-99fd-bfc7c66e2865" 00:14:53.232 ], 00:14:53.232 "product_name": "Malloc disk", 00:14:53.232 "block_size": 512, 00:14:53.232 "num_blocks": 65536, 00:14:53.232 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:14:53.232 "assigned_rate_limits": { 00:14:53.232 "rw_ios_per_sec": 0, 00:14:53.232 "rw_mbytes_per_sec": 0, 00:14:53.232 "r_mbytes_per_sec": 0, 00:14:53.232 "w_mbytes_per_sec": 0 00:14:53.232 }, 00:14:53.232 "claimed": false, 00:14:53.232 "zoned": false, 00:14:53.232 "supported_io_types": { 00:14:53.232 "read": true, 00:14:53.232 "write": true, 00:14:53.232 "unmap": true, 00:14:53.232 "write_zeroes": true, 00:14:53.232 "flush": true, 00:14:53.232 "reset": true, 00:14:53.232 "compare": false, 00:14:53.232 "compare_and_write": false, 00:14:53.232 "abort": true, 00:14:53.232 "nvme_admin": false, 00:14:53.232 "nvme_io": false 00:14:53.232 }, 00:14:53.232 "memory_domains": [ 00:14:53.232 { 00:14:53.232 "dma_device_id": "system", 00:14:53.232 "dma_device_type": 1 
00:14:53.232 }, 00:14:53.232 { 00:14:53.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.232 "dma_device_type": 2 00:14:53.232 } 00:14:53.232 ], 00:14:53.232 "driver_specific": {} 00:14:53.232 } 00:14:53.232 ] 00:14:53.232 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:53.232 02:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:53.232 02:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:53.232 02:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:53.490 BaseBdev3 00:14:53.490 02:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:53.490 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:53.490 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:53.490 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:53.490 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:53.490 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:53.490 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.748 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.007 [ 00:14:54.007 { 00:14:54.007 "name": "BaseBdev3", 00:14:54.007 "aliases": [ 00:14:54.007 "0739e481-1261-11ef-99fd-bfc7c66e2865" 00:14:54.007 ], 00:14:54.007 "product_name": "Malloc disk", 00:14:54.007 "block_size": 512, 00:14:54.007 "num_blocks": 65536, 00:14:54.007 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:14:54.007 "assigned_rate_limits": { 00:14:54.007 "rw_ios_per_sec": 0, 00:14:54.007 "rw_mbytes_per_sec": 0, 00:14:54.007 "r_mbytes_per_sec": 0, 00:14:54.007 "w_mbytes_per_sec": 0 00:14:54.007 }, 00:14:54.007 "claimed": false, 00:14:54.007 "zoned": false, 00:14:54.007 "supported_io_types": { 00:14:54.007 "read": true, 00:14:54.007 "write": true, 00:14:54.007 "unmap": true, 00:14:54.007 "write_zeroes": true, 00:14:54.007 "flush": true, 00:14:54.007 "reset": true, 00:14:54.007 "compare": false, 00:14:54.007 "compare_and_write": false, 00:14:54.007 "abort": true, 00:14:54.007 "nvme_admin": false, 00:14:54.007 "nvme_io": false 00:14:54.007 }, 00:14:54.007 "memory_domains": [ 00:14:54.007 { 00:14:54.007 "dma_device_id": "system", 00:14:54.007 "dma_device_type": 1 00:14:54.007 }, 00:14:54.007 { 00:14:54.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.007 "dma_device_type": 2 00:14:54.007 } 00:14:54.007 ], 00:14:54.007 "driver_specific": {} 00:14:54.007 } 00:14:54.007 ] 00:14:54.007 02:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:54.007 02:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:54.007 02:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:54.007 02:15:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:54.266 [2024-05-15 02:15:42.169774] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.266 [2024-05-15 02:15:42.169833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.266 [2024-05-15 02:15:42.169842] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.266 [2024-05-15 02:15:42.170301] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.266 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.524 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.524 "name": "Existed_Raid", 00:14:54.524 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:14:54.524 "strip_size_kb": 64, 00:14:54.524 "state": "configuring", 00:14:54.524 "raid_level": "raid0", 00:14:54.524 "superblock": true, 00:14:54.524 "num_base_bdevs": 3, 00:14:54.524 "num_base_bdevs_discovered": 2, 00:14:54.524 "num_base_bdevs_operational": 3, 00:14:54.524 "base_bdevs_list": [ 00:14:54.524 { 00:14:54.524 "name": "BaseBdev1", 00:14:54.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.524 "is_configured": false, 00:14:54.524 "data_offset": 0, 00:14:54.524 "data_size": 0 00:14:54.524 }, 00:14:54.524 { 00:14:54.525 "name": "BaseBdev2", 00:14:54.525 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:14:54.525 "is_configured": true, 00:14:54.525 "data_offset": 2048, 00:14:54.525 "data_size": 63488 00:14:54.525 }, 00:14:54.525 { 00:14:54.525 "name": "BaseBdev3", 00:14:54.525 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:14:54.525 "is_configured": true, 00:14:54.525 "data_offset": 2048, 00:14:54.525 "data_size": 63488 00:14:54.525 } 00:14:54.525 ] 00:14:54.525 }' 00:14:54.525 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.525 02:15:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.783 02:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:55.041 [2024-05-15 02:15:43.037844] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.299 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.300 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.300 "name": "Existed_Raid", 00:14:55.300 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:14:55.300 "strip_size_kb": 64, 00:14:55.300 "state": "configuring", 00:14:55.300 "raid_level": "raid0", 00:14:55.300 "superblock": true, 00:14:55.300 "num_base_bdevs": 3, 00:14:55.300 "num_base_bdevs_discovered": 1, 00:14:55.300 "num_base_bdevs_operational": 3, 00:14:55.300 "base_bdevs_list": [ 00:14:55.300 { 00:14:55.300 "name": "BaseBdev1", 00:14:55.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.300 "is_configured": false, 00:14:55.300 "data_offset": 0, 00:14:55.300 "data_size": 0 00:14:55.300 }, 00:14:55.300 { 00:14:55.300 "name": null, 00:14:55.300 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:14:55.300 "is_configured": false, 00:14:55.300 "data_offset": 2048, 00:14:55.300 "data_size": 63488 00:14:55.300 }, 00:14:55.300 { 00:14:55.300 "name": "BaseBdev3", 00:14:55.300 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:14:55.300 "is_configured": true, 00:14:55.300 "data_offset": 2048, 00:14:55.300 "data_size": 63488 00:14:55.300 } 00:14:55.300 ] 00:14:55.300 }' 00:14:55.300 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.300 02:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.867 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.867 02:15:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.126 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:56.126 02:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.126 [2024-05-15 02:15:44.138043] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.126 BaseBdev1 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.385 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.644 [ 00:14:56.644 { 00:14:56.644 "name": "BaseBdev1", 00:14:56.644 "aliases": [ 00:14:56.644 "08d993c4-1261-11ef-99fd-bfc7c66e2865" 00:14:56.644 ], 00:14:56.644 "product_name": "Malloc disk", 00:14:56.644 "block_size": 512, 00:14:56.644 "num_blocks": 65536, 00:14:56.644 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:14:56.644 "assigned_rate_limits": { 00:14:56.644 "rw_ios_per_sec": 0, 00:14:56.644 "rw_mbytes_per_sec": 0, 00:14:56.644 "r_mbytes_per_sec": 0, 00:14:56.644 "w_mbytes_per_sec": 0 00:14:56.644 }, 00:14:56.644 "claimed": true, 00:14:56.644 "claim_type": "exclusive_write", 00:14:56.644 "zoned": false, 00:14:56.644 "supported_io_types": { 00:14:56.644 "read": true, 00:14:56.644 "write": true, 00:14:56.644 "unmap": true, 00:14:56.644 "write_zeroes": true, 00:14:56.644 "flush": true, 00:14:56.644 "reset": true, 00:14:56.644 "compare": false, 00:14:56.644 "compare_and_write": false, 00:14:56.644 "abort": true, 00:14:56.644 "nvme_admin": false, 00:14:56.644 "nvme_io": false 00:14:56.644 }, 00:14:56.644 "memory_domains": [ 00:14:56.644 { 00:14:56.644 "dma_device_id": "system", 00:14:56.644 "dma_device_type": 1 00:14:56.644 }, 00:14:56.644 { 00:14:56.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.644 "dma_device_type": 2 00:14:56.644 } 00:14:56.644 ], 00:14:56.644 "driver_specific": {} 00:14:56.644 } 00:14:56.644 ] 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:56.644 02:15:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.644 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.933 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.933 "name": "Existed_Raid", 00:14:56.933 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:14:56.933 "strip_size_kb": 64, 00:14:56.933 "state": "configuring", 00:14:56.933 "raid_level": "raid0", 00:14:56.933 "superblock": true, 00:14:56.933 "num_base_bdevs": 3, 00:14:56.933 "num_base_bdevs_discovered": 2, 00:14:56.933 "num_base_bdevs_operational": 3, 00:14:56.933 "base_bdevs_list": [ 00:14:56.933 { 00:14:56.933 "name": "BaseBdev1", 00:14:56.933 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:14:56.933 "is_configured": true, 00:14:56.933 "data_offset": 2048, 00:14:56.933 "data_size": 63488 00:14:56.933 }, 00:14:56.933 { 00:14:56.933 "name": null, 00:14:56.933 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:14:56.933 "is_configured": false, 00:14:56.933 "data_offset": 2048, 00:14:56.933 "data_size": 63488 00:14:56.933 }, 00:14:56.933 { 00:14:56.933 "name": "BaseBdev3", 00:14:56.933 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:14:56.933 "is_configured": true, 00:14:56.933 "data_offset": 2048, 00:14:56.933 "data_size": 63488 00:14:56.933 } 00:14:56.933 ] 00:14:56.933 }' 00:14:56.933 02:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.933 02:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.500 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.500 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.758 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:57.758 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:58.017 [2024-05-15 02:15:45.786034] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:58.017 
02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.017 02:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.275 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.276 "name": "Existed_Raid", 00:14:58.276 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:14:58.276 "strip_size_kb": 64, 00:14:58.276 "state": "configuring", 00:14:58.276 "raid_level": "raid0", 00:14:58.276 "superblock": true, 00:14:58.276 "num_base_bdevs": 3, 00:14:58.276 "num_base_bdevs_discovered": 1, 00:14:58.276 "num_base_bdevs_operational": 3, 00:14:58.276 "base_bdevs_list": [ 00:14:58.276 { 00:14:58.276 "name": "BaseBdev1", 00:14:58.276 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:14:58.276 "is_configured": true, 00:14:58.276 "data_offset": 2048, 00:14:58.276 "data_size": 63488 00:14:58.276 }, 00:14:58.276 { 00:14:58.276 "name": null, 00:14:58.276 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:14:58.276 "is_configured": false, 00:14:58.276 "data_offset": 2048, 00:14:58.276 "data_size": 63488 00:14:58.276 }, 00:14:58.276 { 00:14:58.276 "name": null, 00:14:58.276 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:14:58.276 "is_configured": false, 00:14:58.276 "data_offset": 2048, 00:14:58.276 "data_size": 63488 00:14:58.276 } 00:14:58.276 ] 00:14:58.276 }' 00:14:58.276 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.276 02:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.534 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.534 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:58.792 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:58.792 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:59.051 [2024-05-15 02:15:46.966122] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.051 02:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.310 02:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.310 "name": "Existed_Raid", 00:14:59.310 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:14:59.310 "strip_size_kb": 64, 00:14:59.310 "state": "configuring", 00:14:59.310 "raid_level": "raid0", 00:14:59.310 "superblock": true, 00:14:59.310 "num_base_bdevs": 3, 00:14:59.310 "num_base_bdevs_discovered": 2, 00:14:59.310 "num_base_bdevs_operational": 3, 00:14:59.310 "base_bdevs_list": [ 00:14:59.310 { 00:14:59.310 "name": "BaseBdev1", 00:14:59.310 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:14:59.310 "is_configured": true, 00:14:59.310 "data_offset": 2048, 00:14:59.310 "data_size": 63488 00:14:59.310 }, 00:14:59.310 { 00:14:59.310 "name": null, 00:14:59.310 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:14:59.310 "is_configured": false, 00:14:59.310 "data_offset": 2048, 00:14:59.310 "data_size": 63488 00:14:59.310 }, 00:14:59.310 { 00:14:59.311 "name": "BaseBdev3", 00:14:59.311 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:14:59.311 "is_configured": true, 00:14:59.311 "data_offset": 2048, 00:14:59.311 "data_size": 63488 00:14:59.311 } 00:14:59.311 ] 00:14:59.311 }' 00:14:59.311 02:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.311 02:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.878 02:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.878 02:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.137 02:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:15:00.137 02:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:00.396 [2024-05-15 02:15:48.202212] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
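The trace above is the raid_state_function_test_sb loop driving a bdev_svc app over scripts/rpc.py: it creates malloc base bdevs, assembles a raid0 named Existed_Raid with a superblock, then removes and re-adds members while checking the reported state with bdev_raid_get_bdevs and jq. As a reading aid, here is a minimal shell sketch of the RPC calls involved, assuming an app is already listening on /var/tmp/spdk-raid.sock; it uses only the RPCs and arguments visible in this log, and the $rpc variable and the for-loop are illustrative shorthand rather than the script's own code:

    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # three 32 MiB malloc bdevs with 512-byte blocks (as at bdev_raid.sh@303/@313)
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
        $rpc bdev_wait_for_examine
    done
    # raid0, 64 KiB strip size, superblock enabled (-s)
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # drop a member and add it back; the dumps above show the array staying in the
    # "configuring" state until all three base bdevs are configured again
    $rpc bdev_raid_remove_base_bdev BaseBdev2
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2
    # the check verify_raid_bdev_state performs: pull one raid bdev's JSON and inspect it
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'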
00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.396 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.655 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.655 "name": "Existed_Raid", 00:15:00.655 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:15:00.655 "strip_size_kb": 64, 00:15:00.655 "state": "configuring", 00:15:00.655 "raid_level": "raid0", 00:15:00.655 "superblock": true, 00:15:00.655 "num_base_bdevs": 3, 00:15:00.655 "num_base_bdevs_discovered": 1, 00:15:00.655 "num_base_bdevs_operational": 3, 00:15:00.655 "base_bdevs_list": [ 00:15:00.655 { 00:15:00.655 "name": null, 00:15:00.655 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:15:00.655 "is_configured": false, 00:15:00.655 "data_offset": 2048, 00:15:00.655 "data_size": 63488 00:15:00.655 }, 00:15:00.655 { 00:15:00.655 "name": null, 00:15:00.655 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:15:00.655 "is_configured": false, 00:15:00.655 "data_offset": 2048, 00:15:00.655 "data_size": 63488 00:15:00.655 }, 00:15:00.655 { 00:15:00.655 "name": "BaseBdev3", 00:15:00.655 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:15:00.655 "is_configured": true, 00:15:00.655 "data_offset": 2048, 00:15:00.655 "data_size": 63488 00:15:00.655 } 00:15:00.655 ] 00:15:00.655 }' 00:15:00.655 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.655 02:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.913 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.913 02:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.171 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:15:01.171 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:01.429 [2024-05-15 02:15:49.435099] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.687 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.946 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.946 "name": "Existed_Raid", 00:15:01.946 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:15:01.946 "strip_size_kb": 64, 00:15:01.946 "state": "configuring", 00:15:01.946 "raid_level": "raid0", 00:15:01.946 "superblock": true, 00:15:01.946 "num_base_bdevs": 3, 00:15:01.946 "num_base_bdevs_discovered": 2, 00:15:01.946 "num_base_bdevs_operational": 3, 00:15:01.946 "base_bdevs_list": [ 00:15:01.946 { 00:15:01.946 "name": null, 00:15:01.946 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:15:01.946 "is_configured": false, 00:15:01.946 "data_offset": 2048, 00:15:01.946 "data_size": 63488 00:15:01.946 }, 00:15:01.946 { 00:15:01.946 "name": "BaseBdev2", 00:15:01.946 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:15:01.946 "is_configured": true, 00:15:01.946 "data_offset": 2048, 00:15:01.946 "data_size": 63488 00:15:01.946 }, 00:15:01.946 { 00:15:01.946 "name": "BaseBdev3", 00:15:01.946 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:15:01.946 "is_configured": true, 00:15:01.946 "data_offset": 2048, 00:15:01.946 "data_size": 63488 00:15:01.946 } 00:15:01.946 ] 00:15:01.946 }' 00:15:01.946 02:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.946 02:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.204 02:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.204 02:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:02.463 02:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:15:02.463 
02:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.463 02:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:03.029 02:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 08d993c4-1261-11ef-99fd-bfc7c66e2865 00:15:03.288 [2024-05-15 02:15:51.083344] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:03.288 [2024-05-15 02:15:51.083404] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82be07a00 00:15:03.288 [2024-05-15 02:15:51.083410] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:03.288 [2024-05-15 02:15:51.083430] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be6ae20 00:15:03.288 [2024-05-15 02:15:51.083471] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82be07a00 00:15:03.288 [2024-05-15 02:15:51.083475] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82be07a00 00:15:03.288 [2024-05-15 02:15:51.083494] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.288 NewBaseBdev 00:15:03.288 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:15:03.288 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:15:03.288 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:03.288 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:03.288 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:03.288 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:03.288 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.547 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:04.114 [ 00:15:04.114 { 00:15:04.114 "name": "NewBaseBdev", 00:15:04.114 "aliases": [ 00:15:04.114 "08d993c4-1261-11ef-99fd-bfc7c66e2865" 00:15:04.114 ], 00:15:04.114 "product_name": "Malloc disk", 00:15:04.114 "block_size": 512, 00:15:04.114 "num_blocks": 65536, 00:15:04.114 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:15:04.114 "assigned_rate_limits": { 00:15:04.114 "rw_ios_per_sec": 0, 00:15:04.114 "rw_mbytes_per_sec": 0, 00:15:04.114 "r_mbytes_per_sec": 0, 00:15:04.114 "w_mbytes_per_sec": 0 00:15:04.114 }, 00:15:04.114 "claimed": true, 00:15:04.114 "claim_type": "exclusive_write", 00:15:04.114 "zoned": false, 00:15:04.114 "supported_io_types": { 00:15:04.114 "read": true, 00:15:04.114 "write": true, 00:15:04.114 "unmap": true, 00:15:04.114 "write_zeroes": true, 00:15:04.114 "flush": true, 00:15:04.114 "reset": true, 00:15:04.114 "compare": false, 00:15:04.114 "compare_and_write": false, 00:15:04.114 "abort": true, 00:15:04.114 "nvme_admin": false, 00:15:04.114 
"nvme_io": false 00:15:04.114 }, 00:15:04.114 "memory_domains": [ 00:15:04.114 { 00:15:04.114 "dma_device_id": "system", 00:15:04.114 "dma_device_type": 1 00:15:04.114 }, 00:15:04.114 { 00:15:04.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.114 "dma_device_type": 2 00:15:04.114 } 00:15:04.114 ], 00:15:04.114 "driver_specific": {} 00:15:04.114 } 00:15:04.114 ] 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.114 02:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.373 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.373 "name": "Existed_Raid", 00:15:04.373 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:15:04.373 "strip_size_kb": 64, 00:15:04.373 "state": "online", 00:15:04.373 "raid_level": "raid0", 00:15:04.373 "superblock": true, 00:15:04.373 "num_base_bdevs": 3, 00:15:04.373 "num_base_bdevs_discovered": 3, 00:15:04.373 "num_base_bdevs_operational": 3, 00:15:04.373 "base_bdevs_list": [ 00:15:04.373 { 00:15:04.373 "name": "NewBaseBdev", 00:15:04.373 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:15:04.373 "is_configured": true, 00:15:04.373 "data_offset": 2048, 00:15:04.373 "data_size": 63488 00:15:04.373 }, 00:15:04.373 { 00:15:04.373 "name": "BaseBdev2", 00:15:04.373 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:15:04.373 "is_configured": true, 00:15:04.373 "data_offset": 2048, 00:15:04.373 "data_size": 63488 00:15:04.373 }, 00:15:04.373 { 00:15:04.373 "name": "BaseBdev3", 00:15:04.373 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:15:04.373 "is_configured": true, 00:15:04.373 "data_offset": 2048, 00:15:04.373 "data_size": 63488 00:15:04.373 } 00:15:04.373 ] 00:15:04.373 }' 00:15:04.373 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.373 02:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.630 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:15:04.630 
02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:04.630 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:04.630 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:04.630 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:04.630 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:15:04.630 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:04.630 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:04.910 [2024-05-15 02:15:52.699377] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.910 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:04.910 "name": "Existed_Raid", 00:15:04.910 "aliases": [ 00:15:04.910 "07ad4283-1261-11ef-99fd-bfc7c66e2865" 00:15:04.910 ], 00:15:04.910 "product_name": "Raid Volume", 00:15:04.910 "block_size": 512, 00:15:04.910 "num_blocks": 190464, 00:15:04.910 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:15:04.910 "assigned_rate_limits": { 00:15:04.910 "rw_ios_per_sec": 0, 00:15:04.910 "rw_mbytes_per_sec": 0, 00:15:04.910 "r_mbytes_per_sec": 0, 00:15:04.910 "w_mbytes_per_sec": 0 00:15:04.910 }, 00:15:04.910 "claimed": false, 00:15:04.910 "zoned": false, 00:15:04.910 "supported_io_types": { 00:15:04.910 "read": true, 00:15:04.910 "write": true, 00:15:04.910 "unmap": true, 00:15:04.910 "write_zeroes": true, 00:15:04.910 "flush": true, 00:15:04.910 "reset": true, 00:15:04.910 "compare": false, 00:15:04.910 "compare_and_write": false, 00:15:04.910 "abort": false, 00:15:04.910 "nvme_admin": false, 00:15:04.910 "nvme_io": false 00:15:04.910 }, 00:15:04.910 "memory_domains": [ 00:15:04.910 { 00:15:04.910 "dma_device_id": "system", 00:15:04.910 "dma_device_type": 1 00:15:04.910 }, 00:15:04.910 { 00:15:04.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.910 "dma_device_type": 2 00:15:04.910 }, 00:15:04.910 { 00:15:04.910 "dma_device_id": "system", 00:15:04.910 "dma_device_type": 1 00:15:04.910 }, 00:15:04.910 { 00:15:04.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.910 "dma_device_type": 2 00:15:04.910 }, 00:15:04.910 { 00:15:04.910 "dma_device_id": "system", 00:15:04.910 "dma_device_type": 1 00:15:04.910 }, 00:15:04.910 { 00:15:04.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.910 "dma_device_type": 2 00:15:04.910 } 00:15:04.910 ], 00:15:04.910 "driver_specific": { 00:15:04.910 "raid": { 00:15:04.910 "uuid": "07ad4283-1261-11ef-99fd-bfc7c66e2865", 00:15:04.910 "strip_size_kb": 64, 00:15:04.910 "state": "online", 00:15:04.910 "raid_level": "raid0", 00:15:04.910 "superblock": true, 00:15:04.910 "num_base_bdevs": 3, 00:15:04.910 "num_base_bdevs_discovered": 3, 00:15:04.910 "num_base_bdevs_operational": 3, 00:15:04.910 "base_bdevs_list": [ 00:15:04.910 { 00:15:04.910 "name": "NewBaseBdev", 00:15:04.910 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:15:04.910 "is_configured": true, 00:15:04.910 "data_offset": 2048, 00:15:04.910 "data_size": 63488 00:15:04.910 }, 00:15:04.910 { 00:15:04.910 "name": "BaseBdev2", 00:15:04.910 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:15:04.910 "is_configured": true, 00:15:04.910 
"data_offset": 2048, 00:15:04.910 "data_size": 63488 00:15:04.910 }, 00:15:04.910 { 00:15:04.910 "name": "BaseBdev3", 00:15:04.910 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:15:04.910 "is_configured": true, 00:15:04.910 "data_offset": 2048, 00:15:04.910 "data_size": 63488 00:15:04.910 } 00:15:04.910 ] 00:15:04.910 } 00:15:04.911 } 00:15:04.911 }' 00:15:04.911 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.911 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:15:04.911 BaseBdev2 00:15:04.911 BaseBdev3' 00:15:04.911 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:04.911 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:04.911 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:05.206 "name": "NewBaseBdev", 00:15:05.206 "aliases": [ 00:15:05.206 "08d993c4-1261-11ef-99fd-bfc7c66e2865" 00:15:05.206 ], 00:15:05.206 "product_name": "Malloc disk", 00:15:05.206 "block_size": 512, 00:15:05.206 "num_blocks": 65536, 00:15:05.206 "uuid": "08d993c4-1261-11ef-99fd-bfc7c66e2865", 00:15:05.206 "assigned_rate_limits": { 00:15:05.206 "rw_ios_per_sec": 0, 00:15:05.206 "rw_mbytes_per_sec": 0, 00:15:05.206 "r_mbytes_per_sec": 0, 00:15:05.206 "w_mbytes_per_sec": 0 00:15:05.206 }, 00:15:05.206 "claimed": true, 00:15:05.206 "claim_type": "exclusive_write", 00:15:05.206 "zoned": false, 00:15:05.206 "supported_io_types": { 00:15:05.206 "read": true, 00:15:05.206 "write": true, 00:15:05.206 "unmap": true, 00:15:05.206 "write_zeroes": true, 00:15:05.206 "flush": true, 00:15:05.206 "reset": true, 00:15:05.206 "compare": false, 00:15:05.206 "compare_and_write": false, 00:15:05.206 "abort": true, 00:15:05.206 "nvme_admin": false, 00:15:05.206 "nvme_io": false 00:15:05.206 }, 00:15:05.206 "memory_domains": [ 00:15:05.206 { 00:15:05.206 "dma_device_id": "system", 00:15:05.206 "dma_device_type": 1 00:15:05.206 }, 00:15:05.206 { 00:15:05.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.206 "dma_device_type": 2 00:15:05.206 } 00:15:05.206 ], 00:15:05.206 "driver_specific": {} 00:15:05.206 }' 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.206 02:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:05.206 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:05.465 "name": "BaseBdev2", 00:15:05.465 "aliases": [ 00:15:05.465 "06ce767f-1261-11ef-99fd-bfc7c66e2865" 00:15:05.465 ], 00:15:05.465 "product_name": "Malloc disk", 00:15:05.465 "block_size": 512, 00:15:05.465 "num_blocks": 65536, 00:15:05.465 "uuid": "06ce767f-1261-11ef-99fd-bfc7c66e2865", 00:15:05.465 "assigned_rate_limits": { 00:15:05.465 "rw_ios_per_sec": 0, 00:15:05.465 "rw_mbytes_per_sec": 0, 00:15:05.465 "r_mbytes_per_sec": 0, 00:15:05.465 "w_mbytes_per_sec": 0 00:15:05.465 }, 00:15:05.465 "claimed": true, 00:15:05.465 "claim_type": "exclusive_write", 00:15:05.465 "zoned": false, 00:15:05.465 "supported_io_types": { 00:15:05.465 "read": true, 00:15:05.465 "write": true, 00:15:05.465 "unmap": true, 00:15:05.465 "write_zeroes": true, 00:15:05.465 "flush": true, 00:15:05.465 "reset": true, 00:15:05.465 "compare": false, 00:15:05.465 "compare_and_write": false, 00:15:05.465 "abort": true, 00:15:05.465 "nvme_admin": false, 00:15:05.465 "nvme_io": false 00:15:05.465 }, 00:15:05.465 "memory_domains": [ 00:15:05.465 { 00:15:05.465 "dma_device_id": "system", 00:15:05.465 "dma_device_type": 1 00:15:05.465 }, 00:15:05.465 { 00:15:05.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.465 "dma_device_type": 2 00:15:05.465 } 00:15:05.465 ], 00:15:05.465 "driver_specific": {} 00:15:05.465 }' 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:05.465 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:05.722 "name": "BaseBdev3", 00:15:05.722 "aliases": [ 00:15:05.722 "0739e481-1261-11ef-99fd-bfc7c66e2865" 00:15:05.722 ], 00:15:05.722 "product_name": "Malloc disk", 00:15:05.722 "block_size": 512, 00:15:05.722 "num_blocks": 65536, 00:15:05.722 "uuid": "0739e481-1261-11ef-99fd-bfc7c66e2865", 00:15:05.722 "assigned_rate_limits": { 00:15:05.722 "rw_ios_per_sec": 0, 00:15:05.722 "rw_mbytes_per_sec": 0, 00:15:05.722 "r_mbytes_per_sec": 0, 00:15:05.722 "w_mbytes_per_sec": 0 00:15:05.722 }, 00:15:05.722 "claimed": true, 00:15:05.722 "claim_type": "exclusive_write", 00:15:05.722 "zoned": false, 00:15:05.722 "supported_io_types": { 00:15:05.722 "read": true, 00:15:05.722 "write": true, 00:15:05.722 "unmap": true, 00:15:05.722 "write_zeroes": true, 00:15:05.722 "flush": true, 00:15:05.722 "reset": true, 00:15:05.722 "compare": false, 00:15:05.722 "compare_and_write": false, 00:15:05.722 "abort": true, 00:15:05.722 "nvme_admin": false, 00:15:05.722 "nvme_io": false 00:15:05.722 }, 00:15:05.722 "memory_domains": [ 00:15:05.722 { 00:15:05.722 "dma_device_id": "system", 00:15:05.722 "dma_device_type": 1 00:15:05.722 }, 00:15:05.722 { 00:15:05.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.722 "dma_device_type": 2 00:15:05.722 } 00:15:05.722 ], 00:15:05.722 "driver_specific": {} 00:15:05.722 }' 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:05.722 02:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:05.980 [2024-05-15 02:15:53.983468] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.980 [2024-05-15 02:15:53.983498] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.980 [2024-05-15 02:15:53.983521] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.980 [2024-05-15 02:15:53.983535] bdev_raid.c: 
430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.980 [2024-05-15 02:15:53.983539] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be07a00 name Existed_Raid, state offline 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 52286 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 52286 ']' 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 52286 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 52286 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:15:06.239 killing process with pid 52286 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52286' 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 52286 00:15:06.239 [2024-05-15 02:15:54.012191] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 52286 00:15:06.239 [2024-05-15 02:15:54.026585] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:15:06.239 00:15:06.239 real 0m25.614s 00:15:06.239 user 0m47.198s 00:15:06.239 sys 0m3.211s 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:06.239 02:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 ************************************ 00:15:06.239 END TEST raid_state_function_test_sb 00:15:06.239 ************************************ 00:15:06.239 02:15:54 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:06.239 02:15:54 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:06.239 02:15:54 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:06.239 02:15:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 ************************************ 00:15:06.239 START TEST raid_superblock_test 00:15:06.239 ************************************ 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:06.239 02:15:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=53018 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 53018 /var/tmp/spdk-raid.sock 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 53018 ']' 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:06.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:06.239 02:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.239 [2024-05-15 02:15:54.228793] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:15:06.239 [2024-05-15 02:15:54.229031] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:06.807 EAL: TSC is not safe to use in SMP mode 00:15:06.807 EAL: TSC is not invariant 00:15:06.807 [2024-05-15 02:15:54.739010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.065 [2024-05-15 02:15:54.849277] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
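Here raid_superblock_test starts a fresh bdev_svc app listening on /var/tmp/spdk-raid.sock with -L bdev_raid debug logging and, as the following entries show, builds each base device as a malloc bdev wrapped in a passthru bdev (pt1..pt3) before assembling a raid0 with a superblock across the passthru layer. A hedged sketch of that setup, using only RPCs and arguments that appear in this log; the loop and the $rpc shorthand are illustrative, and it assumes the app is already up and listening:

    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        # 32 MiB malloc backing bdev, then a passthru bdev on top with a fixed UUID
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
    done
    # raid0 across the passthru bdevs, 64 KiB strips, superblock enabled (-s)
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'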
00:15:07.065 [2024-05-15 02:15:54.852116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.065 [2024-05-15 02:15:54.853171] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.065 [2024-05-15 02:15:54.853199] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.324 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:07.582 malloc1 00:15:07.582 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.149 [2024-05-15 02:15:55.944641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.149 [2024-05-15 02:15:55.944712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.149 [2024-05-15 02:15:55.945414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba55780 00:15:08.149 [2024-05-15 02:15:55.945446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.149 [2024-05-15 02:15:55.946240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.149 [2024-05-15 02:15:55.946270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.149 pt1 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.149 02:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.149 02:15:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:08.407 malloc2 00:15:08.407 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.665 [2024-05-15 02:15:56.476682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.665 [2024-05-15 02:15:56.476766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.665 [2024-05-15 02:15:56.476798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba55c80 00:15:08.665 [2024-05-15 02:15:56.476806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.665 [2024-05-15 02:15:56.477357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.665 [2024-05-15 02:15:56.477420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.665 pt2 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.665 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:08.922 malloc3 00:15:08.922 02:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.181 [2024-05-15 02:15:57.080711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.181 [2024-05-15 02:15:57.080777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.181 [2024-05-15 02:15:57.080807] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba56180 00:15:09.181 [2024-05-15 02:15:57.080816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.181 [2024-05-15 02:15:57.081443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.181 [2024-05-15 02:15:57.081494] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.181 pt3 00:15:09.181 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:09.181 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:09.181 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:09.439 [2024-05-15 02:15:57.428771] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.439 [2024-05-15 02:15:57.429298] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.439 [2024-05-15 02:15:57.429325] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.439 [2024-05-15 02:15:57.429375] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ba56400 00:15:09.439 [2024-05-15 02:15:57.429380] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:09.439 [2024-05-15 02:15:57.429423] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bab8e20 00:15:09.439 [2024-05-15 02:15:57.429501] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ba56400 00:15:09.439 [2024-05-15 02:15:57.429510] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82ba56400 00:15:09.439 [2024-05-15 02:15:57.429537] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.439 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.698 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.698 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.956 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.956 "name": "raid_bdev1", 00:15:09.956 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:09.956 "strip_size_kb": 64, 00:15:09.956 "state": "online", 00:15:09.956 "raid_level": "raid0", 00:15:09.956 "superblock": true, 00:15:09.956 "num_base_bdevs": 3, 00:15:09.956 "num_base_bdevs_discovered": 3, 00:15:09.956 "num_base_bdevs_operational": 3, 00:15:09.956 "base_bdevs_list": [ 00:15:09.956 { 00:15:09.956 "name": "pt1", 00:15:09.956 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:09.956 "is_configured": true, 00:15:09.956 "data_offset": 2048, 00:15:09.956 "data_size": 63488 00:15:09.956 }, 00:15:09.956 { 00:15:09.956 "name": "pt2", 00:15:09.956 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:09.956 "is_configured": true, 00:15:09.956 
"data_offset": 2048, 00:15:09.956 "data_size": 63488 00:15:09.956 }, 00:15:09.956 { 00:15:09.956 "name": "pt3", 00:15:09.956 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:09.956 "is_configured": true, 00:15:09.956 "data_offset": 2048, 00:15:09.956 "data_size": 63488 00:15:09.956 } 00:15:09.956 ] 00:15:09.956 }' 00:15:09.956 02:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.956 02:15:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.214 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:10.474 [2024-05-15 02:15:58.352914] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.474 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:10.474 "name": "raid_bdev1", 00:15:10.474 "aliases": [ 00:15:10.474 "10c598d8-1261-11ef-99fd-bfc7c66e2865" 00:15:10.474 ], 00:15:10.474 "product_name": "Raid Volume", 00:15:10.474 "block_size": 512, 00:15:10.474 "num_blocks": 190464, 00:15:10.474 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:10.474 "assigned_rate_limits": { 00:15:10.474 "rw_ios_per_sec": 0, 00:15:10.474 "rw_mbytes_per_sec": 0, 00:15:10.475 "r_mbytes_per_sec": 0, 00:15:10.475 "w_mbytes_per_sec": 0 00:15:10.475 }, 00:15:10.475 "claimed": false, 00:15:10.475 "zoned": false, 00:15:10.475 "supported_io_types": { 00:15:10.475 "read": true, 00:15:10.475 "write": true, 00:15:10.475 "unmap": true, 00:15:10.475 "write_zeroes": true, 00:15:10.475 "flush": true, 00:15:10.475 "reset": true, 00:15:10.475 "compare": false, 00:15:10.475 "compare_and_write": false, 00:15:10.475 "abort": false, 00:15:10.475 "nvme_admin": false, 00:15:10.475 "nvme_io": false 00:15:10.475 }, 00:15:10.475 "memory_domains": [ 00:15:10.475 { 00:15:10.475 "dma_device_id": "system", 00:15:10.475 "dma_device_type": 1 00:15:10.475 }, 00:15:10.475 { 00:15:10.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.475 "dma_device_type": 2 00:15:10.475 }, 00:15:10.475 { 00:15:10.475 "dma_device_id": "system", 00:15:10.475 "dma_device_type": 1 00:15:10.475 }, 00:15:10.475 { 00:15:10.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.475 "dma_device_type": 2 00:15:10.475 }, 00:15:10.475 { 00:15:10.475 "dma_device_id": "system", 00:15:10.475 "dma_device_type": 1 00:15:10.475 }, 00:15:10.475 { 00:15:10.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.475 "dma_device_type": 2 00:15:10.475 } 00:15:10.475 ], 00:15:10.475 "driver_specific": { 00:15:10.475 "raid": { 00:15:10.475 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:10.475 "strip_size_kb": 64, 00:15:10.475 "state": "online", 00:15:10.475 "raid_level": "raid0", 
00:15:10.475 "superblock": true, 00:15:10.475 "num_base_bdevs": 3, 00:15:10.475 "num_base_bdevs_discovered": 3, 00:15:10.475 "num_base_bdevs_operational": 3, 00:15:10.475 "base_bdevs_list": [ 00:15:10.475 { 00:15:10.475 "name": "pt1", 00:15:10.475 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:10.475 "is_configured": true, 00:15:10.475 "data_offset": 2048, 00:15:10.475 "data_size": 63488 00:15:10.475 }, 00:15:10.475 { 00:15:10.475 "name": "pt2", 00:15:10.475 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:10.475 "is_configured": true, 00:15:10.475 "data_offset": 2048, 00:15:10.475 "data_size": 63488 00:15:10.475 }, 00:15:10.475 { 00:15:10.475 "name": "pt3", 00:15:10.475 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:10.475 "is_configured": true, 00:15:10.475 "data_offset": 2048, 00:15:10.475 "data_size": 63488 00:15:10.475 } 00:15:10.475 ] 00:15:10.475 } 00:15:10.475 } 00:15:10.475 }' 00:15:10.475 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.475 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:15:10.475 pt2 00:15:10.475 pt3' 00:15:10.475 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:10.475 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:10.475 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:10.734 "name": "pt1", 00:15:10.734 "aliases": [ 00:15:10.734 "705177fb-26cf-3159-a0b2-b5120c01a7b7" 00:15:10.734 ], 00:15:10.734 "product_name": "passthru", 00:15:10.734 "block_size": 512, 00:15:10.734 "num_blocks": 65536, 00:15:10.734 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:10.734 "assigned_rate_limits": { 00:15:10.734 "rw_ios_per_sec": 0, 00:15:10.734 "rw_mbytes_per_sec": 0, 00:15:10.734 "r_mbytes_per_sec": 0, 00:15:10.734 "w_mbytes_per_sec": 0 00:15:10.734 }, 00:15:10.734 "claimed": true, 00:15:10.734 "claim_type": "exclusive_write", 00:15:10.734 "zoned": false, 00:15:10.734 "supported_io_types": { 00:15:10.734 "read": true, 00:15:10.734 "write": true, 00:15:10.734 "unmap": true, 00:15:10.734 "write_zeroes": true, 00:15:10.734 "flush": true, 00:15:10.734 "reset": true, 00:15:10.734 "compare": false, 00:15:10.734 "compare_and_write": false, 00:15:10.734 "abort": true, 00:15:10.734 "nvme_admin": false, 00:15:10.734 "nvme_io": false 00:15:10.734 }, 00:15:10.734 "memory_domains": [ 00:15:10.734 { 00:15:10.734 "dma_device_id": "system", 00:15:10.734 "dma_device_type": 1 00:15:10.734 }, 00:15:10.734 { 00:15:10.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.734 "dma_device_type": 2 00:15:10.734 } 00:15:10.734 ], 00:15:10.734 "driver_specific": { 00:15:10.734 "passthru": { 00:15:10.734 "name": "pt1", 00:15:10.734 "base_bdev_name": "malloc1" 00:15:10.734 } 00:15:10.734 } 00:15:10.734 }' 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:10.734 
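The array itself is assembled from the three passthru bdevs with bdev_raid_create, using -s so a superblock is written to each member, and then checked twice: verify_raid_bdev_state reads the raid entry from bdev_raid_get_bdevs and asserts state/level/strip size, while verify_raid_bdev_properties dumps each configured member with bdev_get_bdevs and asserts a few jq-extracted fields. Roughly, with the commands and jq filters taken from the trace (the surrounding assertions are only sketched):

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  # state check: the raid entry must report state "online", raid_level "raid0", strip_size_kb 64
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'
  # per-member checks, repeated for pt1, pt2 and pt3
  info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 | jq '.[]')
  jq .block_size <<< "$info"     # must match the raid volume's block size (512 here)
  jq .md_size <<< "$info"        # null: no separate metadata
  jq .md_interleave <<< "$info"  # null
  jq .dif_type <<< "$info"       # null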
02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:10.734 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:10.992 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:10.992 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:10.992 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:10.992 02:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:11.260 "name": "pt2", 00:15:11.260 "aliases": [ 00:15:11.260 "1c671daf-3ed4-7a5b-ae9f-7739cc956198" 00:15:11.260 ], 00:15:11.260 "product_name": "passthru", 00:15:11.260 "block_size": 512, 00:15:11.260 "num_blocks": 65536, 00:15:11.260 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:11.260 "assigned_rate_limits": { 00:15:11.260 "rw_ios_per_sec": 0, 00:15:11.260 "rw_mbytes_per_sec": 0, 00:15:11.260 "r_mbytes_per_sec": 0, 00:15:11.260 "w_mbytes_per_sec": 0 00:15:11.260 }, 00:15:11.260 "claimed": true, 00:15:11.260 "claim_type": "exclusive_write", 00:15:11.260 "zoned": false, 00:15:11.260 "supported_io_types": { 00:15:11.260 "read": true, 00:15:11.260 "write": true, 00:15:11.260 "unmap": true, 00:15:11.260 "write_zeroes": true, 00:15:11.260 "flush": true, 00:15:11.260 "reset": true, 00:15:11.260 "compare": false, 00:15:11.260 "compare_and_write": false, 00:15:11.260 "abort": true, 00:15:11.260 "nvme_admin": false, 00:15:11.260 "nvme_io": false 00:15:11.260 }, 00:15:11.260 "memory_domains": [ 00:15:11.260 { 00:15:11.260 "dma_device_id": "system", 00:15:11.260 "dma_device_type": 1 00:15:11.260 }, 00:15:11.260 { 00:15:11.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.260 "dma_device_type": 2 00:15:11.260 } 00:15:11.260 ], 00:15:11.260 "driver_specific": { 00:15:11.260 "passthru": { 00:15:11.260 "name": "pt2", 00:15:11.260 "base_bdev_name": "malloc2" 00:15:11.260 } 00:15:11.260 } 00:15:11.260 }' 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:11.260 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:11.519 "name": "pt3", 00:15:11.519 "aliases": [ 00:15:11.519 "bc6ed167-d690-cf58-a4b8-f03a677cbea9" 00:15:11.519 ], 00:15:11.519 "product_name": "passthru", 00:15:11.519 "block_size": 512, 00:15:11.519 "num_blocks": 65536, 00:15:11.519 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:11.519 "assigned_rate_limits": { 00:15:11.519 "rw_ios_per_sec": 0, 00:15:11.519 "rw_mbytes_per_sec": 0, 00:15:11.519 "r_mbytes_per_sec": 0, 00:15:11.519 "w_mbytes_per_sec": 0 00:15:11.519 }, 00:15:11.519 "claimed": true, 00:15:11.519 "claim_type": "exclusive_write", 00:15:11.519 "zoned": false, 00:15:11.519 "supported_io_types": { 00:15:11.519 "read": true, 00:15:11.519 "write": true, 00:15:11.519 "unmap": true, 00:15:11.519 "write_zeroes": true, 00:15:11.519 "flush": true, 00:15:11.519 "reset": true, 00:15:11.519 "compare": false, 00:15:11.519 "compare_and_write": false, 00:15:11.519 "abort": true, 00:15:11.519 "nvme_admin": false, 00:15:11.519 "nvme_io": false 00:15:11.519 }, 00:15:11.519 "memory_domains": [ 00:15:11.519 { 00:15:11.519 "dma_device_id": "system", 00:15:11.519 "dma_device_type": 1 00:15:11.519 }, 00:15:11.519 { 00:15:11.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.519 "dma_device_type": 2 00:15:11.519 } 00:15:11.519 ], 00:15:11.519 "driver_specific": { 00:15:11.519 "passthru": { 00:15:11.519 "name": "pt3", 00:15:11.519 "base_bdev_name": "malloc3" 00:15:11.519 } 00:15:11.519 } 00:15:11.519 }' 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:11.519 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:12.084 [2024-05-15 02:15:59.869650] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.084 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=10c598d8-1261-11ef-99fd-bfc7c66e2865 00:15:12.084 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 10c598d8-1261-11ef-99fd-bfc7c66e2865 ']' 00:15:12.084 02:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:12.343 [2024-05-15 02:16:00.149644] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.343 [2024-05-15 02:16:00.149687] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.343 [2024-05-15 02:16:00.149717] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.343 [2024-05-15 02:16:00.149737] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.343 [2024-05-15 02:16:00.149746] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ba56400 name raid_bdev1, state offline 00:15:12.343 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.343 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:12.601 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:12.601 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:12.601 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.601 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:12.859 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.859 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:13.117 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.117 02:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:13.376 02:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:13.376 02:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- 
# local es=0 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:13.635 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:13.893 [2024-05-15 02:16:01.689750] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:13.893 [2024-05-15 02:16:01.690234] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:13.893 [2024-05-15 02:16:01.690253] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:13.893 [2024-05-15 02:16:01.690267] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:13.893 [2024-05-15 02:16:01.690310] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:13.893 [2024-05-15 02:16:01.690321] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:13.893 [2024-05-15 02:16:01.690330] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.893 [2024-05-15 02:16:01.690334] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ba56180 name raid_bdev1, state configuring 00:15:13.893 request: 00:15:13.893 { 00:15:13.893 "name": "raid_bdev1", 00:15:13.893 "raid_level": "raid0", 00:15:13.893 "base_bdevs": [ 00:15:13.893 "malloc1", 00:15:13.893 "malloc2", 00:15:13.893 "malloc3" 00:15:13.893 ], 00:15:13.893 "superblock": false, 00:15:13.893 "strip_size_kb": 64, 00:15:13.893 "method": "bdev_raid_create", 00:15:13.893 "req_id": 1 00:15:13.893 } 00:15:13.893 Got JSON-RPC error response 00:15:13.893 response: 00:15:13.893 { 00:15:13.893 "code": -17, 00:15:13.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:13.893 } 00:15:13.893 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:13.893 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.894 02:16:01 
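The failing bdev_raid_create above is the negative half of the superblock test: raid_bdev1 and its passthru bdevs were deleted just before, but the malloc bdevs underneath still carry raid_bdev1's on-disk superblock, so building a new array directly on top of malloc1..3 is rejected with JSON-RPC error -17 ("File exists"). The sequence, with commands copied from the trace (a sketch, not the verbatim test flow):

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  for pt in pt1 pt2 pt3; do
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete "$pt"
  done
  # malloc1..3 still hold the old superblock, so this fails with -17 "File exists"
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'malloc1 malloc2 malloc3' -n raid_bdev1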
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.894 02:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.894 02:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.894 02:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:14.153 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:14.153 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:14.153 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.478 [2024-05-15 02:16:02.301777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.478 [2024-05-15 02:16:02.301863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.478 [2024-05-15 02:16:02.301895] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba55c80 00:15:14.478 [2024-05-15 02:16:02.301903] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.478 [2024-05-15 02:16:02.302496] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.478 [2024-05-15 02:16:02.302545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.478 [2024-05-15 02:16:02.302587] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:14.478 [2024-05-15 02:16:02.302619] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.478 pt1 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.478 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.739 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.739 "name": "raid_bdev1", 00:15:14.739 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:14.739 "strip_size_kb": 64, 00:15:14.739 "state": 
"configuring", 00:15:14.739 "raid_level": "raid0", 00:15:14.739 "superblock": true, 00:15:14.739 "num_base_bdevs": 3, 00:15:14.739 "num_base_bdevs_discovered": 1, 00:15:14.739 "num_base_bdevs_operational": 3, 00:15:14.739 "base_bdevs_list": [ 00:15:14.739 { 00:15:14.739 "name": "pt1", 00:15:14.739 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:14.739 "is_configured": true, 00:15:14.739 "data_offset": 2048, 00:15:14.739 "data_size": 63488 00:15:14.739 }, 00:15:14.739 { 00:15:14.739 "name": null, 00:15:14.739 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:14.739 "is_configured": false, 00:15:14.739 "data_offset": 2048, 00:15:14.739 "data_size": 63488 00:15:14.739 }, 00:15:14.739 { 00:15:14.739 "name": null, 00:15:14.739 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:14.739 "is_configured": false, 00:15:14.739 "data_offset": 2048, 00:15:14.739 "data_size": 63488 00:15:14.739 } 00:15:14.739 ] 00:15:14.739 }' 00:15:14.739 02:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.739 02:16:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.997 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:14.997 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.255 [2024-05-15 02:16:03.225832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.255 [2024-05-15 02:16:03.225915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.255 [2024-05-15 02:16:03.225955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba56680 00:15:15.255 [2024-05-15 02:16:03.225968] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.255 [2024-05-15 02:16:03.226103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.255 [2024-05-15 02:16:03.226135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.255 [2024-05-15 02:16:03.226177] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:15.255 [2024-05-15 02:16:03.226194] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.255 pt2 00:15:15.255 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:15.823 [2024-05-15 02:16:03.569886] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.823 
02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.823 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.081 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.081 "name": "raid_bdev1", 00:15:16.081 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:16.081 "strip_size_kb": 64, 00:15:16.081 "state": "configuring", 00:15:16.081 "raid_level": "raid0", 00:15:16.081 "superblock": true, 00:15:16.081 "num_base_bdevs": 3, 00:15:16.081 "num_base_bdevs_discovered": 1, 00:15:16.081 "num_base_bdevs_operational": 3, 00:15:16.081 "base_bdevs_list": [ 00:15:16.081 { 00:15:16.081 "name": "pt1", 00:15:16.081 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:16.081 "is_configured": true, 00:15:16.081 "data_offset": 2048, 00:15:16.081 "data_size": 63488 00:15:16.081 }, 00:15:16.081 { 00:15:16.081 "name": null, 00:15:16.081 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:16.081 "is_configured": false, 00:15:16.081 "data_offset": 2048, 00:15:16.081 "data_size": 63488 00:15:16.081 }, 00:15:16.081 { 00:15:16.081 "name": null, 00:15:16.081 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:16.081 "is_configured": false, 00:15:16.081 "data_offset": 2048, 00:15:16.081 "data_size": 63488 00:15:16.081 } 00:15:16.081 ] 00:15:16.081 }' 00:15:16.081 02:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.081 02:16:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.339 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:16.339 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:16.339 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.597 [2024-05-15 02:16:04.469927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.597 [2024-05-15 02:16:04.470004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.597 [2024-05-15 02:16:04.470035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba56680 00:15:16.597 [2024-05-15 02:16:04.470044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.597 [2024-05-15 02:16:04.470148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.597 [2024-05-15 02:16:04.470158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.597 [2024-05-15 02:16:04.470179] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:16.597 [2024-05-15 02:16:04.470188] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:16.598 pt2 00:15:16.598 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:16.598 02:16:04 
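Re-creating the passthru bdevs is what re-assembles the array from its superblocks: each bdev_passthru_create over a malloc bdev that still holds the superblock is examined and immediately claimed by raid_bdev1, and the raid stays in the "configuring" state (num_base_bdevs_discovered below num_base_bdevs_operational) until every member is back. The trace above also deletes pt2 once more mid-way to exercise removing a base bdev while configuring, then re-adds it; pt3 follows below and the array transitions to "online". The re-assembly step boils down to (UUIDs and socket path from the log; the final state query is a sketch):

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  # the raid entry flips from "configuring" to "online" once the third member is claimed
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1").state'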
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:16.598 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:16.856 [2024-05-15 02:16:04.761941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:16.856 [2024-05-15 02:16:04.762005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.856 [2024-05-15 02:16:04.762036] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba56400 00:15:16.856 [2024-05-15 02:16:04.762044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.856 [2024-05-15 02:16:04.762146] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.856 [2024-05-15 02:16:04.762161] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:16.856 [2024-05-15 02:16:04.762184] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:16.856 [2024-05-15 02:16:04.762191] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:16.856 [2024-05-15 02:16:04.762226] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ba55780 00:15:16.856 [2024-05-15 02:16:04.762234] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:16.856 [2024-05-15 02:16:04.762264] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bab8e20 00:15:16.856 [2024-05-15 02:16:04.762309] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ba55780 00:15:16.856 [2024-05-15 02:16:04.762313] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82ba55780 00:15:16.856 [2024-05-15 02:16:04.762332] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.856 pt3 00:15:16.856 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:16.856 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:16.856 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:16.856 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:15:16.857 02:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.146 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.146 "name": "raid_bdev1", 00:15:17.146 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:17.146 "strip_size_kb": 64, 00:15:17.146 "state": "online", 00:15:17.146 "raid_level": "raid0", 00:15:17.147 "superblock": true, 00:15:17.147 "num_base_bdevs": 3, 00:15:17.147 "num_base_bdevs_discovered": 3, 00:15:17.147 "num_base_bdevs_operational": 3, 00:15:17.147 "base_bdevs_list": [ 00:15:17.147 { 00:15:17.147 "name": "pt1", 00:15:17.147 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:17.147 "is_configured": true, 00:15:17.147 "data_offset": 2048, 00:15:17.147 "data_size": 63488 00:15:17.147 }, 00:15:17.147 { 00:15:17.147 "name": "pt2", 00:15:17.147 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:17.147 "is_configured": true, 00:15:17.147 "data_offset": 2048, 00:15:17.147 "data_size": 63488 00:15:17.147 }, 00:15:17.147 { 00:15:17.147 "name": "pt3", 00:15:17.147 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:17.147 "is_configured": true, 00:15:17.147 "data_offset": 2048, 00:15:17.147 "data_size": 63488 00:15:17.147 } 00:15:17.147 ] 00:15:17.147 }' 00:15:17.147 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.147 02:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:17.405 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:17.679 [2024-05-15 02:16:05.654054] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.679 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:17.679 "name": "raid_bdev1", 00:15:17.679 "aliases": [ 00:15:17.679 "10c598d8-1261-11ef-99fd-bfc7c66e2865" 00:15:17.679 ], 00:15:17.679 "product_name": "Raid Volume", 00:15:17.679 "block_size": 512, 00:15:17.679 "num_blocks": 190464, 00:15:17.679 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:17.679 "assigned_rate_limits": { 00:15:17.679 "rw_ios_per_sec": 0, 00:15:17.679 "rw_mbytes_per_sec": 0, 00:15:17.679 "r_mbytes_per_sec": 0, 00:15:17.679 "w_mbytes_per_sec": 0 00:15:17.679 }, 00:15:17.679 "claimed": false, 00:15:17.679 "zoned": false, 00:15:17.679 "supported_io_types": { 00:15:17.679 "read": true, 00:15:17.679 "write": true, 00:15:17.679 "unmap": true, 00:15:17.679 "write_zeroes": true, 00:15:17.679 "flush": true, 00:15:17.679 "reset": true, 00:15:17.679 "compare": false, 00:15:17.680 "compare_and_write": false, 00:15:17.680 "abort": false, 00:15:17.680 "nvme_admin": 
false, 00:15:17.680 "nvme_io": false 00:15:17.680 }, 00:15:17.680 "memory_domains": [ 00:15:17.680 { 00:15:17.680 "dma_device_id": "system", 00:15:17.680 "dma_device_type": 1 00:15:17.680 }, 00:15:17.680 { 00:15:17.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.680 "dma_device_type": 2 00:15:17.680 }, 00:15:17.680 { 00:15:17.680 "dma_device_id": "system", 00:15:17.680 "dma_device_type": 1 00:15:17.680 }, 00:15:17.680 { 00:15:17.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.680 "dma_device_type": 2 00:15:17.680 }, 00:15:17.680 { 00:15:17.680 "dma_device_id": "system", 00:15:17.680 "dma_device_type": 1 00:15:17.680 }, 00:15:17.680 { 00:15:17.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.680 "dma_device_type": 2 00:15:17.680 } 00:15:17.680 ], 00:15:17.680 "driver_specific": { 00:15:17.680 "raid": { 00:15:17.680 "uuid": "10c598d8-1261-11ef-99fd-bfc7c66e2865", 00:15:17.680 "strip_size_kb": 64, 00:15:17.680 "state": "online", 00:15:17.680 "raid_level": "raid0", 00:15:17.680 "superblock": true, 00:15:17.680 "num_base_bdevs": 3, 00:15:17.680 "num_base_bdevs_discovered": 3, 00:15:17.680 "num_base_bdevs_operational": 3, 00:15:17.680 "base_bdevs_list": [ 00:15:17.680 { 00:15:17.680 "name": "pt1", 00:15:17.680 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:17.680 "is_configured": true, 00:15:17.680 "data_offset": 2048, 00:15:17.680 "data_size": 63488 00:15:17.680 }, 00:15:17.680 { 00:15:17.680 "name": "pt2", 00:15:17.680 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:17.680 "is_configured": true, 00:15:17.680 "data_offset": 2048, 00:15:17.680 "data_size": 63488 00:15:17.680 }, 00:15:17.680 { 00:15:17.680 "name": "pt3", 00:15:17.680 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:17.680 "is_configured": true, 00:15:17.680 "data_offset": 2048, 00:15:17.680 "data_size": 63488 00:15:17.680 } 00:15:17.680 ] 00:15:17.680 } 00:15:17.680 } 00:15:17.680 }' 00:15:17.680 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.680 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:15:17.680 pt2 00:15:17.680 pt3' 00:15:17.680 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:17.680 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:17.680 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:18.248 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:18.248 "name": "pt1", 00:15:18.248 "aliases": [ 00:15:18.248 "705177fb-26cf-3159-a0b2-b5120c01a7b7" 00:15:18.248 ], 00:15:18.248 "product_name": "passthru", 00:15:18.248 "block_size": 512, 00:15:18.248 "num_blocks": 65536, 00:15:18.248 "uuid": "705177fb-26cf-3159-a0b2-b5120c01a7b7", 00:15:18.248 "assigned_rate_limits": { 00:15:18.248 "rw_ios_per_sec": 0, 00:15:18.248 "rw_mbytes_per_sec": 0, 00:15:18.248 "r_mbytes_per_sec": 0, 00:15:18.248 "w_mbytes_per_sec": 0 00:15:18.248 }, 00:15:18.248 "claimed": true, 00:15:18.248 "claim_type": "exclusive_write", 00:15:18.248 "zoned": false, 00:15:18.248 "supported_io_types": { 00:15:18.248 "read": true, 00:15:18.248 "write": true, 00:15:18.248 "unmap": true, 00:15:18.248 "write_zeroes": true, 00:15:18.248 "flush": true, 00:15:18.248 "reset": true, 00:15:18.248 "compare": false, 00:15:18.248 
"compare_and_write": false, 00:15:18.248 "abort": true, 00:15:18.248 "nvme_admin": false, 00:15:18.248 "nvme_io": false 00:15:18.248 }, 00:15:18.248 "memory_domains": [ 00:15:18.248 { 00:15:18.248 "dma_device_id": "system", 00:15:18.248 "dma_device_type": 1 00:15:18.248 }, 00:15:18.248 { 00:15:18.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.248 "dma_device_type": 2 00:15:18.248 } 00:15:18.248 ], 00:15:18.248 "driver_specific": { 00:15:18.248 "passthru": { 00:15:18.248 "name": "pt1", 00:15:18.248 "base_bdev_name": "malloc1" 00:15:18.248 } 00:15:18.248 } 00:15:18.248 }' 00:15:18.248 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:18.248 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:18.248 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:18.248 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:18.248 02:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:18.248 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.248 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:18.249 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:18.506 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:18.506 "name": "pt2", 00:15:18.506 "aliases": [ 00:15:18.506 "1c671daf-3ed4-7a5b-ae9f-7739cc956198" 00:15:18.506 ], 00:15:18.506 "product_name": "passthru", 00:15:18.506 "block_size": 512, 00:15:18.506 "num_blocks": 65536, 00:15:18.506 "uuid": "1c671daf-3ed4-7a5b-ae9f-7739cc956198", 00:15:18.506 "assigned_rate_limits": { 00:15:18.506 "rw_ios_per_sec": 0, 00:15:18.506 "rw_mbytes_per_sec": 0, 00:15:18.506 "r_mbytes_per_sec": 0, 00:15:18.506 "w_mbytes_per_sec": 0 00:15:18.506 }, 00:15:18.506 "claimed": true, 00:15:18.506 "claim_type": "exclusive_write", 00:15:18.506 "zoned": false, 00:15:18.506 "supported_io_types": { 00:15:18.506 "read": true, 00:15:18.506 "write": true, 00:15:18.506 "unmap": true, 00:15:18.506 "write_zeroes": true, 00:15:18.506 "flush": true, 00:15:18.506 "reset": true, 00:15:18.506 "compare": false, 00:15:18.506 "compare_and_write": false, 00:15:18.506 "abort": true, 00:15:18.506 "nvme_admin": false, 00:15:18.506 "nvme_io": false 00:15:18.506 }, 00:15:18.506 "memory_domains": [ 00:15:18.506 { 00:15:18.506 "dma_device_id": "system", 00:15:18.506 "dma_device_type": 1 00:15:18.506 }, 00:15:18.506 { 00:15:18.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.506 "dma_device_type": 2 00:15:18.506 } 00:15:18.506 ], 00:15:18.506 "driver_specific": { 
00:15:18.506 "passthru": { 00:15:18.506 "name": "pt2", 00:15:18.506 "base_bdev_name": "malloc2" 00:15:18.506 } 00:15:18.506 } 00:15:18.506 }' 00:15:18.506 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:18.507 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:18.765 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:18.765 "name": "pt3", 00:15:18.765 "aliases": [ 00:15:18.765 "bc6ed167-d690-cf58-a4b8-f03a677cbea9" 00:15:18.765 ], 00:15:18.765 "product_name": "passthru", 00:15:18.765 "block_size": 512, 00:15:18.765 "num_blocks": 65536, 00:15:18.765 "uuid": "bc6ed167-d690-cf58-a4b8-f03a677cbea9", 00:15:18.765 "assigned_rate_limits": { 00:15:18.765 "rw_ios_per_sec": 0, 00:15:18.765 "rw_mbytes_per_sec": 0, 00:15:18.765 "r_mbytes_per_sec": 0, 00:15:18.765 "w_mbytes_per_sec": 0 00:15:18.765 }, 00:15:18.765 "claimed": true, 00:15:18.765 "claim_type": "exclusive_write", 00:15:18.765 "zoned": false, 00:15:18.765 "supported_io_types": { 00:15:18.765 "read": true, 00:15:18.765 "write": true, 00:15:18.765 "unmap": true, 00:15:18.765 "write_zeroes": true, 00:15:18.765 "flush": true, 00:15:18.765 "reset": true, 00:15:18.765 "compare": false, 00:15:18.765 "compare_and_write": false, 00:15:18.765 "abort": true, 00:15:18.765 "nvme_admin": false, 00:15:18.765 "nvme_io": false 00:15:18.765 }, 00:15:18.765 "memory_domains": [ 00:15:18.765 { 00:15:18.765 "dma_device_id": "system", 00:15:18.765 "dma_device_type": 1 00:15:18.765 }, 00:15:18.765 { 00:15:18.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.765 "dma_device_type": 2 00:15:18.765 } 00:15:18.765 ], 00:15:18.765 "driver_specific": { 00:15:18.765 "passthru": { 00:15:18.765 "name": "pt3", 00:15:18.765 "base_bdev_name": "malloc3" 00:15:18.765 } 00:15:18.765 } 00:15:18.765 }' 00:15:18.765 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:18.765 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:18.765 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:18.765 
02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:18.765 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:18.765 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.765 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:19.023 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:19.023 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:19.023 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:19.023 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:19.023 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:19.023 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:19.023 02:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:19.023 [2024-05-15 02:16:07.026131] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 10c598d8-1261-11ef-99fd-bfc7c66e2865 '!=' 10c598d8-1261-11ef-99fd-bfc7c66e2865 ']' 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 53018 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 53018 ']' 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 53018 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 53018 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:15:19.281 killing process with pid 53018 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53018' 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 53018 00:15:19.281 [2024-05-15 02:16:07.059245] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.281 [2024-05-15 02:16:07.059290] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.281 [2024-05-15 02:16:07.059319] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 53018 00:15:19.281 [2024-05-15 02:16:07.059333] bdev_raid.c: 348:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x82ba55780 name raid_bdev1, state offline 00:15:19.281 [2024-05-15 02:16:07.074977] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:19.281 00:15:19.281 real 0m13.018s 00:15:19.281 user 0m23.237s 00:15:19.281 sys 0m2.029s 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:19.281 02:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.281 ************************************ 00:15:19.281 END TEST raid_superblock_test 00:15:19.281 ************************************ 00:15:19.281 02:16:07 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:15:19.281 02:16:07 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:19.281 02:16:07 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:19.281 02:16:07 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:19.281 02:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.281 ************************************ 00:15:19.281 START TEST raid_state_function_test 00:15:19.281 ************************************ 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:19.281 02:16:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:15:19.281 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=53375 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 53375' 00:15:19.282 Process raid pid: 53375 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 53375 /var/tmp/spdk-raid.sock 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 53375 ']' 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:19.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:19.282 02:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.282 [2024-05-15 02:16:07.291974] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:15:19.282 [2024-05-15 02:16:07.292300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:19.847 EAL: TSC is not safe to use in SMP mode 00:15:19.847 EAL: TSC is not invariant 00:15:19.847 [2024-05-15 02:16:07.770187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.847 [2024-05-15 02:16:07.866331] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
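The raid_state_function_test block that follows drives the bdev_raid module entirely over the RPC socket of the bdev_svc app launched above. As a rough, hand-runnable sketch of that flow, using only calls that appear in this trace (socket path, malloc sizes and RAID parameters are copied from the log; this is not the literal test script):

/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# three 32 MiB malloc bdevs with 512-byte blocks serve as base bdevs
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
$RPC bdev_malloc_create 32 512 -b BaseBdev3
# assemble a concat raid bdev with a 64 KiB strip size
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# the raid bdev reports "configuring" until every base bdev exists, then "online"
$RPC bdev_raid_get_bdevs all
# tear down when done
$RPC bdev_raid_delete Existed_Raid

In the log the same calls are issued from bdev/bdev_raid.sh, and verify_raid_bdev_state checks the JSON returned by bdev_raid_get_bdevs (filtered with jq) against the expected state, raid level, strip size and base bdev counts.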
00:15:20.105 [2024-05-15 02:16:07.868731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.105 [2024-05-15 02:16:07.869603] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.105 [2024-05-15 02:16:07.869623] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.670 02:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:20.670 02:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:15:20.670 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:20.929 [2024-05-15 02:16:08.731208] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.929 [2024-05-15 02:16:08.731296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.929 [2024-05-15 02:16:08.731304] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.929 [2024-05-15 02:16:08.731313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.929 [2024-05-15 02:16:08.731317] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:20.929 [2024-05-15 02:16:08.731324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.929 02:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.188 02:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.188 "name": "Existed_Raid", 00:15:21.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.188 "strip_size_kb": 64, 00:15:21.188 "state": "configuring", 00:15:21.188 "raid_level": "concat", 00:15:21.188 "superblock": false, 00:15:21.188 "num_base_bdevs": 3, 00:15:21.188 "num_base_bdevs_discovered": 0, 00:15:21.188 "num_base_bdevs_operational": 3, 00:15:21.188 
"base_bdevs_list": [ 00:15:21.188 { 00:15:21.188 "name": "BaseBdev1", 00:15:21.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.188 "is_configured": false, 00:15:21.188 "data_offset": 0, 00:15:21.188 "data_size": 0 00:15:21.188 }, 00:15:21.188 { 00:15:21.188 "name": "BaseBdev2", 00:15:21.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.188 "is_configured": false, 00:15:21.188 "data_offset": 0, 00:15:21.188 "data_size": 0 00:15:21.188 }, 00:15:21.188 { 00:15:21.188 "name": "BaseBdev3", 00:15:21.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.188 "is_configured": false, 00:15:21.188 "data_offset": 0, 00:15:21.188 "data_size": 0 00:15:21.188 } 00:15:21.188 ] 00:15:21.188 }' 00:15:21.188 02:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.188 02:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.447 02:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.705 [2024-05-15 02:16:09.679200] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.705 [2024-05-15 02:16:09.679232] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c593500 name Existed_Raid, state configuring 00:15:21.705 02:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:22.272 [2024-05-15 02:16:10.035229] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.272 [2024-05-15 02:16:10.035300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.272 [2024-05-15 02:16:10.035305] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.272 [2024-05-15 02:16:10.035314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.272 [2024-05-15 02:16:10.035317] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.272 [2024-05-15 02:16:10.035324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.272 02:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.531 [2024-05-15 02:16:10.488801] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.531 BaseBdev1 00:15:22.531 02:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:22.531 02:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:22.531 02:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:22.531 02:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:22.531 02:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:22.531 02:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:22.531 02:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.790 02:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.049 [ 00:15:23.049 { 00:15:23.049 "name": "BaseBdev1", 00:15:23.049 "aliases": [ 00:15:23.049 "188e2def-1261-11ef-99fd-bfc7c66e2865" 00:15:23.049 ], 00:15:23.049 "product_name": "Malloc disk", 00:15:23.049 "block_size": 512, 00:15:23.049 "num_blocks": 65536, 00:15:23.049 "uuid": "188e2def-1261-11ef-99fd-bfc7c66e2865", 00:15:23.049 "assigned_rate_limits": { 00:15:23.049 "rw_ios_per_sec": 0, 00:15:23.049 "rw_mbytes_per_sec": 0, 00:15:23.049 "r_mbytes_per_sec": 0, 00:15:23.049 "w_mbytes_per_sec": 0 00:15:23.049 }, 00:15:23.049 "claimed": true, 00:15:23.049 "claim_type": "exclusive_write", 00:15:23.049 "zoned": false, 00:15:23.049 "supported_io_types": { 00:15:23.049 "read": true, 00:15:23.049 "write": true, 00:15:23.049 "unmap": true, 00:15:23.049 "write_zeroes": true, 00:15:23.049 "flush": true, 00:15:23.049 "reset": true, 00:15:23.049 "compare": false, 00:15:23.049 "compare_and_write": false, 00:15:23.049 "abort": true, 00:15:23.049 "nvme_admin": false, 00:15:23.049 "nvme_io": false 00:15:23.049 }, 00:15:23.049 "memory_domains": [ 00:15:23.049 { 00:15:23.049 "dma_device_id": "system", 00:15:23.049 "dma_device_type": 1 00:15:23.049 }, 00:15:23.049 { 00:15:23.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.049 "dma_device_type": 2 00:15:23.049 } 00:15:23.049 ], 00:15:23.049 "driver_specific": {} 00:15:23.049 } 00:15:23.049 ] 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.049 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.308 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.308 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.308 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.308 "name": "Existed_Raid", 00:15:23.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.308 "strip_size_kb": 64, 00:15:23.308 "state": "configuring", 00:15:23.308 "raid_level": 
"concat", 00:15:23.308 "superblock": false, 00:15:23.308 "num_base_bdevs": 3, 00:15:23.308 "num_base_bdevs_discovered": 1, 00:15:23.308 "num_base_bdevs_operational": 3, 00:15:23.308 "base_bdevs_list": [ 00:15:23.308 { 00:15:23.308 "name": "BaseBdev1", 00:15:23.308 "uuid": "188e2def-1261-11ef-99fd-bfc7c66e2865", 00:15:23.308 "is_configured": true, 00:15:23.308 "data_offset": 0, 00:15:23.308 "data_size": 65536 00:15:23.308 }, 00:15:23.308 { 00:15:23.308 "name": "BaseBdev2", 00:15:23.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.308 "is_configured": false, 00:15:23.308 "data_offset": 0, 00:15:23.308 "data_size": 0 00:15:23.308 }, 00:15:23.308 { 00:15:23.308 "name": "BaseBdev3", 00:15:23.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.308 "is_configured": false, 00:15:23.308 "data_offset": 0, 00:15:23.308 "data_size": 0 00:15:23.308 } 00:15:23.308 ] 00:15:23.308 }' 00:15:23.308 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.308 02:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.874 02:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:24.132 [2024-05-15 02:16:12.007368] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.132 [2024-05-15 02:16:12.007409] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c593500 name Existed_Raid, state configuring 00:15:24.132 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:24.390 [2024-05-15 02:16:12.335447] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.390 [2024-05-15 02:16:12.336331] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.390 [2024-05-15 02:16:12.336403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.390 [2024-05-15 02:16:12.336412] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.390 [2024-05-15 02:16:12.336425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.390 02:16:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.390 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.957 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.957 "name": "Existed_Raid", 00:15:24.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.957 "strip_size_kb": 64, 00:15:24.957 "state": "configuring", 00:15:24.957 "raid_level": "concat", 00:15:24.957 "superblock": false, 00:15:24.957 "num_base_bdevs": 3, 00:15:24.957 "num_base_bdevs_discovered": 1, 00:15:24.957 "num_base_bdevs_operational": 3, 00:15:24.957 "base_bdevs_list": [ 00:15:24.957 { 00:15:24.957 "name": "BaseBdev1", 00:15:24.957 "uuid": "188e2def-1261-11ef-99fd-bfc7c66e2865", 00:15:24.957 "is_configured": true, 00:15:24.957 "data_offset": 0, 00:15:24.957 "data_size": 65536 00:15:24.957 }, 00:15:24.957 { 00:15:24.957 "name": "BaseBdev2", 00:15:24.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.957 "is_configured": false, 00:15:24.957 "data_offset": 0, 00:15:24.957 "data_size": 0 00:15:24.957 }, 00:15:24.957 { 00:15:24.957 "name": "BaseBdev3", 00:15:24.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.957 "is_configured": false, 00:15:24.957 "data_offset": 0, 00:15:24.957 "data_size": 0 00:15:24.957 } 00:15:24.957 ] 00:15:24.957 }' 00:15:24.957 02:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.957 02:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.215 02:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.528 [2024-05-15 02:16:13.455632] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.528 BaseBdev2 00:15:25.528 02:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:15:25.528 02:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:25.528 02:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:25.528 02:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:25.528 02:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:25.528 02:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:25.528 02:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.095 02:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.095 [ 00:15:26.095 { 00:15:26.095 "name": "BaseBdev2", 00:15:26.095 "aliases": [ 00:15:26.095 
"1a5315bb-1261-11ef-99fd-bfc7c66e2865" 00:15:26.095 ], 00:15:26.095 "product_name": "Malloc disk", 00:15:26.095 "block_size": 512, 00:15:26.095 "num_blocks": 65536, 00:15:26.095 "uuid": "1a5315bb-1261-11ef-99fd-bfc7c66e2865", 00:15:26.095 "assigned_rate_limits": { 00:15:26.095 "rw_ios_per_sec": 0, 00:15:26.095 "rw_mbytes_per_sec": 0, 00:15:26.095 "r_mbytes_per_sec": 0, 00:15:26.095 "w_mbytes_per_sec": 0 00:15:26.095 }, 00:15:26.095 "claimed": true, 00:15:26.095 "claim_type": "exclusive_write", 00:15:26.095 "zoned": false, 00:15:26.095 "supported_io_types": { 00:15:26.095 "read": true, 00:15:26.095 "write": true, 00:15:26.095 "unmap": true, 00:15:26.095 "write_zeroes": true, 00:15:26.095 "flush": true, 00:15:26.095 "reset": true, 00:15:26.095 "compare": false, 00:15:26.095 "compare_and_write": false, 00:15:26.095 "abort": true, 00:15:26.095 "nvme_admin": false, 00:15:26.095 "nvme_io": false 00:15:26.095 }, 00:15:26.095 "memory_domains": [ 00:15:26.095 { 00:15:26.095 "dma_device_id": "system", 00:15:26.095 "dma_device_type": 1 00:15:26.095 }, 00:15:26.095 { 00:15:26.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.095 "dma_device_type": 2 00:15:26.095 } 00:15:26.095 ], 00:15:26.095 "driver_specific": {} 00:15:26.095 } 00:15:26.095 ] 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.095 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.663 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.663 "name": "Existed_Raid", 00:15:26.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.663 "strip_size_kb": 64, 00:15:26.663 "state": "configuring", 00:15:26.663 "raid_level": "concat", 00:15:26.663 "superblock": false, 00:15:26.663 "num_base_bdevs": 3, 00:15:26.663 "num_base_bdevs_discovered": 2, 00:15:26.663 "num_base_bdevs_operational": 3, 00:15:26.663 "base_bdevs_list": [ 
00:15:26.663 { 00:15:26.663 "name": "BaseBdev1", 00:15:26.663 "uuid": "188e2def-1261-11ef-99fd-bfc7c66e2865", 00:15:26.663 "is_configured": true, 00:15:26.663 "data_offset": 0, 00:15:26.663 "data_size": 65536 00:15:26.663 }, 00:15:26.663 { 00:15:26.663 "name": "BaseBdev2", 00:15:26.663 "uuid": "1a5315bb-1261-11ef-99fd-bfc7c66e2865", 00:15:26.663 "is_configured": true, 00:15:26.663 "data_offset": 0, 00:15:26.663 "data_size": 65536 00:15:26.663 }, 00:15:26.663 { 00:15:26.663 "name": "BaseBdev3", 00:15:26.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.663 "is_configured": false, 00:15:26.663 "data_offset": 0, 00:15:26.663 "data_size": 0 00:15:26.663 } 00:15:26.663 ] 00:15:26.663 }' 00:15:26.663 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.663 02:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.663 02:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.230 [2024-05-15 02:16:15.007730] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.230 [2024-05-15 02:16:15.007763] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c593a00 00:15:27.230 [2024-05-15 02:16:15.007767] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:27.230 [2024-05-15 02:16:15.007788] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c5f6ec0 00:15:27.230 [2024-05-15 02:16:15.007875] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c593a00 00:15:27.230 [2024-05-15 02:16:15.007879] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c593a00 00:15:27.230 [2024-05-15 02:16:15.007910] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.230 BaseBdev3 00:15:27.230 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:15:27.230 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:27.230 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:27.230 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:27.230 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:27.230 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:27.230 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.489 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.748 [ 00:15:27.748 { 00:15:27.748 "name": "BaseBdev3", 00:15:27.748 "aliases": [ 00:15:27.748 "1b3feb51-1261-11ef-99fd-bfc7c66e2865" 00:15:27.748 ], 00:15:27.748 "product_name": "Malloc disk", 00:15:27.748 "block_size": 512, 00:15:27.748 "num_blocks": 65536, 00:15:27.748 "uuid": "1b3feb51-1261-11ef-99fd-bfc7c66e2865", 00:15:27.748 "assigned_rate_limits": { 00:15:27.748 "rw_ios_per_sec": 0, 00:15:27.748 "rw_mbytes_per_sec": 0, 00:15:27.748 
"r_mbytes_per_sec": 0, 00:15:27.748 "w_mbytes_per_sec": 0 00:15:27.748 }, 00:15:27.748 "claimed": true, 00:15:27.748 "claim_type": "exclusive_write", 00:15:27.748 "zoned": false, 00:15:27.748 "supported_io_types": { 00:15:27.748 "read": true, 00:15:27.748 "write": true, 00:15:27.748 "unmap": true, 00:15:27.748 "write_zeroes": true, 00:15:27.748 "flush": true, 00:15:27.748 "reset": true, 00:15:27.748 "compare": false, 00:15:27.748 "compare_and_write": false, 00:15:27.748 "abort": true, 00:15:27.748 "nvme_admin": false, 00:15:27.748 "nvme_io": false 00:15:27.748 }, 00:15:27.748 "memory_domains": [ 00:15:27.748 { 00:15:27.748 "dma_device_id": "system", 00:15:27.748 "dma_device_type": 1 00:15:27.748 }, 00:15:27.748 { 00:15:27.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.748 "dma_device_type": 2 00:15:27.748 } 00:15:27.748 ], 00:15:27.748 "driver_specific": {} 00:15:27.748 } 00:15:27.748 ] 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.748 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.007 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.007 "name": "Existed_Raid", 00:15:28.007 "uuid": "1b3ff122-1261-11ef-99fd-bfc7c66e2865", 00:15:28.007 "strip_size_kb": 64, 00:15:28.007 "state": "online", 00:15:28.007 "raid_level": "concat", 00:15:28.007 "superblock": false, 00:15:28.007 "num_base_bdevs": 3, 00:15:28.007 "num_base_bdevs_discovered": 3, 00:15:28.007 "num_base_bdevs_operational": 3, 00:15:28.007 "base_bdevs_list": [ 00:15:28.007 { 00:15:28.007 "name": "BaseBdev1", 00:15:28.007 "uuid": "188e2def-1261-11ef-99fd-bfc7c66e2865", 00:15:28.007 "is_configured": true, 00:15:28.007 "data_offset": 0, 00:15:28.007 "data_size": 65536 00:15:28.007 }, 00:15:28.007 { 00:15:28.007 "name": "BaseBdev2", 00:15:28.007 "uuid": "1a5315bb-1261-11ef-99fd-bfc7c66e2865", 00:15:28.007 "is_configured": 
true, 00:15:28.007 "data_offset": 0, 00:15:28.007 "data_size": 65536 00:15:28.007 }, 00:15:28.007 { 00:15:28.007 "name": "BaseBdev3", 00:15:28.007 "uuid": "1b3feb51-1261-11ef-99fd-bfc7c66e2865", 00:15:28.007 "is_configured": true, 00:15:28.007 "data_offset": 0, 00:15:28.007 "data_size": 65536 00:15:28.007 } 00:15:28.007 ] 00:15:28.007 }' 00:15:28.007 02:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.007 02:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:28.590 [2024-05-15 02:16:16.571743] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:28.590 "name": "Existed_Raid", 00:15:28.590 "aliases": [ 00:15:28.590 "1b3ff122-1261-11ef-99fd-bfc7c66e2865" 00:15:28.590 ], 00:15:28.590 "product_name": "Raid Volume", 00:15:28.590 "block_size": 512, 00:15:28.590 "num_blocks": 196608, 00:15:28.590 "uuid": "1b3ff122-1261-11ef-99fd-bfc7c66e2865", 00:15:28.590 "assigned_rate_limits": { 00:15:28.590 "rw_ios_per_sec": 0, 00:15:28.590 "rw_mbytes_per_sec": 0, 00:15:28.590 "r_mbytes_per_sec": 0, 00:15:28.590 "w_mbytes_per_sec": 0 00:15:28.590 }, 00:15:28.590 "claimed": false, 00:15:28.590 "zoned": false, 00:15:28.590 "supported_io_types": { 00:15:28.590 "read": true, 00:15:28.590 "write": true, 00:15:28.590 "unmap": true, 00:15:28.590 "write_zeroes": true, 00:15:28.590 "flush": true, 00:15:28.590 "reset": true, 00:15:28.590 "compare": false, 00:15:28.590 "compare_and_write": false, 00:15:28.590 "abort": false, 00:15:28.590 "nvme_admin": false, 00:15:28.590 "nvme_io": false 00:15:28.590 }, 00:15:28.590 "memory_domains": [ 00:15:28.590 { 00:15:28.590 "dma_device_id": "system", 00:15:28.590 "dma_device_type": 1 00:15:28.590 }, 00:15:28.590 { 00:15:28.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.590 "dma_device_type": 2 00:15:28.590 }, 00:15:28.590 { 00:15:28.590 "dma_device_id": "system", 00:15:28.590 "dma_device_type": 1 00:15:28.590 }, 00:15:28.590 { 00:15:28.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.590 "dma_device_type": 2 00:15:28.590 }, 00:15:28.590 { 00:15:28.590 "dma_device_id": "system", 00:15:28.590 "dma_device_type": 1 00:15:28.590 }, 00:15:28.590 { 00:15:28.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.590 "dma_device_type": 2 00:15:28.590 } 00:15:28.590 ], 00:15:28.590 "driver_specific": { 00:15:28.590 "raid": { 00:15:28.590 "uuid": "1b3ff122-1261-11ef-99fd-bfc7c66e2865", 00:15:28.590 "strip_size_kb": 
64, 00:15:28.590 "state": "online", 00:15:28.590 "raid_level": "concat", 00:15:28.590 "superblock": false, 00:15:28.590 "num_base_bdevs": 3, 00:15:28.590 "num_base_bdevs_discovered": 3, 00:15:28.590 "num_base_bdevs_operational": 3, 00:15:28.590 "base_bdevs_list": [ 00:15:28.590 { 00:15:28.590 "name": "BaseBdev1", 00:15:28.590 "uuid": "188e2def-1261-11ef-99fd-bfc7c66e2865", 00:15:28.590 "is_configured": true, 00:15:28.590 "data_offset": 0, 00:15:28.590 "data_size": 65536 00:15:28.590 }, 00:15:28.590 { 00:15:28.590 "name": "BaseBdev2", 00:15:28.590 "uuid": "1a5315bb-1261-11ef-99fd-bfc7c66e2865", 00:15:28.590 "is_configured": true, 00:15:28.590 "data_offset": 0, 00:15:28.590 "data_size": 65536 00:15:28.590 }, 00:15:28.590 { 00:15:28.590 "name": "BaseBdev3", 00:15:28.590 "uuid": "1b3feb51-1261-11ef-99fd-bfc7c66e2865", 00:15:28.590 "is_configured": true, 00:15:28.590 "data_offset": 0, 00:15:28.590 "data_size": 65536 00:15:28.590 } 00:15:28.590 ] 00:15:28.590 } 00:15:28.590 } 00:15:28.590 }' 00:15:28.590 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.907 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:15:28.907 BaseBdev2 00:15:28.907 BaseBdev3' 00:15:28.907 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:28.907 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:28.907 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:29.166 "name": "BaseBdev1", 00:15:29.166 "aliases": [ 00:15:29.166 "188e2def-1261-11ef-99fd-bfc7c66e2865" 00:15:29.166 ], 00:15:29.166 "product_name": "Malloc disk", 00:15:29.166 "block_size": 512, 00:15:29.166 "num_blocks": 65536, 00:15:29.166 "uuid": "188e2def-1261-11ef-99fd-bfc7c66e2865", 00:15:29.166 "assigned_rate_limits": { 00:15:29.166 "rw_ios_per_sec": 0, 00:15:29.166 "rw_mbytes_per_sec": 0, 00:15:29.166 "r_mbytes_per_sec": 0, 00:15:29.166 "w_mbytes_per_sec": 0 00:15:29.166 }, 00:15:29.166 "claimed": true, 00:15:29.166 "claim_type": "exclusive_write", 00:15:29.166 "zoned": false, 00:15:29.166 "supported_io_types": { 00:15:29.166 "read": true, 00:15:29.166 "write": true, 00:15:29.166 "unmap": true, 00:15:29.166 "write_zeroes": true, 00:15:29.166 "flush": true, 00:15:29.166 "reset": true, 00:15:29.166 "compare": false, 00:15:29.166 "compare_and_write": false, 00:15:29.166 "abort": true, 00:15:29.166 "nvme_admin": false, 00:15:29.166 "nvme_io": false 00:15:29.166 }, 00:15:29.166 "memory_domains": [ 00:15:29.166 { 00:15:29.166 "dma_device_id": "system", 00:15:29.166 "dma_device_type": 1 00:15:29.166 }, 00:15:29.166 { 00:15:29.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.166 "dma_device_type": 2 00:15:29.166 } 00:15:29.166 ], 00:15:29.166 "driver_specific": {} 00:15:29.166 }' 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:29.166 02:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:29.166 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:29.166 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:29.166 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:29.166 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:29.166 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:29.426 "name": "BaseBdev2", 00:15:29.426 "aliases": [ 00:15:29.426 "1a5315bb-1261-11ef-99fd-bfc7c66e2865" 00:15:29.426 ], 00:15:29.426 "product_name": "Malloc disk", 00:15:29.426 "block_size": 512, 00:15:29.426 "num_blocks": 65536, 00:15:29.426 "uuid": "1a5315bb-1261-11ef-99fd-bfc7c66e2865", 00:15:29.426 "assigned_rate_limits": { 00:15:29.426 "rw_ios_per_sec": 0, 00:15:29.426 "rw_mbytes_per_sec": 0, 00:15:29.426 "r_mbytes_per_sec": 0, 00:15:29.426 "w_mbytes_per_sec": 0 00:15:29.426 }, 00:15:29.426 "claimed": true, 00:15:29.426 "claim_type": "exclusive_write", 00:15:29.426 "zoned": false, 00:15:29.426 "supported_io_types": { 00:15:29.426 "read": true, 00:15:29.426 "write": true, 00:15:29.426 "unmap": true, 00:15:29.426 "write_zeroes": true, 00:15:29.426 "flush": true, 00:15:29.426 "reset": true, 00:15:29.426 "compare": false, 00:15:29.426 "compare_and_write": false, 00:15:29.426 "abort": true, 00:15:29.426 "nvme_admin": false, 00:15:29.426 "nvme_io": false 00:15:29.426 }, 00:15:29.426 "memory_domains": [ 00:15:29.426 { 00:15:29.426 "dma_device_id": "system", 00:15:29.426 "dma_device_type": 1 00:15:29.426 }, 00:15:29.426 { 00:15:29.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.426 "dma_device_type": 2 00:15:29.426 } 00:15:29.426 ], 00:15:29.426 "driver_specific": {} 00:15:29.426 }' 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:29.426 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:29.684 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:29.684 "name": "BaseBdev3", 00:15:29.684 "aliases": [ 00:15:29.684 "1b3feb51-1261-11ef-99fd-bfc7c66e2865" 00:15:29.684 ], 00:15:29.684 "product_name": "Malloc disk", 00:15:29.684 "block_size": 512, 00:15:29.684 "num_blocks": 65536, 00:15:29.684 "uuid": "1b3feb51-1261-11ef-99fd-bfc7c66e2865", 00:15:29.684 "assigned_rate_limits": { 00:15:29.684 "rw_ios_per_sec": 0, 00:15:29.684 "rw_mbytes_per_sec": 0, 00:15:29.684 "r_mbytes_per_sec": 0, 00:15:29.684 "w_mbytes_per_sec": 0 00:15:29.684 }, 00:15:29.684 "claimed": true, 00:15:29.684 "claim_type": "exclusive_write", 00:15:29.684 "zoned": false, 00:15:29.684 "supported_io_types": { 00:15:29.684 "read": true, 00:15:29.684 "write": true, 00:15:29.684 "unmap": true, 00:15:29.684 "write_zeroes": true, 00:15:29.684 "flush": true, 00:15:29.684 "reset": true, 00:15:29.684 "compare": false, 00:15:29.684 "compare_and_write": false, 00:15:29.684 "abort": true, 00:15:29.684 "nvme_admin": false, 00:15:29.684 "nvme_io": false 00:15:29.684 }, 00:15:29.684 "memory_domains": [ 00:15:29.684 { 00:15:29.684 "dma_device_id": "system", 00:15:29.684 "dma_device_type": 1 00:15:29.684 }, 00:15:29.684 { 00:15:29.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.684 "dma_device_type": 2 00:15:29.684 } 00:15:29.684 ], 00:15:29.684 "driver_specific": {} 00:15:29.684 }' 00:15:29.684 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:29.684 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:29.684 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:29.685 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:29.943 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:29.943 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:29.943 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:29.943 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:29.943 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:29.944 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:29.944 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:29.944 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:29.944 02:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:30.203 [2024-05-15 02:16:17.987809] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.203 [2024-05-15 02:16:17.987837] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.203 [2024-05-15 02:16:17.987852] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.203 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.461 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.461 "name": "Existed_Raid", 00:15:30.461 "uuid": "1b3ff122-1261-11ef-99fd-bfc7c66e2865", 00:15:30.461 "strip_size_kb": 64, 00:15:30.461 "state": "offline", 00:15:30.461 "raid_level": "concat", 00:15:30.461 "superblock": false, 00:15:30.461 "num_base_bdevs": 3, 00:15:30.461 "num_base_bdevs_discovered": 2, 00:15:30.461 "num_base_bdevs_operational": 2, 00:15:30.461 "base_bdevs_list": [ 00:15:30.461 { 00:15:30.461 "name": null, 00:15:30.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.461 "is_configured": false, 00:15:30.461 "data_offset": 0, 00:15:30.461 "data_size": 65536 00:15:30.461 }, 00:15:30.461 { 00:15:30.461 "name": "BaseBdev2", 00:15:30.461 "uuid": "1a5315bb-1261-11ef-99fd-bfc7c66e2865", 00:15:30.461 "is_configured": true, 00:15:30.461 "data_offset": 0, 00:15:30.461 "data_size": 65536 00:15:30.461 }, 00:15:30.461 { 00:15:30.461 "name": "BaseBdev3", 00:15:30.461 "uuid": "1b3feb51-1261-11ef-99fd-bfc7c66e2865", 00:15:30.461 "is_configured": true, 00:15:30.461 "data_offset": 0, 00:15:30.461 "data_size": 65536 
00:15:30.461 } 00:15:30.461 ] 00:15:30.461 }' 00:15:30.461 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.461 02:16:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.719 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:30.719 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:30.719 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.719 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:30.978 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:30.978 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:30.978 02:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:31.236 [2024-05-15 02:16:19.180855] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:31.236 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:31.236 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:31.236 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.236 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:31.802 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:31.802 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:31.802 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:31.802 [2024-05-15 02:16:19.758018] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:31.802 [2024-05-15 02:16:19.758055] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c593a00 name Existed_Raid, state offline 00:15:31.803 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:31.803 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:31.803 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:15:31.803 02:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.062 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:15:32.062 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:15:32.062 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:15:32.062 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:15:32.062 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:32.062 02:16:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:32.320 BaseBdev2 00:15:32.579 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:15:32.579 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:32.579 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:32.579 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:32.579 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:32.579 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:32.579 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.837 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:33.096 [ 00:15:33.096 { 00:15:33.096 "name": "BaseBdev2", 00:15:33.096 "aliases": [ 00:15:33.096 "1e69c788-1261-11ef-99fd-bfc7c66e2865" 00:15:33.096 ], 00:15:33.096 "product_name": "Malloc disk", 00:15:33.096 "block_size": 512, 00:15:33.096 "num_blocks": 65536, 00:15:33.096 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:33.096 "assigned_rate_limits": { 00:15:33.096 "rw_ios_per_sec": 0, 00:15:33.096 "rw_mbytes_per_sec": 0, 00:15:33.096 "r_mbytes_per_sec": 0, 00:15:33.096 "w_mbytes_per_sec": 0 00:15:33.096 }, 00:15:33.096 "claimed": false, 00:15:33.096 "zoned": false, 00:15:33.096 "supported_io_types": { 00:15:33.096 "read": true, 00:15:33.096 "write": true, 00:15:33.096 "unmap": true, 00:15:33.096 "write_zeroes": true, 00:15:33.096 "flush": true, 00:15:33.096 "reset": true, 00:15:33.096 "compare": false, 00:15:33.096 "compare_and_write": false, 00:15:33.096 "abort": true, 00:15:33.096 "nvme_admin": false, 00:15:33.096 "nvme_io": false 00:15:33.096 }, 00:15:33.096 "memory_domains": [ 00:15:33.096 { 00:15:33.096 "dma_device_id": "system", 00:15:33.096 "dma_device_type": 1 00:15:33.096 }, 00:15:33.096 { 00:15:33.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.096 "dma_device_type": 2 00:15:33.096 } 00:15:33.096 ], 00:15:33.096 "driver_specific": {} 00:15:33.096 } 00:15:33.096 ] 00:15:33.096 02:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:33.096 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:33.096 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:33.096 02:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.355 BaseBdev3 00:15:33.355 02:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:15:33.355 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:33.355 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:33.355 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local i 00:15:33.355 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:33.355 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:33.355 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.614 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:33.873 [ 00:15:33.873 { 00:15:33.873 "name": "BaseBdev3", 00:15:33.873 "aliases": [ 00:15:33.873 "1ef281c0-1261-11ef-99fd-bfc7c66e2865" 00:15:33.873 ], 00:15:33.874 "product_name": "Malloc disk", 00:15:33.874 "block_size": 512, 00:15:33.874 "num_blocks": 65536, 00:15:33.874 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:33.874 "assigned_rate_limits": { 00:15:33.874 "rw_ios_per_sec": 0, 00:15:33.874 "rw_mbytes_per_sec": 0, 00:15:33.874 "r_mbytes_per_sec": 0, 00:15:33.874 "w_mbytes_per_sec": 0 00:15:33.874 }, 00:15:33.874 "claimed": false, 00:15:33.874 "zoned": false, 00:15:33.874 "supported_io_types": { 00:15:33.874 "read": true, 00:15:33.874 "write": true, 00:15:33.874 "unmap": true, 00:15:33.874 "write_zeroes": true, 00:15:33.874 "flush": true, 00:15:33.874 "reset": true, 00:15:33.874 "compare": false, 00:15:33.874 "compare_and_write": false, 00:15:33.874 "abort": true, 00:15:33.874 "nvme_admin": false, 00:15:33.874 "nvme_io": false 00:15:33.874 }, 00:15:33.874 "memory_domains": [ 00:15:33.874 { 00:15:33.874 "dma_device_id": "system", 00:15:33.874 "dma_device_type": 1 00:15:33.874 }, 00:15:33.874 { 00:15:33.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.874 "dma_device_type": 2 00:15:33.874 } 00:15:33.874 ], 00:15:33.874 "driver_specific": {} 00:15:33.874 } 00:15:33.874 ] 00:15:33.874 02:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:33.874 02:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:33.874 02:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:33.874 02:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:34.134 [2024-05-15 02:16:22.071243] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.134 [2024-05-15 02:16:22.071303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.134 [2024-05-15 02:16:22.071313] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.134 [2024-05-15 02:16:22.071777] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.134 02:16:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.134 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.393 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.393 "name": "Existed_Raid", 00:15:34.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.393 "strip_size_kb": 64, 00:15:34.393 "state": "configuring", 00:15:34.393 "raid_level": "concat", 00:15:34.393 "superblock": false, 00:15:34.393 "num_base_bdevs": 3, 00:15:34.393 "num_base_bdevs_discovered": 2, 00:15:34.393 "num_base_bdevs_operational": 3, 00:15:34.393 "base_bdevs_list": [ 00:15:34.393 { 00:15:34.393 "name": "BaseBdev1", 00:15:34.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.393 "is_configured": false, 00:15:34.393 "data_offset": 0, 00:15:34.393 "data_size": 0 00:15:34.393 }, 00:15:34.393 { 00:15:34.393 "name": "BaseBdev2", 00:15:34.393 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:34.393 "is_configured": true, 00:15:34.393 "data_offset": 0, 00:15:34.393 "data_size": 65536 00:15:34.393 }, 00:15:34.393 { 00:15:34.393 "name": "BaseBdev3", 00:15:34.393 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:34.393 "is_configured": true, 00:15:34.393 "data_offset": 0, 00:15:34.393 "data_size": 65536 00:15:34.393 } 00:15:34.393 ] 00:15:34.393 }' 00:15:34.393 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.393 02:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.960 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:34.960 [2024-05-15 02:16:22.959291] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.235 02:16:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.235 02:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.494 02:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.494 "name": "Existed_Raid", 00:15:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.494 "strip_size_kb": 64, 00:15:35.494 "state": "configuring", 00:15:35.494 "raid_level": "concat", 00:15:35.494 "superblock": false, 00:15:35.494 "num_base_bdevs": 3, 00:15:35.494 "num_base_bdevs_discovered": 1, 00:15:35.494 "num_base_bdevs_operational": 3, 00:15:35.494 "base_bdevs_list": [ 00:15:35.494 { 00:15:35.494 "name": "BaseBdev1", 00:15:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.494 "is_configured": false, 00:15:35.494 "data_offset": 0, 00:15:35.494 "data_size": 0 00:15:35.494 }, 00:15:35.494 { 00:15:35.494 "name": null, 00:15:35.494 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:35.494 "is_configured": false, 00:15:35.494 "data_offset": 0, 00:15:35.494 "data_size": 65536 00:15:35.494 }, 00:15:35.494 { 00:15:35.494 "name": "BaseBdev3", 00:15:35.494 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:35.494 "is_configured": true, 00:15:35.494 "data_offset": 0, 00:15:35.494 "data_size": 65536 00:15:35.494 } 00:15:35.494 ] 00:15:35.494 }' 00:15:35.494 02:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.494 02:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.753 02:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.753 02:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:36.010 02:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:15:36.011 02:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.269 [2024-05-15 02:16:24.279521] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.269 BaseBdev1 00:15:36.527 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:15:36.527 02:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:36.527 02:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:36.527 02:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:36.527 02:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:36.527 02:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:36.527 02:16:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.785 02:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:37.043 [ 00:15:37.043 { 00:15:37.043 "name": "BaseBdev1", 00:15:37.043 "aliases": [ 00:15:37.043 "20c6ae4b-1261-11ef-99fd-bfc7c66e2865" 00:15:37.043 ], 00:15:37.043 "product_name": "Malloc disk", 00:15:37.043 "block_size": 512, 00:15:37.043 "num_blocks": 65536, 00:15:37.043 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:37.043 "assigned_rate_limits": { 00:15:37.043 "rw_ios_per_sec": 0, 00:15:37.043 "rw_mbytes_per_sec": 0, 00:15:37.043 "r_mbytes_per_sec": 0, 00:15:37.043 "w_mbytes_per_sec": 0 00:15:37.043 }, 00:15:37.043 "claimed": true, 00:15:37.043 "claim_type": "exclusive_write", 00:15:37.043 "zoned": false, 00:15:37.043 "supported_io_types": { 00:15:37.043 "read": true, 00:15:37.043 "write": true, 00:15:37.043 "unmap": true, 00:15:37.043 "write_zeroes": true, 00:15:37.043 "flush": true, 00:15:37.043 "reset": true, 00:15:37.043 "compare": false, 00:15:37.043 "compare_and_write": false, 00:15:37.043 "abort": true, 00:15:37.043 "nvme_admin": false, 00:15:37.043 "nvme_io": false 00:15:37.043 }, 00:15:37.043 "memory_domains": [ 00:15:37.043 { 00:15:37.043 "dma_device_id": "system", 00:15:37.043 "dma_device_type": 1 00:15:37.043 }, 00:15:37.043 { 00:15:37.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.043 "dma_device_type": 2 00:15:37.043 } 00:15:37.043 ], 00:15:37.043 "driver_specific": {} 00:15:37.043 } 00:15:37.043 ] 00:15:37.043 02:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:37.043 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:37.043 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.043 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.044 02:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.303 02:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.303 "name": "Existed_Raid", 00:15:37.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.303 "strip_size_kb": 64, 00:15:37.303 "state": "configuring", 
00:15:37.303 "raid_level": "concat", 00:15:37.303 "superblock": false, 00:15:37.303 "num_base_bdevs": 3, 00:15:37.303 "num_base_bdevs_discovered": 2, 00:15:37.303 "num_base_bdevs_operational": 3, 00:15:37.303 "base_bdevs_list": [ 00:15:37.303 { 00:15:37.303 "name": "BaseBdev1", 00:15:37.303 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:37.303 "is_configured": true, 00:15:37.303 "data_offset": 0, 00:15:37.303 "data_size": 65536 00:15:37.303 }, 00:15:37.303 { 00:15:37.303 "name": null, 00:15:37.303 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:37.303 "is_configured": false, 00:15:37.303 "data_offset": 0, 00:15:37.303 "data_size": 65536 00:15:37.303 }, 00:15:37.303 { 00:15:37.303 "name": "BaseBdev3", 00:15:37.303 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:37.303 "is_configured": true, 00:15:37.303 "data_offset": 0, 00:15:37.303 "data_size": 65536 00:15:37.303 } 00:15:37.303 ] 00:15:37.303 }' 00:15:37.303 02:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.303 02:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.560 02:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.560 02:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:38.125 02:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:38.125 02:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:38.382 [2024-05-15 02:16:26.155539] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.382 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.637 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.637 "name": "Existed_Raid", 00:15:38.637 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:38.637 "strip_size_kb": 64, 00:15:38.637 "state": "configuring", 00:15:38.637 "raid_level": "concat", 00:15:38.637 "superblock": false, 00:15:38.637 "num_base_bdevs": 3, 00:15:38.637 "num_base_bdevs_discovered": 1, 00:15:38.637 "num_base_bdevs_operational": 3, 00:15:38.637 "base_bdevs_list": [ 00:15:38.637 { 00:15:38.637 "name": "BaseBdev1", 00:15:38.637 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:38.637 "is_configured": true, 00:15:38.637 "data_offset": 0, 00:15:38.637 "data_size": 65536 00:15:38.637 }, 00:15:38.637 { 00:15:38.637 "name": null, 00:15:38.637 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:38.637 "is_configured": false, 00:15:38.637 "data_offset": 0, 00:15:38.637 "data_size": 65536 00:15:38.637 }, 00:15:38.637 { 00:15:38.637 "name": null, 00:15:38.637 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:38.637 "is_configured": false, 00:15:38.637 "data_offset": 0, 00:15:38.637 "data_size": 65536 00:15:38.637 } 00:15:38.637 ] 00:15:38.637 }' 00:15:38.637 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.637 02:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.894 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.894 02:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.457 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:15:39.457 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:39.714 [2024-05-15 02:16:27.483654] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.714 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.971 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:15:39.971 "name": "Existed_Raid", 00:15:39.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.971 "strip_size_kb": 64, 00:15:39.971 "state": "configuring", 00:15:39.971 "raid_level": "concat", 00:15:39.971 "superblock": false, 00:15:39.971 "num_base_bdevs": 3, 00:15:39.971 "num_base_bdevs_discovered": 2, 00:15:39.971 "num_base_bdevs_operational": 3, 00:15:39.971 "base_bdevs_list": [ 00:15:39.971 { 00:15:39.971 "name": "BaseBdev1", 00:15:39.971 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:39.971 "is_configured": true, 00:15:39.971 "data_offset": 0, 00:15:39.971 "data_size": 65536 00:15:39.971 }, 00:15:39.971 { 00:15:39.971 "name": null, 00:15:39.971 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:39.971 "is_configured": false, 00:15:39.971 "data_offset": 0, 00:15:39.971 "data_size": 65536 00:15:39.971 }, 00:15:39.971 { 00:15:39.971 "name": "BaseBdev3", 00:15:39.971 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:39.971 "is_configured": true, 00:15:39.971 "data_offset": 0, 00:15:39.971 "data_size": 65536 00:15:39.971 } 00:15:39.971 ] 00:15:39.971 }' 00:15:39.971 02:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.971 02:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.229 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.229 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.487 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:15:40.487 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:40.745 [2024-05-15 02:16:28.663701] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.745 02:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.311 02:16:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.311 "name": "Existed_Raid", 00:15:41.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.311 "strip_size_kb": 64, 00:15:41.311 "state": "configuring", 00:15:41.311 "raid_level": "concat", 00:15:41.311 "superblock": false, 00:15:41.311 "num_base_bdevs": 3, 00:15:41.311 "num_base_bdevs_discovered": 1, 00:15:41.311 "num_base_bdevs_operational": 3, 00:15:41.311 "base_bdevs_list": [ 00:15:41.311 { 00:15:41.311 "name": null, 00:15:41.311 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:41.311 "is_configured": false, 00:15:41.311 "data_offset": 0, 00:15:41.311 "data_size": 65536 00:15:41.311 }, 00:15:41.311 { 00:15:41.311 "name": null, 00:15:41.311 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:41.311 "is_configured": false, 00:15:41.311 "data_offset": 0, 00:15:41.311 "data_size": 65536 00:15:41.311 }, 00:15:41.311 { 00:15:41.311 "name": "BaseBdev3", 00:15:41.311 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:41.311 "is_configured": true, 00:15:41.311 "data_offset": 0, 00:15:41.311 "data_size": 65536 00:15:41.311 } 00:15:41.311 ] 00:15:41.311 }' 00:15:41.311 02:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.311 02:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.570 02:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.570 02:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.829 02:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:15:41.829 02:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:42.088 [2024-05-15 02:16:30.045263] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.088 02:16:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.655 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.655 "name": "Existed_Raid", 00:15:42.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.655 "strip_size_kb": 64, 00:15:42.655 "state": "configuring", 00:15:42.655 "raid_level": "concat", 00:15:42.655 "superblock": false, 00:15:42.655 "num_base_bdevs": 3, 00:15:42.655 "num_base_bdevs_discovered": 2, 00:15:42.655 "num_base_bdevs_operational": 3, 00:15:42.655 "base_bdevs_list": [ 00:15:42.655 { 00:15:42.655 "name": null, 00:15:42.655 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:42.655 "is_configured": false, 00:15:42.655 "data_offset": 0, 00:15:42.655 "data_size": 65536 00:15:42.655 }, 00:15:42.655 { 00:15:42.655 "name": "BaseBdev2", 00:15:42.655 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:42.655 "is_configured": true, 00:15:42.655 "data_offset": 0, 00:15:42.655 "data_size": 65536 00:15:42.655 }, 00:15:42.655 { 00:15:42.655 "name": "BaseBdev3", 00:15:42.655 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:42.655 "is_configured": true, 00:15:42.655 "data_offset": 0, 00:15:42.655 "data_size": 65536 00:15:42.655 } 00:15:42.655 ] 00:15:42.655 }' 00:15:42.655 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.655 02:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.914 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.914 02:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:43.171 02:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:15:43.171 02:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.171 02:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:43.429 02:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 20c6ae4b-1261-11ef-99fd-bfc7c66e2865 00:15:44.041 [2024-05-15 02:16:31.777396] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:44.041 [2024-05-15 02:16:31.777456] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c593a00 00:15:44.041 [2024-05-15 02:16:31.777465] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:44.041 [2024-05-15 02:16:31.777506] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c5f6e20 00:15:44.041 [2024-05-15 02:16:31.777594] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c593a00 00:15:44.041 [2024-05-15 02:16:31.777602] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c593a00 00:15:44.041 [2024-05-15 02:16:31.777652] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.041 NewBaseBdev 00:15:44.041 02:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:15:44.041 02:16:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:15:44.041 02:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:44.041 02:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:44.041 02:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:44.041 02:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:44.041 02:16:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.300 02:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:44.559 [ 00:15:44.559 { 00:15:44.559 "name": "NewBaseBdev", 00:15:44.559 "aliases": [ 00:15:44.559 "20c6ae4b-1261-11ef-99fd-bfc7c66e2865" 00:15:44.559 ], 00:15:44.559 "product_name": "Malloc disk", 00:15:44.559 "block_size": 512, 00:15:44.559 "num_blocks": 65536, 00:15:44.559 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:44.559 "assigned_rate_limits": { 00:15:44.559 "rw_ios_per_sec": 0, 00:15:44.559 "rw_mbytes_per_sec": 0, 00:15:44.559 "r_mbytes_per_sec": 0, 00:15:44.559 "w_mbytes_per_sec": 0 00:15:44.559 }, 00:15:44.559 "claimed": true, 00:15:44.559 "claim_type": "exclusive_write", 00:15:44.559 "zoned": false, 00:15:44.559 "supported_io_types": { 00:15:44.559 "read": true, 00:15:44.559 "write": true, 00:15:44.559 "unmap": true, 00:15:44.559 "write_zeroes": true, 00:15:44.559 "flush": true, 00:15:44.559 "reset": true, 00:15:44.559 "compare": false, 00:15:44.559 "compare_and_write": false, 00:15:44.559 "abort": true, 00:15:44.559 "nvme_admin": false, 00:15:44.559 "nvme_io": false 00:15:44.559 }, 00:15:44.559 "memory_domains": [ 00:15:44.559 { 00:15:44.559 "dma_device_id": "system", 00:15:44.559 "dma_device_type": 1 00:15:44.559 }, 00:15:44.559 { 00:15:44.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.559 "dma_device_type": 2 00:15:44.559 } 00:15:44.559 ], 00:15:44.559 "driver_specific": {} 00:15:44.559 } 00:15:44.559 ] 00:15:44.559 02:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:44.559 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:44.559 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.560 02:16:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.560 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.817 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.817 "name": "Existed_Raid", 00:15:44.817 "uuid": "253ecbc7-1261-11ef-99fd-bfc7c66e2865", 00:15:44.817 "strip_size_kb": 64, 00:15:44.817 "state": "online", 00:15:44.817 "raid_level": "concat", 00:15:44.817 "superblock": false, 00:15:44.817 "num_base_bdevs": 3, 00:15:44.817 "num_base_bdevs_discovered": 3, 00:15:44.817 "num_base_bdevs_operational": 3, 00:15:44.817 "base_bdevs_list": [ 00:15:44.817 { 00:15:44.817 "name": "NewBaseBdev", 00:15:44.817 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:44.817 "is_configured": true, 00:15:44.817 "data_offset": 0, 00:15:44.817 "data_size": 65536 00:15:44.817 }, 00:15:44.817 { 00:15:44.817 "name": "BaseBdev2", 00:15:44.817 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:44.818 "is_configured": true, 00:15:44.818 "data_offset": 0, 00:15:44.818 "data_size": 65536 00:15:44.818 }, 00:15:44.818 { 00:15:44.818 "name": "BaseBdev3", 00:15:44.818 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:44.818 "is_configured": true, 00:15:44.818 "data_offset": 0, 00:15:44.818 "data_size": 65536 00:15:44.818 } 00:15:44.818 ] 00:15:44.818 }' 00:15:44.818 02:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.818 02:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:45.385 [2024-05-15 02:16:33.369199] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:45.385 "name": "Existed_Raid", 00:15:45.385 "aliases": [ 00:15:45.385 "253ecbc7-1261-11ef-99fd-bfc7c66e2865" 00:15:45.385 ], 00:15:45.385 "product_name": "Raid Volume", 00:15:45.385 "block_size": 512, 00:15:45.385 "num_blocks": 196608, 00:15:45.385 "uuid": "253ecbc7-1261-11ef-99fd-bfc7c66e2865", 00:15:45.385 "assigned_rate_limits": { 00:15:45.385 "rw_ios_per_sec": 0, 00:15:45.385 "rw_mbytes_per_sec": 0, 00:15:45.385 "r_mbytes_per_sec": 0, 00:15:45.385 "w_mbytes_per_sec": 0 00:15:45.385 }, 00:15:45.385 "claimed": false, 00:15:45.385 "zoned": false, 00:15:45.385 "supported_io_types": { 00:15:45.385 "read": true, 00:15:45.385 "write": true, 
00:15:45.385 "unmap": true, 00:15:45.385 "write_zeroes": true, 00:15:45.385 "flush": true, 00:15:45.385 "reset": true, 00:15:45.385 "compare": false, 00:15:45.385 "compare_and_write": false, 00:15:45.385 "abort": false, 00:15:45.385 "nvme_admin": false, 00:15:45.385 "nvme_io": false 00:15:45.385 }, 00:15:45.385 "memory_domains": [ 00:15:45.385 { 00:15:45.385 "dma_device_id": "system", 00:15:45.385 "dma_device_type": 1 00:15:45.385 }, 00:15:45.385 { 00:15:45.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.385 "dma_device_type": 2 00:15:45.385 }, 00:15:45.385 { 00:15:45.385 "dma_device_id": "system", 00:15:45.385 "dma_device_type": 1 00:15:45.385 }, 00:15:45.385 { 00:15:45.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.385 "dma_device_type": 2 00:15:45.385 }, 00:15:45.385 { 00:15:45.385 "dma_device_id": "system", 00:15:45.385 "dma_device_type": 1 00:15:45.385 }, 00:15:45.385 { 00:15:45.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.385 "dma_device_type": 2 00:15:45.385 } 00:15:45.385 ], 00:15:45.385 "driver_specific": { 00:15:45.385 "raid": { 00:15:45.385 "uuid": "253ecbc7-1261-11ef-99fd-bfc7c66e2865", 00:15:45.385 "strip_size_kb": 64, 00:15:45.385 "state": "online", 00:15:45.385 "raid_level": "concat", 00:15:45.385 "superblock": false, 00:15:45.385 "num_base_bdevs": 3, 00:15:45.385 "num_base_bdevs_discovered": 3, 00:15:45.385 "num_base_bdevs_operational": 3, 00:15:45.385 "base_bdevs_list": [ 00:15:45.385 { 00:15:45.385 "name": "NewBaseBdev", 00:15:45.385 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:45.385 "is_configured": true, 00:15:45.385 "data_offset": 0, 00:15:45.385 "data_size": 65536 00:15:45.385 }, 00:15:45.385 { 00:15:45.385 "name": "BaseBdev2", 00:15:45.385 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:45.385 "is_configured": true, 00:15:45.385 "data_offset": 0, 00:15:45.385 "data_size": 65536 00:15:45.385 }, 00:15:45.385 { 00:15:45.385 "name": "BaseBdev3", 00:15:45.385 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:45.385 "is_configured": true, 00:15:45.385 "data_offset": 0, 00:15:45.385 "data_size": 65536 00:15:45.385 } 00:15:45.385 ] 00:15:45.385 } 00:15:45.385 } 00:15:45.385 }' 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.385 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:15:45.385 BaseBdev2 00:15:45.385 BaseBdev3' 00:15:45.642 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:45.642 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:45.642 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:45.899 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:45.899 "name": "NewBaseBdev", 00:15:45.899 "aliases": [ 00:15:45.899 "20c6ae4b-1261-11ef-99fd-bfc7c66e2865" 00:15:45.899 ], 00:15:45.899 "product_name": "Malloc disk", 00:15:45.899 "block_size": 512, 00:15:45.899 "num_blocks": 65536, 00:15:45.899 "uuid": "20c6ae4b-1261-11ef-99fd-bfc7c66e2865", 00:15:45.899 "assigned_rate_limits": { 00:15:45.899 "rw_ios_per_sec": 0, 00:15:45.899 "rw_mbytes_per_sec": 0, 00:15:45.899 "r_mbytes_per_sec": 0, 00:15:45.899 "w_mbytes_per_sec": 0 00:15:45.899 }, 00:15:45.899 "claimed": true, 
00:15:45.900 "claim_type": "exclusive_write", 00:15:45.900 "zoned": false, 00:15:45.900 "supported_io_types": { 00:15:45.900 "read": true, 00:15:45.900 "write": true, 00:15:45.900 "unmap": true, 00:15:45.900 "write_zeroes": true, 00:15:45.900 "flush": true, 00:15:45.900 "reset": true, 00:15:45.900 "compare": false, 00:15:45.900 "compare_and_write": false, 00:15:45.900 "abort": true, 00:15:45.900 "nvme_admin": false, 00:15:45.900 "nvme_io": false 00:15:45.900 }, 00:15:45.900 "memory_domains": [ 00:15:45.900 { 00:15:45.900 "dma_device_id": "system", 00:15:45.900 "dma_device_type": 1 00:15:45.900 }, 00:15:45.900 { 00:15:45.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.900 "dma_device_type": 2 00:15:45.900 } 00:15:45.900 ], 00:15:45.900 "driver_specific": {} 00:15:45.900 }' 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:45.900 02:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:46.158 "name": "BaseBdev2", 00:15:46.158 "aliases": [ 00:15:46.158 "1e69c788-1261-11ef-99fd-bfc7c66e2865" 00:15:46.158 ], 00:15:46.158 "product_name": "Malloc disk", 00:15:46.158 "block_size": 512, 00:15:46.158 "num_blocks": 65536, 00:15:46.158 "uuid": "1e69c788-1261-11ef-99fd-bfc7c66e2865", 00:15:46.158 "assigned_rate_limits": { 00:15:46.158 "rw_ios_per_sec": 0, 00:15:46.158 "rw_mbytes_per_sec": 0, 00:15:46.158 "r_mbytes_per_sec": 0, 00:15:46.158 "w_mbytes_per_sec": 0 00:15:46.158 }, 00:15:46.158 "claimed": true, 00:15:46.158 "claim_type": "exclusive_write", 00:15:46.158 "zoned": false, 00:15:46.158 "supported_io_types": { 00:15:46.158 "read": true, 00:15:46.158 "write": true, 00:15:46.158 "unmap": true, 00:15:46.158 "write_zeroes": true, 00:15:46.158 "flush": true, 00:15:46.158 "reset": true, 00:15:46.158 "compare": false, 00:15:46.158 "compare_and_write": false, 00:15:46.158 "abort": true, 00:15:46.158 "nvme_admin": false, 00:15:46.158 "nvme_io": false 00:15:46.158 }, 00:15:46.158 "memory_domains": 
[ 00:15:46.158 { 00:15:46.158 "dma_device_id": "system", 00:15:46.158 "dma_device_type": 1 00:15:46.158 }, 00:15:46.158 { 00:15:46.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.158 "dma_device_type": 2 00:15:46.158 } 00:15:46.158 ], 00:15:46.158 "driver_specific": {} 00:15:46.158 }' 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:46.158 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:46.417 "name": "BaseBdev3", 00:15:46.417 "aliases": [ 00:15:46.417 "1ef281c0-1261-11ef-99fd-bfc7c66e2865" 00:15:46.417 ], 00:15:46.417 "product_name": "Malloc disk", 00:15:46.417 "block_size": 512, 00:15:46.417 "num_blocks": 65536, 00:15:46.417 "uuid": "1ef281c0-1261-11ef-99fd-bfc7c66e2865", 00:15:46.417 "assigned_rate_limits": { 00:15:46.417 "rw_ios_per_sec": 0, 00:15:46.417 "rw_mbytes_per_sec": 0, 00:15:46.417 "r_mbytes_per_sec": 0, 00:15:46.417 "w_mbytes_per_sec": 0 00:15:46.417 }, 00:15:46.417 "claimed": true, 00:15:46.417 "claim_type": "exclusive_write", 00:15:46.417 "zoned": false, 00:15:46.417 "supported_io_types": { 00:15:46.417 "read": true, 00:15:46.417 "write": true, 00:15:46.417 "unmap": true, 00:15:46.417 "write_zeroes": true, 00:15:46.417 "flush": true, 00:15:46.417 "reset": true, 00:15:46.417 "compare": false, 00:15:46.417 "compare_and_write": false, 00:15:46.417 "abort": true, 00:15:46.417 "nvme_admin": false, 00:15:46.417 "nvme_io": false 00:15:46.417 }, 00:15:46.417 "memory_domains": [ 00:15:46.417 { 00:15:46.417 "dma_device_id": "system", 00:15:46.417 "dma_device_type": 1 00:15:46.417 }, 00:15:46.417 { 00:15:46.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.417 "dma_device_type": 2 00:15:46.417 } 00:15:46.417 ], 00:15:46.417 "driver_specific": {} 00:15:46.417 }' 00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 
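The trace above and just below is the per-member property check of verify_raid_bdev_properties: for each configured base bdev of Existed_Raid the harness fetches the bdev with bdev_get_bdevs and compares block_size, md_size, md_interleave and dif_type against the raid volume. A minimal standalone sketch of that loop follows; it is illustrative only (not the verbatim helper from bdev_raid.sh), and it assumes the same RPC socket /var/tmp/spdk-raid.sock and a relative scripts/rpc.py path.

    # Illustrative sketch of the property-verification loop seen in the trace.
    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"   # assumed relative path to rpc.py

    raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    base_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")

    for name in $base_names; do
        base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # Each member bdev must report the same geometry/metadata settings as the raid volume.
        for prop in .block_size .md_size .md_interleave .dif_type; do
            [[ $(jq "$prop" <<< "$base_info") == $(jq "$prop" <<< "$raid_info") ]] || exit 1
        done
    done

The jq filter for the configured member names is the same one the trace prints; the exit-on-mismatch is how the comparisons behave as assertions in this sketch.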
00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.417 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:46.686 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:46.686 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.686 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:46.686 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:46.686 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:46.686 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.945 [2024-05-15 02:16:34.749163] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.945 [2024-05-15 02:16:34.749207] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.945 [2024-05-15 02:16:34.749231] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.945 [2024-05-15 02:16:34.749245] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.945 [2024-05-15 02:16:34.749250] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c593a00 name Existed_Raid, state offline 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 53375 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 53375 ']' 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 53375 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 53375 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:15:46.945 killing process with pid 53375 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53375' 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 53375 00:15:46.945 [2024-05-15 02:16:34.779652] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.945 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 53375 00:15:46.945 [2024-05-15 02:16:34.794877] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.204 02:16:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:15:47.204 00:15:47.204 real 0m27.692s 00:15:47.204 user 0m50.924s 00:15:47.204 sys 0m3.669s 00:15:47.204 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:47.204 02:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.204 ************************************ 00:15:47.204 END TEST raid_state_function_test 00:15:47.204 ************************************ 00:15:47.204 02:16:34 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:47.204 02:16:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:47.205 02:16:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:47.205 02:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.205 ************************************ 00:15:47.205 START TEST raid_state_function_test_sb 00:15:47.205 ************************************ 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 
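The entries that follow show raid_state_function_test being re-run in its superblock variant: for the concat level it picks strip_size=64 (so the create argument becomes '-z 64') and, because superblock=true, it adds '-s' to bdev_raid_create. A rough sketch of that argument selection, with names mirroring the xtrace output and the conditional structure inferred from it (an assumption, not the literal script text), looks like this:

    # Sketch of the create-argument selection visible in the trace below.
    raid_level=concat
    superblock=true

    strip_size_create_arg=""
    if [ "$raid_level" != "raid1" ]; then
        strip_size=64                        # strip size in KB for striped levels
        strip_size_create_arg="-z $strip_size"
    fi

    superblock_create_arg=""
    if [ "$superblock" = true ]; then
        superblock_create_arg="-s"           # ask bdev_raid_create to write a superblock
    fi

    # Later used roughly as the trace shows:
    #   rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create $strip_size_create_arg \
    #       $superblock_create_arg -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid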
00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=54116 00:15:47.205 Process raid pid: 54116 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 54116' 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 54116 /var/tmp/spdk-raid.sock 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 54116 ']' 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:47.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:47.205 02:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.205 [2024-05-15 02:16:35.019096] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:15:47.205 [2024-05-15 02:16:35.019357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:47.772 EAL: TSC is not safe to use in SMP mode 00:15:47.772 EAL: TSC is not invariant 00:15:47.772 [2024-05-15 02:16:35.508863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.772 [2024-05-15 02:16:35.599472] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
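The EAL and app notices here come from the freshly launched bdev_svc instance (pid 54116) that raid_state_function_test_sb started on its own RPC socket; the harness's waitforlisten call is polling that socket while these messages are printed. The waitforlisten implementation itself is not shown in this excerpt; a hypothetical minimal version of the same idea, polling until an RPC succeeds, could look like the sketch below (the real helper in autotest_common.sh may differ in timeout handling and other details).

    # Hypothetical stand-in for waitforlisten as used above -- illustrative only.
    wait_for_rpc_socket() {
        local pid=$1 sock=$2 i=0
        while (( i++ < 100 )); do
            kill -0 "$pid" 2>/dev/null || return 1          # target process died during startup
            if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                                    # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }

    # Usage matching the shape of the trace (paths as printed in the log):
    #   /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    #   raid_pid=$!
    #   wait_for_rpc_socket "$raid_pid" /var/tmp/spdk-raid.sock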
00:15:47.772 [2024-05-15 02:16:35.601703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.772 [2024-05-15 02:16:35.602435] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.772 [2024-05-15 02:16:35.602451] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.338 02:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:48.338 02:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:48.338 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:48.338 [2024-05-15 02:16:36.354528] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.338 [2024-05-15 02:16:36.354592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.338 [2024-05-15 02:16:36.354597] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.338 [2024-05-15 02:16:36.354606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.338 [2024-05-15 02:16:36.354609] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:48.338 [2024-05-15 02:16:36.354617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.596 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.855 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.855 "name": "Existed_Raid", 00:15:48.855 "uuid": "27f9327a-1261-11ef-99fd-bfc7c66e2865", 00:15:48.855 "strip_size_kb": 64, 00:15:48.855 "state": "configuring", 00:15:48.855 "raid_level": "concat", 00:15:48.855 "superblock": true, 00:15:48.855 "num_base_bdevs": 3, 00:15:48.855 "num_base_bdevs_discovered": 0, 00:15:48.855 
"num_base_bdevs_operational": 3, 00:15:48.855 "base_bdevs_list": [ 00:15:48.855 { 00:15:48.855 "name": "BaseBdev1", 00:15:48.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.855 "is_configured": false, 00:15:48.855 "data_offset": 0, 00:15:48.855 "data_size": 0 00:15:48.855 }, 00:15:48.855 { 00:15:48.855 "name": "BaseBdev2", 00:15:48.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.855 "is_configured": false, 00:15:48.855 "data_offset": 0, 00:15:48.855 "data_size": 0 00:15:48.855 }, 00:15:48.855 { 00:15:48.855 "name": "BaseBdev3", 00:15:48.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.855 "is_configured": false, 00:15:48.855 "data_offset": 0, 00:15:48.855 "data_size": 0 00:15:48.855 } 00:15:48.855 ] 00:15:48.855 }' 00:15:48.855 02:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.855 02:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.113 02:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:49.374 [2024-05-15 02:16:37.294475] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.374 [2024-05-15 02:16:37.294511] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7a8500 name Existed_Raid, state configuring 00:15:49.374 02:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:49.647 [2024-05-15 02:16:37.518484] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.647 [2024-05-15 02:16:37.518555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.647 [2024-05-15 02:16:37.518565] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.647 [2024-05-15 02:16:37.518578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.647 [2024-05-15 02:16:37.518583] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.647 [2024-05-15 02:16:37.518594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.647 02:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.906 [2024-05-15 02:16:37.779467] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.906 BaseBdev1 00:15:49.906 02:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:49.906 02:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:49.906 02:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:49.906 02:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:49.906 02:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:49.906 02:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:49.906 02:16:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.163 02:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.421 [ 00:15:50.421 { 00:15:50.421 "name": "BaseBdev1", 00:15:50.421 "aliases": [ 00:15:50.421 "28d279a4-1261-11ef-99fd-bfc7c66e2865" 00:15:50.421 ], 00:15:50.421 "product_name": "Malloc disk", 00:15:50.421 "block_size": 512, 00:15:50.421 "num_blocks": 65536, 00:15:50.421 "uuid": "28d279a4-1261-11ef-99fd-bfc7c66e2865", 00:15:50.421 "assigned_rate_limits": { 00:15:50.421 "rw_ios_per_sec": 0, 00:15:50.421 "rw_mbytes_per_sec": 0, 00:15:50.421 "r_mbytes_per_sec": 0, 00:15:50.421 "w_mbytes_per_sec": 0 00:15:50.421 }, 00:15:50.421 "claimed": true, 00:15:50.421 "claim_type": "exclusive_write", 00:15:50.421 "zoned": false, 00:15:50.421 "supported_io_types": { 00:15:50.421 "read": true, 00:15:50.421 "write": true, 00:15:50.421 "unmap": true, 00:15:50.421 "write_zeroes": true, 00:15:50.421 "flush": true, 00:15:50.421 "reset": true, 00:15:50.421 "compare": false, 00:15:50.421 "compare_and_write": false, 00:15:50.421 "abort": true, 00:15:50.421 "nvme_admin": false, 00:15:50.421 "nvme_io": false 00:15:50.421 }, 00:15:50.421 "memory_domains": [ 00:15:50.421 { 00:15:50.421 "dma_device_id": "system", 00:15:50.421 "dma_device_type": 1 00:15:50.421 }, 00:15:50.421 { 00:15:50.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.421 "dma_device_type": 2 00:15:50.421 } 00:15:50.421 ], 00:15:50.421 "driver_specific": {} 00:15:50.421 } 00:15:50.421 ] 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.421 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.679 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.679 "name": "Existed_Raid", 00:15:50.679 "uuid": 
"28aacd7a-1261-11ef-99fd-bfc7c66e2865", 00:15:50.679 "strip_size_kb": 64, 00:15:50.679 "state": "configuring", 00:15:50.679 "raid_level": "concat", 00:15:50.679 "superblock": true, 00:15:50.679 "num_base_bdevs": 3, 00:15:50.679 "num_base_bdevs_discovered": 1, 00:15:50.679 "num_base_bdevs_operational": 3, 00:15:50.679 "base_bdevs_list": [ 00:15:50.679 { 00:15:50.679 "name": "BaseBdev1", 00:15:50.679 "uuid": "28d279a4-1261-11ef-99fd-bfc7c66e2865", 00:15:50.679 "is_configured": true, 00:15:50.679 "data_offset": 2048, 00:15:50.679 "data_size": 63488 00:15:50.679 }, 00:15:50.679 { 00:15:50.679 "name": "BaseBdev2", 00:15:50.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.679 "is_configured": false, 00:15:50.679 "data_offset": 0, 00:15:50.679 "data_size": 0 00:15:50.679 }, 00:15:50.679 { 00:15:50.679 "name": "BaseBdev3", 00:15:50.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.679 "is_configured": false, 00:15:50.679 "data_offset": 0, 00:15:50.679 "data_size": 0 00:15:50.679 } 00:15:50.679 ] 00:15:50.679 }' 00:15:50.679 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.679 02:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.937 02:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:51.503 [2024-05-15 02:16:39.270465] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.503 [2024-05-15 02:16:39.270523] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7a8500 name Existed_Raid, state configuring 00:15:51.503 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:51.761 [2024-05-15 02:16:39.534470] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.761 [2024-05-15 02:16:39.535203] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.761 [2024-05-15 02:16:39.535248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.761 [2024-05-15 02:16:39.535254] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.761 [2024-05-15 02:16:39.535262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.762 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.020 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.020 "name": "Existed_Raid", 00:15:52.020 "uuid": "29de6abd-1261-11ef-99fd-bfc7c66e2865", 00:15:52.020 "strip_size_kb": 64, 00:15:52.020 "state": "configuring", 00:15:52.020 "raid_level": "concat", 00:15:52.020 "superblock": true, 00:15:52.020 "num_base_bdevs": 3, 00:15:52.020 "num_base_bdevs_discovered": 1, 00:15:52.020 "num_base_bdevs_operational": 3, 00:15:52.020 "base_bdevs_list": [ 00:15:52.020 { 00:15:52.020 "name": "BaseBdev1", 00:15:52.020 "uuid": "28d279a4-1261-11ef-99fd-bfc7c66e2865", 00:15:52.020 "is_configured": true, 00:15:52.020 "data_offset": 2048, 00:15:52.020 "data_size": 63488 00:15:52.020 }, 00:15:52.020 { 00:15:52.020 "name": "BaseBdev2", 00:15:52.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.020 "is_configured": false, 00:15:52.020 "data_offset": 0, 00:15:52.020 "data_size": 0 00:15:52.020 }, 00:15:52.020 { 00:15:52.020 "name": "BaseBdev3", 00:15:52.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.020 "is_configured": false, 00:15:52.020 "data_offset": 0, 00:15:52.020 "data_size": 0 00:15:52.020 } 00:15:52.020 ] 00:15:52.020 }' 00:15:52.020 02:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.020 02:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.278 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:52.536 [2024-05-15 02:16:40.318572] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.536 BaseBdev2 00:15:52.536 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:15:52.536 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:52.536 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:52.536 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:52.536 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:52.536 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:52.536 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:52.794 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.053 [ 00:15:53.053 { 00:15:53.053 "name": "BaseBdev2", 00:15:53.053 "aliases": [ 00:15:53.053 "2a560b94-1261-11ef-99fd-bfc7c66e2865" 00:15:53.053 ], 00:15:53.053 "product_name": "Malloc disk", 00:15:53.053 "block_size": 512, 00:15:53.053 "num_blocks": 65536, 00:15:53.053 "uuid": "2a560b94-1261-11ef-99fd-bfc7c66e2865", 00:15:53.053 "assigned_rate_limits": { 00:15:53.053 "rw_ios_per_sec": 0, 00:15:53.053 "rw_mbytes_per_sec": 0, 00:15:53.053 "r_mbytes_per_sec": 0, 00:15:53.053 "w_mbytes_per_sec": 0 00:15:53.053 }, 00:15:53.053 "claimed": true, 00:15:53.053 "claim_type": "exclusive_write", 00:15:53.053 "zoned": false, 00:15:53.053 "supported_io_types": { 00:15:53.053 "read": true, 00:15:53.053 "write": true, 00:15:53.053 "unmap": true, 00:15:53.053 "write_zeroes": true, 00:15:53.053 "flush": true, 00:15:53.053 "reset": true, 00:15:53.053 "compare": false, 00:15:53.053 "compare_and_write": false, 00:15:53.053 "abort": true, 00:15:53.053 "nvme_admin": false, 00:15:53.053 "nvme_io": false 00:15:53.053 }, 00:15:53.053 "memory_domains": [ 00:15:53.053 { 00:15:53.053 "dma_device_id": "system", 00:15:53.053 "dma_device_type": 1 00:15:53.053 }, 00:15:53.053 { 00:15:53.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.053 "dma_device_type": 2 00:15:53.053 } 00:15:53.053 ], 00:15:53.053 "driver_specific": {} 00:15:53.053 } 00:15:53.053 ] 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.053 02:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.621 02:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.621 "name": "Existed_Raid", 00:15:53.621 "uuid": "29de6abd-1261-11ef-99fd-bfc7c66e2865", 00:15:53.621 "strip_size_kb": 64, 
00:15:53.621 "state": "configuring", 00:15:53.621 "raid_level": "concat", 00:15:53.621 "superblock": true, 00:15:53.621 "num_base_bdevs": 3, 00:15:53.621 "num_base_bdevs_discovered": 2, 00:15:53.621 "num_base_bdevs_operational": 3, 00:15:53.621 "base_bdevs_list": [ 00:15:53.621 { 00:15:53.621 "name": "BaseBdev1", 00:15:53.621 "uuid": "28d279a4-1261-11ef-99fd-bfc7c66e2865", 00:15:53.621 "is_configured": true, 00:15:53.621 "data_offset": 2048, 00:15:53.621 "data_size": 63488 00:15:53.621 }, 00:15:53.621 { 00:15:53.621 "name": "BaseBdev2", 00:15:53.621 "uuid": "2a560b94-1261-11ef-99fd-bfc7c66e2865", 00:15:53.621 "is_configured": true, 00:15:53.621 "data_offset": 2048, 00:15:53.621 "data_size": 63488 00:15:53.621 }, 00:15:53.621 { 00:15:53.621 "name": "BaseBdev3", 00:15:53.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.621 "is_configured": false, 00:15:53.621 "data_offset": 0, 00:15:53.621 "data_size": 0 00:15:53.621 } 00:15:53.621 ] 00:15:53.621 }' 00:15:53.621 02:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.621 02:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.878 02:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:54.137 [2024-05-15 02:16:41.942575] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.137 [2024-05-15 02:16:41.942672] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7a8a00 00:15:54.137 [2024-05-15 02:16:41.942692] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:54.137 [2024-05-15 02:16:41.942712] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c80bec0 00:15:54.137 [2024-05-15 02:16:41.942755] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7a8a00 00:15:54.137 [2024-05-15 02:16:41.942759] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c7a8a00 00:15:54.137 [2024-05-15 02:16:41.942778] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.137 BaseBdev3 00:15:54.137 02:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:15:54.137 02:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:54.137 02:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:54.137 02:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:54.137 02:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:54.137 02:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:54.137 02:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:54.395 02:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:54.654 [ 00:15:54.654 { 00:15:54.654 "name": "BaseBdev3", 00:15:54.654 "aliases": [ 00:15:54.654 "2b4dd95e-1261-11ef-99fd-bfc7c66e2865" 00:15:54.654 ], 
00:15:54.654 "product_name": "Malloc disk", 00:15:54.654 "block_size": 512, 00:15:54.654 "num_blocks": 65536, 00:15:54.654 "uuid": "2b4dd95e-1261-11ef-99fd-bfc7c66e2865", 00:15:54.654 "assigned_rate_limits": { 00:15:54.654 "rw_ios_per_sec": 0, 00:15:54.654 "rw_mbytes_per_sec": 0, 00:15:54.654 "r_mbytes_per_sec": 0, 00:15:54.654 "w_mbytes_per_sec": 0 00:15:54.654 }, 00:15:54.654 "claimed": true, 00:15:54.654 "claim_type": "exclusive_write", 00:15:54.654 "zoned": false, 00:15:54.654 "supported_io_types": { 00:15:54.654 "read": true, 00:15:54.654 "write": true, 00:15:54.654 "unmap": true, 00:15:54.654 "write_zeroes": true, 00:15:54.654 "flush": true, 00:15:54.654 "reset": true, 00:15:54.654 "compare": false, 00:15:54.654 "compare_and_write": false, 00:15:54.654 "abort": true, 00:15:54.654 "nvme_admin": false, 00:15:54.654 "nvme_io": false 00:15:54.654 }, 00:15:54.654 "memory_domains": [ 00:15:54.654 { 00:15:54.654 "dma_device_id": "system", 00:15:54.654 "dma_device_type": 1 00:15:54.654 }, 00:15:54.654 { 00:15:54.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.654 "dma_device_type": 2 00:15:54.654 } 00:15:54.654 ], 00:15:54.654 "driver_specific": {} 00:15:54.654 } 00:15:54.654 ] 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.654 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.912 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.912 "name": "Existed_Raid", 00:15:54.912 "uuid": "29de6abd-1261-11ef-99fd-bfc7c66e2865", 00:15:54.912 "strip_size_kb": 64, 00:15:54.912 "state": "online", 00:15:54.912 "raid_level": "concat", 00:15:54.912 "superblock": true, 00:15:54.912 "num_base_bdevs": 3, 00:15:54.912 "num_base_bdevs_discovered": 3, 00:15:54.912 "num_base_bdevs_operational": 3, 00:15:54.912 "base_bdevs_list": [ 00:15:54.912 { 
00:15:54.912 "name": "BaseBdev1", 00:15:54.912 "uuid": "28d279a4-1261-11ef-99fd-bfc7c66e2865", 00:15:54.912 "is_configured": true, 00:15:54.912 "data_offset": 2048, 00:15:54.912 "data_size": 63488 00:15:54.912 }, 00:15:54.912 { 00:15:54.912 "name": "BaseBdev2", 00:15:54.912 "uuid": "2a560b94-1261-11ef-99fd-bfc7c66e2865", 00:15:54.912 "is_configured": true, 00:15:54.912 "data_offset": 2048, 00:15:54.912 "data_size": 63488 00:15:54.912 }, 00:15:54.912 { 00:15:54.912 "name": "BaseBdev3", 00:15:54.912 "uuid": "2b4dd95e-1261-11ef-99fd-bfc7c66e2865", 00:15:54.912 "is_configured": true, 00:15:54.912 "data_offset": 2048, 00:15:54.912 "data_size": 63488 00:15:54.912 } 00:15:54.912 ] 00:15:54.912 }' 00:15:54.912 02:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.912 02:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:55.170 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:55.428 [2024-05-15 02:16:43.334495] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.428 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:55.428 "name": "Existed_Raid", 00:15:55.428 "aliases": [ 00:15:55.428 "29de6abd-1261-11ef-99fd-bfc7c66e2865" 00:15:55.428 ], 00:15:55.428 "product_name": "Raid Volume", 00:15:55.428 "block_size": 512, 00:15:55.428 "num_blocks": 190464, 00:15:55.428 "uuid": "29de6abd-1261-11ef-99fd-bfc7c66e2865", 00:15:55.428 "assigned_rate_limits": { 00:15:55.428 "rw_ios_per_sec": 0, 00:15:55.428 "rw_mbytes_per_sec": 0, 00:15:55.428 "r_mbytes_per_sec": 0, 00:15:55.428 "w_mbytes_per_sec": 0 00:15:55.428 }, 00:15:55.428 "claimed": false, 00:15:55.428 "zoned": false, 00:15:55.428 "supported_io_types": { 00:15:55.428 "read": true, 00:15:55.428 "write": true, 00:15:55.428 "unmap": true, 00:15:55.428 "write_zeroes": true, 00:15:55.428 "flush": true, 00:15:55.428 "reset": true, 00:15:55.428 "compare": false, 00:15:55.428 "compare_and_write": false, 00:15:55.428 "abort": false, 00:15:55.428 "nvme_admin": false, 00:15:55.428 "nvme_io": false 00:15:55.428 }, 00:15:55.428 "memory_domains": [ 00:15:55.428 { 00:15:55.428 "dma_device_id": "system", 00:15:55.428 "dma_device_type": 1 00:15:55.428 }, 00:15:55.428 { 00:15:55.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.428 "dma_device_type": 2 00:15:55.428 }, 00:15:55.428 { 00:15:55.428 "dma_device_id": "system", 00:15:55.428 "dma_device_type": 1 00:15:55.428 }, 00:15:55.428 { 00:15:55.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.428 "dma_device_type": 2 00:15:55.428 
}, 00:15:55.428 { 00:15:55.428 "dma_device_id": "system", 00:15:55.428 "dma_device_type": 1 00:15:55.428 }, 00:15:55.428 { 00:15:55.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.428 "dma_device_type": 2 00:15:55.428 } 00:15:55.428 ], 00:15:55.428 "driver_specific": { 00:15:55.428 "raid": { 00:15:55.428 "uuid": "29de6abd-1261-11ef-99fd-bfc7c66e2865", 00:15:55.428 "strip_size_kb": 64, 00:15:55.428 "state": "online", 00:15:55.428 "raid_level": "concat", 00:15:55.428 "superblock": true, 00:15:55.428 "num_base_bdevs": 3, 00:15:55.428 "num_base_bdevs_discovered": 3, 00:15:55.428 "num_base_bdevs_operational": 3, 00:15:55.428 "base_bdevs_list": [ 00:15:55.428 { 00:15:55.428 "name": "BaseBdev1", 00:15:55.428 "uuid": "28d279a4-1261-11ef-99fd-bfc7c66e2865", 00:15:55.428 "is_configured": true, 00:15:55.428 "data_offset": 2048, 00:15:55.428 "data_size": 63488 00:15:55.428 }, 00:15:55.428 { 00:15:55.428 "name": "BaseBdev2", 00:15:55.428 "uuid": "2a560b94-1261-11ef-99fd-bfc7c66e2865", 00:15:55.428 "is_configured": true, 00:15:55.428 "data_offset": 2048, 00:15:55.428 "data_size": 63488 00:15:55.428 }, 00:15:55.428 { 00:15:55.428 "name": "BaseBdev3", 00:15:55.428 "uuid": "2b4dd95e-1261-11ef-99fd-bfc7c66e2865", 00:15:55.428 "is_configured": true, 00:15:55.428 "data_offset": 2048, 00:15:55.428 "data_size": 63488 00:15:55.428 } 00:15:55.428 ] 00:15:55.428 } 00:15:55.428 } 00:15:55.428 }' 00:15:55.428 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.428 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:15:55.428 BaseBdev2 00:15:55.428 BaseBdev3' 00:15:55.428 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:55.428 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:55.428 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:55.685 "name": "BaseBdev1", 00:15:55.685 "aliases": [ 00:15:55.685 "28d279a4-1261-11ef-99fd-bfc7c66e2865" 00:15:55.685 ], 00:15:55.685 "product_name": "Malloc disk", 00:15:55.685 "block_size": 512, 00:15:55.685 "num_blocks": 65536, 00:15:55.685 "uuid": "28d279a4-1261-11ef-99fd-bfc7c66e2865", 00:15:55.685 "assigned_rate_limits": { 00:15:55.685 "rw_ios_per_sec": 0, 00:15:55.685 "rw_mbytes_per_sec": 0, 00:15:55.685 "r_mbytes_per_sec": 0, 00:15:55.685 "w_mbytes_per_sec": 0 00:15:55.685 }, 00:15:55.685 "claimed": true, 00:15:55.685 "claim_type": "exclusive_write", 00:15:55.685 "zoned": false, 00:15:55.685 "supported_io_types": { 00:15:55.685 "read": true, 00:15:55.685 "write": true, 00:15:55.685 "unmap": true, 00:15:55.685 "write_zeroes": true, 00:15:55.685 "flush": true, 00:15:55.685 "reset": true, 00:15:55.685 "compare": false, 00:15:55.685 "compare_and_write": false, 00:15:55.685 "abort": true, 00:15:55.685 "nvme_admin": false, 00:15:55.685 "nvme_io": false 00:15:55.685 }, 00:15:55.685 "memory_domains": [ 00:15:55.685 { 00:15:55.685 "dma_device_id": "system", 00:15:55.685 "dma_device_type": 1 00:15:55.685 }, 00:15:55.685 { 00:15:55.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.685 "dma_device_type": 2 00:15:55.685 } 00:15:55.685 ], 00:15:55.685 
"driver_specific": {} 00:15:55.685 }' 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:55.685 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:55.686 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:55.686 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:55.686 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:55.686 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:55.943 "name": "BaseBdev2", 00:15:55.943 "aliases": [ 00:15:55.943 "2a560b94-1261-11ef-99fd-bfc7c66e2865" 00:15:55.943 ], 00:15:55.943 "product_name": "Malloc disk", 00:15:55.943 "block_size": 512, 00:15:55.943 "num_blocks": 65536, 00:15:55.943 "uuid": "2a560b94-1261-11ef-99fd-bfc7c66e2865", 00:15:55.943 "assigned_rate_limits": { 00:15:55.943 "rw_ios_per_sec": 0, 00:15:55.943 "rw_mbytes_per_sec": 0, 00:15:55.943 "r_mbytes_per_sec": 0, 00:15:55.943 "w_mbytes_per_sec": 0 00:15:55.943 }, 00:15:55.943 "claimed": true, 00:15:55.943 "claim_type": "exclusive_write", 00:15:55.943 "zoned": false, 00:15:55.943 "supported_io_types": { 00:15:55.943 "read": true, 00:15:55.943 "write": true, 00:15:55.943 "unmap": true, 00:15:55.943 "write_zeroes": true, 00:15:55.943 "flush": true, 00:15:55.943 "reset": true, 00:15:55.943 "compare": false, 00:15:55.943 "compare_and_write": false, 00:15:55.943 "abort": true, 00:15:55.943 "nvme_admin": false, 00:15:55.943 "nvme_io": false 00:15:55.943 }, 00:15:55.943 "memory_domains": [ 00:15:55.943 { 00:15:55.943 "dma_device_id": "system", 00:15:55.943 "dma_device_type": 1 00:15:55.943 }, 00:15:55.943 { 00:15:55.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.943 "dma_device_type": 2 00:15:55.943 } 00:15:55.943 ], 00:15:55.943 "driver_specific": {} 00:15:55.943 }' 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_size 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:55.943 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:56.201 02:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:56.460 "name": "BaseBdev3", 00:15:56.460 "aliases": [ 00:15:56.460 "2b4dd95e-1261-11ef-99fd-bfc7c66e2865" 00:15:56.460 ], 00:15:56.460 "product_name": "Malloc disk", 00:15:56.460 "block_size": 512, 00:15:56.460 "num_blocks": 65536, 00:15:56.460 "uuid": "2b4dd95e-1261-11ef-99fd-bfc7c66e2865", 00:15:56.460 "assigned_rate_limits": { 00:15:56.460 "rw_ios_per_sec": 0, 00:15:56.460 "rw_mbytes_per_sec": 0, 00:15:56.460 "r_mbytes_per_sec": 0, 00:15:56.460 "w_mbytes_per_sec": 0 00:15:56.460 }, 00:15:56.460 "claimed": true, 00:15:56.460 "claim_type": "exclusive_write", 00:15:56.460 "zoned": false, 00:15:56.460 "supported_io_types": { 00:15:56.460 "read": true, 00:15:56.460 "write": true, 00:15:56.460 "unmap": true, 00:15:56.460 "write_zeroes": true, 00:15:56.460 "flush": true, 00:15:56.460 "reset": true, 00:15:56.460 "compare": false, 00:15:56.460 "compare_and_write": false, 00:15:56.460 "abort": true, 00:15:56.460 "nvme_admin": false, 00:15:56.460 "nvme_io": false 00:15:56.460 }, 00:15:56.460 "memory_domains": [ 00:15:56.460 { 00:15:56.460 "dma_device_id": "system", 00:15:56.460 "dma_device_type": 1 00:15:56.460 }, 00:15:56.460 { 00:15:56.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.460 "dma_device_type": 2 00:15:56.460 } 00:15:56.460 ], 00:15:56.460 "driver_specific": {} 00:15:56.460 }' 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:56.460 
02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:56.460 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:56.719 [2024-05-15 02:16:44.654521] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.719 [2024-05-15 02:16:44.654580] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.719 [2024-05-15 02:16:44.654632] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.719 02:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.285 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.286 "name": "Existed_Raid", 00:15:57.286 "uuid": "29de6abd-1261-11ef-99fd-bfc7c66e2865", 00:15:57.286 "strip_size_kb": 64, 00:15:57.286 "state": "offline", 00:15:57.286 "raid_level": "concat", 00:15:57.286 "superblock": true, 00:15:57.286 "num_base_bdevs": 3, 00:15:57.286 "num_base_bdevs_discovered": 2, 00:15:57.286 "num_base_bdevs_operational": 2, 00:15:57.286 "base_bdevs_list": [ 00:15:57.286 { 00:15:57.286 "name": null, 
00:15:57.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.286 "is_configured": false, 00:15:57.286 "data_offset": 2048, 00:15:57.286 "data_size": 63488 00:15:57.286 }, 00:15:57.286 { 00:15:57.286 "name": "BaseBdev2", 00:15:57.286 "uuid": "2a560b94-1261-11ef-99fd-bfc7c66e2865", 00:15:57.286 "is_configured": true, 00:15:57.286 "data_offset": 2048, 00:15:57.286 "data_size": 63488 00:15:57.286 }, 00:15:57.286 { 00:15:57.286 "name": "BaseBdev3", 00:15:57.286 "uuid": "2b4dd95e-1261-11ef-99fd-bfc7c66e2865", 00:15:57.286 "is_configured": true, 00:15:57.286 "data_offset": 2048, 00:15:57.286 "data_size": 63488 00:15:57.286 } 00:15:57.286 ] 00:15:57.286 }' 00:15:57.286 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.286 02:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.543 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:57.543 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:57.543 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.543 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:57.801 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:57.801 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:57.801 02:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:58.059 [2024-05-15 02:16:46.052675] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:58.318 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:58.318 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.318 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.318 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:58.318 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:58.318 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.318 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:58.885 [2024-05-15 02:16:46.597548] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:58.885 [2024-05-15 02:16:46.597594] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7a8a00 name Existed_Raid, state offline 00:15:58.885 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:58.885 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.885 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:58.885 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:15:59.145 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:15:59.145 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:15:59.145 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:15:59.145 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:15:59.145 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:59.145 02:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:59.403 BaseBdev2 00:15:59.403 02:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:15:59.403 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:59.403 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:59.403 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:59.403 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:59.403 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:59.403 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.661 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.920 [ 00:15:59.920 { 00:15:59.920 "name": "BaseBdev2", 00:15:59.920 "aliases": [ 00:15:59.920 "2e7c0f17-1261-11ef-99fd-bfc7c66e2865" 00:15:59.920 ], 00:15:59.920 "product_name": "Malloc disk", 00:15:59.920 "block_size": 512, 00:15:59.920 "num_blocks": 65536, 00:15:59.920 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:15:59.920 "assigned_rate_limits": { 00:15:59.920 "rw_ios_per_sec": 0, 00:15:59.920 "rw_mbytes_per_sec": 0, 00:15:59.920 "r_mbytes_per_sec": 0, 00:15:59.920 "w_mbytes_per_sec": 0 00:15:59.920 }, 00:15:59.920 "claimed": false, 00:15:59.920 "zoned": false, 00:15:59.920 "supported_io_types": { 00:15:59.920 "read": true, 00:15:59.920 "write": true, 00:15:59.920 "unmap": true, 00:15:59.920 "write_zeroes": true, 00:15:59.920 "flush": true, 00:15:59.920 "reset": true, 00:15:59.920 "compare": false, 00:15:59.920 "compare_and_write": false, 00:15:59.920 "abort": true, 00:15:59.920 "nvme_admin": false, 00:15:59.920 "nvme_io": false 00:15:59.920 }, 00:15:59.920 "memory_domains": [ 00:15:59.920 { 00:15:59.920 "dma_device_id": "system", 00:15:59.920 "dma_device_type": 1 00:15:59.920 }, 00:15:59.920 { 00:15:59.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.920 "dma_device_type": 2 00:15:59.920 } 00:15:59.920 ], 00:15:59.920 "driver_specific": {} 00:15:59.920 } 00:15:59.920 ] 00:15:59.920 02:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:59.920 02:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:59.920 02:16:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:59.920 02:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:00.179 BaseBdev3 00:16:00.179 02:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:16:00.179 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:00.179 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:00.179 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:00.179 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:00.179 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:00.179 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.438 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:01.005 [ 00:16:01.005 { 00:16:01.005 "name": "BaseBdev3", 00:16:01.005 "aliases": [ 00:16:01.005 "2efcd7bf-1261-11ef-99fd-bfc7c66e2865" 00:16:01.005 ], 00:16:01.005 "product_name": "Malloc disk", 00:16:01.005 "block_size": 512, 00:16:01.005 "num_blocks": 65536, 00:16:01.005 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:01.005 "assigned_rate_limits": { 00:16:01.005 "rw_ios_per_sec": 0, 00:16:01.005 "rw_mbytes_per_sec": 0, 00:16:01.005 "r_mbytes_per_sec": 0, 00:16:01.005 "w_mbytes_per_sec": 0 00:16:01.005 }, 00:16:01.005 "claimed": false, 00:16:01.005 "zoned": false, 00:16:01.005 "supported_io_types": { 00:16:01.005 "read": true, 00:16:01.005 "write": true, 00:16:01.005 "unmap": true, 00:16:01.005 "write_zeroes": true, 00:16:01.005 "flush": true, 00:16:01.005 "reset": true, 00:16:01.005 "compare": false, 00:16:01.005 "compare_and_write": false, 00:16:01.005 "abort": true, 00:16:01.005 "nvme_admin": false, 00:16:01.005 "nvme_io": false 00:16:01.005 }, 00:16:01.005 "memory_domains": [ 00:16:01.005 { 00:16:01.005 "dma_device_id": "system", 00:16:01.005 "dma_device_type": 1 00:16:01.005 }, 00:16:01.005 { 00:16:01.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.005 "dma_device_type": 2 00:16:01.005 } 00:16:01.005 ], 00:16:01.005 "driver_specific": {} 00:16:01.005 } 00:16:01.005 ] 00:16:01.005 02:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:01.006 02:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:01.006 02:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:01.006 02:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:01.264 [2024-05-15 02:16:49.038516] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.264 [2024-05-15 02:16:49.038604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.264 [2024-05-15 
02:16:49.038633] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.264 [2024-05-15 02:16:49.039450] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.264 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.523 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.523 "name": "Existed_Raid", 00:16:01.523 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:01.523 "strip_size_kb": 64, 00:16:01.523 "state": "configuring", 00:16:01.523 "raid_level": "concat", 00:16:01.523 "superblock": true, 00:16:01.523 "num_base_bdevs": 3, 00:16:01.523 "num_base_bdevs_discovered": 2, 00:16:01.523 "num_base_bdevs_operational": 3, 00:16:01.523 "base_bdevs_list": [ 00:16:01.523 { 00:16:01.523 "name": "BaseBdev1", 00:16:01.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.523 "is_configured": false, 00:16:01.523 "data_offset": 0, 00:16:01.523 "data_size": 0 00:16:01.523 }, 00:16:01.523 { 00:16:01.523 "name": "BaseBdev2", 00:16:01.523 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:01.523 "is_configured": true, 00:16:01.523 "data_offset": 2048, 00:16:01.523 "data_size": 63488 00:16:01.523 }, 00:16:01.523 { 00:16:01.523 "name": "BaseBdev3", 00:16:01.523 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:01.523 "is_configured": true, 00:16:01.523 "data_offset": 2048, 00:16:01.523 "data_size": 63488 00:16:01.523 } 00:16:01.523 ] 00:16:01.523 }' 00:16:01.523 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.523 02:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.781 02:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:02.039 [2024-05-15 02:16:50.006535] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.039 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.606 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.606 "name": "Existed_Raid", 00:16:02.606 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:02.606 "strip_size_kb": 64, 00:16:02.606 "state": "configuring", 00:16:02.606 "raid_level": "concat", 00:16:02.606 "superblock": true, 00:16:02.606 "num_base_bdevs": 3, 00:16:02.606 "num_base_bdevs_discovered": 1, 00:16:02.606 "num_base_bdevs_operational": 3, 00:16:02.606 "base_bdevs_list": [ 00:16:02.606 { 00:16:02.606 "name": "BaseBdev1", 00:16:02.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.606 "is_configured": false, 00:16:02.606 "data_offset": 0, 00:16:02.606 "data_size": 0 00:16:02.606 }, 00:16:02.606 { 00:16:02.606 "name": null, 00:16:02.606 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:02.606 "is_configured": false, 00:16:02.606 "data_offset": 2048, 00:16:02.606 "data_size": 63488 00:16:02.606 }, 00:16:02.606 { 00:16:02.606 "name": "BaseBdev3", 00:16:02.606 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:02.606 "is_configured": true, 00:16:02.606 "data_offset": 2048, 00:16:02.606 "data_size": 63488 00:16:02.606 } 00:16:02.606 ] 00:16:02.606 }' 00:16:02.606 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.606 02:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.864 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.864 02:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:03.122 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:16:03.122 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:03.379 [2024-05-15 02:16:51.306653] 
bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.379 BaseBdev1 00:16:03.379 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:16:03.379 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:03.379 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:03.379 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:03.379 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:03.379 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:03.379 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.665 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.923 [ 00:16:03.923 { 00:16:03.923 "name": "BaseBdev1", 00:16:03.923 "aliases": [ 00:16:03.923 "30e2b146-1261-11ef-99fd-bfc7c66e2865" 00:16:03.923 ], 00:16:03.923 "product_name": "Malloc disk", 00:16:03.923 "block_size": 512, 00:16:03.923 "num_blocks": 65536, 00:16:03.923 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:03.923 "assigned_rate_limits": { 00:16:03.923 "rw_ios_per_sec": 0, 00:16:03.923 "rw_mbytes_per_sec": 0, 00:16:03.923 "r_mbytes_per_sec": 0, 00:16:03.923 "w_mbytes_per_sec": 0 00:16:03.923 }, 00:16:03.923 "claimed": true, 00:16:03.923 "claim_type": "exclusive_write", 00:16:03.923 "zoned": false, 00:16:03.923 "supported_io_types": { 00:16:03.923 "read": true, 00:16:03.923 "write": true, 00:16:03.923 "unmap": true, 00:16:03.923 "write_zeroes": true, 00:16:03.923 "flush": true, 00:16:03.923 "reset": true, 00:16:03.923 "compare": false, 00:16:03.923 "compare_and_write": false, 00:16:03.923 "abort": true, 00:16:03.923 "nvme_admin": false, 00:16:03.923 "nvme_io": false 00:16:03.924 }, 00:16:03.924 "memory_domains": [ 00:16:03.924 { 00:16:03.924 "dma_device_id": "system", 00:16:03.924 "dma_device_type": 1 00:16:03.924 }, 00:16:03.924 { 00:16:03.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.924 "dma_device_type": 2 00:16:03.924 } 00:16:03.924 ], 00:16:03.924 "driver_specific": {} 00:16:03.924 } 00:16:03.924 ] 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.183 02:16:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.183 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.184 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.184 02:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.184 02:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.184 "name": "Existed_Raid", 00:16:04.184 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:04.184 "strip_size_kb": 64, 00:16:04.184 "state": "configuring", 00:16:04.184 "raid_level": "concat", 00:16:04.184 "superblock": true, 00:16:04.184 "num_base_bdevs": 3, 00:16:04.184 "num_base_bdevs_discovered": 2, 00:16:04.184 "num_base_bdevs_operational": 3, 00:16:04.184 "base_bdevs_list": [ 00:16:04.184 { 00:16:04.184 "name": "BaseBdev1", 00:16:04.184 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:04.184 "is_configured": true, 00:16:04.184 "data_offset": 2048, 00:16:04.184 "data_size": 63488 00:16:04.184 }, 00:16:04.184 { 00:16:04.184 "name": null, 00:16:04.184 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:04.184 "is_configured": false, 00:16:04.184 "data_offset": 2048, 00:16:04.184 "data_size": 63488 00:16:04.184 }, 00:16:04.184 { 00:16:04.184 "name": "BaseBdev3", 00:16:04.184 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:04.184 "is_configured": true, 00:16:04.184 "data_offset": 2048, 00:16:04.184 "data_size": 63488 00:16:04.184 } 00:16:04.184 ] 00:16:04.184 }' 00:16:04.184 02:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.184 02:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.759 02:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.759 02:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:05.017 02:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:05.017 02:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:05.017 [2024-05-15 02:16:53.010547] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:05.017 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:05.017 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.017 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:05.017 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:05.017 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:05.018 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:16:05.018 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.018 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.018 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.018 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.018 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.018 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.586 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.586 "name": "Existed_Raid", 00:16:05.586 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:05.586 "strip_size_kb": 64, 00:16:05.586 "state": "configuring", 00:16:05.586 "raid_level": "concat", 00:16:05.586 "superblock": true, 00:16:05.586 "num_base_bdevs": 3, 00:16:05.586 "num_base_bdevs_discovered": 1, 00:16:05.586 "num_base_bdevs_operational": 3, 00:16:05.586 "base_bdevs_list": [ 00:16:05.586 { 00:16:05.586 "name": "BaseBdev1", 00:16:05.586 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:05.586 "is_configured": true, 00:16:05.586 "data_offset": 2048, 00:16:05.586 "data_size": 63488 00:16:05.586 }, 00:16:05.586 { 00:16:05.586 "name": null, 00:16:05.586 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:05.586 "is_configured": false, 00:16:05.586 "data_offset": 2048, 00:16:05.586 "data_size": 63488 00:16:05.586 }, 00:16:05.586 { 00:16:05.586 "name": null, 00:16:05.586 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:05.586 "is_configured": false, 00:16:05.586 "data_offset": 2048, 00:16:05.586 "data_size": 63488 00:16:05.586 } 00:16:05.586 ] 00:16:05.586 }' 00:16:05.586 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.586 02:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.854 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.854 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:05.854 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:16:05.854 02:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:06.116 [2024-05-15 02:16:54.074574] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.116 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.375 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.375 "name": "Existed_Raid", 00:16:06.375 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:06.375 "strip_size_kb": 64, 00:16:06.375 "state": "configuring", 00:16:06.375 "raid_level": "concat", 00:16:06.375 "superblock": true, 00:16:06.375 "num_base_bdevs": 3, 00:16:06.375 "num_base_bdevs_discovered": 2, 00:16:06.375 "num_base_bdevs_operational": 3, 00:16:06.375 "base_bdevs_list": [ 00:16:06.375 { 00:16:06.375 "name": "BaseBdev1", 00:16:06.375 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:06.375 "is_configured": true, 00:16:06.375 "data_offset": 2048, 00:16:06.375 "data_size": 63488 00:16:06.375 }, 00:16:06.375 { 00:16:06.375 "name": null, 00:16:06.375 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:06.375 "is_configured": false, 00:16:06.375 "data_offset": 2048, 00:16:06.375 "data_size": 63488 00:16:06.375 }, 00:16:06.375 { 00:16:06.375 "name": "BaseBdev3", 00:16:06.375 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:06.375 "is_configured": true, 00:16:06.375 "data_offset": 2048, 00:16:06.375 "data_size": 63488 00:16:06.375 } 00:16:06.375 ] 00:16:06.375 }' 00:16:06.375 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.375 02:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.968 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.968 02:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:07.227 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:16:07.227 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:07.486 [2024-05-15 02:16:55.390605] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:07.486 02:16:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.486 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.743 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.743 "name": "Existed_Raid", 00:16:07.744 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:07.744 "strip_size_kb": 64, 00:16:07.744 "state": "configuring", 00:16:07.744 "raid_level": "concat", 00:16:07.744 "superblock": true, 00:16:07.744 "num_base_bdevs": 3, 00:16:07.744 "num_base_bdevs_discovered": 1, 00:16:07.744 "num_base_bdevs_operational": 3, 00:16:07.744 "base_bdevs_list": [ 00:16:07.744 { 00:16:07.744 "name": null, 00:16:07.744 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:07.744 "is_configured": false, 00:16:07.744 "data_offset": 2048, 00:16:07.744 "data_size": 63488 00:16:07.744 }, 00:16:07.744 { 00:16:07.744 "name": null, 00:16:07.744 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:07.744 "is_configured": false, 00:16:07.744 "data_offset": 2048, 00:16:07.744 "data_size": 63488 00:16:07.744 }, 00:16:07.744 { 00:16:07.744 "name": "BaseBdev3", 00:16:07.744 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:07.744 "is_configured": true, 00:16:07.744 "data_offset": 2048, 00:16:07.744 "data_size": 63488 00:16:07.744 } 00:16:07.744 ] 00:16:07.744 }' 00:16:07.744 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.744 02:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.001 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.001 02:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:08.260 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:16:08.260 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:08.518 [2024-05-15 02:16:56.451459] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.518 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.084 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.084 "name": "Existed_Raid", 00:16:09.084 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:09.084 "strip_size_kb": 64, 00:16:09.084 "state": "configuring", 00:16:09.084 "raid_level": "concat", 00:16:09.084 "superblock": true, 00:16:09.084 "num_base_bdevs": 3, 00:16:09.084 "num_base_bdevs_discovered": 2, 00:16:09.084 "num_base_bdevs_operational": 3, 00:16:09.084 "base_bdevs_list": [ 00:16:09.084 { 00:16:09.084 "name": null, 00:16:09.084 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:09.084 "is_configured": false, 00:16:09.084 "data_offset": 2048, 00:16:09.084 "data_size": 63488 00:16:09.084 }, 00:16:09.084 { 00:16:09.084 "name": "BaseBdev2", 00:16:09.084 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:09.084 "is_configured": true, 00:16:09.084 "data_offset": 2048, 00:16:09.084 "data_size": 63488 00:16:09.084 }, 00:16:09.084 { 00:16:09.084 "name": "BaseBdev3", 00:16:09.084 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:09.084 "is_configured": true, 00:16:09.084 "data_offset": 2048, 00:16:09.084 "data_size": 63488 00:16:09.084 } 00:16:09.084 ] 00:16:09.084 }' 00:16:09.084 02:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.084 02:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.341 02:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.341 02:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.599 02:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:16:09.599 02:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.599 02:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:09.856 02:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 30e2b146-1261-11ef-99fd-bfc7c66e2865 00:16:10.115 [2024-05-15 02:16:58.051645] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:10.115 [2024-05-15 02:16:58.051722] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7a8a00 00:16:10.115 [2024-05-15 02:16:58.051729] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:10.115 [2024-05-15 02:16:58.051754] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c80be20 00:16:10.115 [2024-05-15 02:16:58.051801] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7a8a00 00:16:10.115 [2024-05-15 02:16:58.051806] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c7a8a00 00:16:10.115 [2024-05-15 02:16:58.051829] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.115 NewBaseBdev 00:16:10.115 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:16:10.115 02:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:10.115 02:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:10.115 02:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:10.115 02:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:10.115 02:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:10.115 02:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.679 02:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:10.936 [ 00:16:10.936 { 00:16:10.936 "name": "NewBaseBdev", 00:16:10.936 "aliases": [ 00:16:10.936 "30e2b146-1261-11ef-99fd-bfc7c66e2865" 00:16:10.936 ], 00:16:10.936 "product_name": "Malloc disk", 00:16:10.936 "block_size": 512, 00:16:10.936 "num_blocks": 65536, 00:16:10.936 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:10.936 "assigned_rate_limits": { 00:16:10.936 "rw_ios_per_sec": 0, 00:16:10.936 "rw_mbytes_per_sec": 0, 00:16:10.936 "r_mbytes_per_sec": 0, 00:16:10.936 "w_mbytes_per_sec": 0 00:16:10.936 }, 00:16:10.936 "claimed": true, 00:16:10.936 "claim_type": "exclusive_write", 00:16:10.936 "zoned": false, 00:16:10.936 "supported_io_types": { 00:16:10.936 "read": true, 00:16:10.936 "write": true, 00:16:10.936 "unmap": true, 00:16:10.936 "write_zeroes": true, 00:16:10.936 "flush": true, 00:16:10.936 "reset": true, 00:16:10.936 "compare": false, 00:16:10.936 "compare_and_write": false, 00:16:10.936 "abort": true, 00:16:10.936 "nvme_admin": false, 00:16:10.936 "nvme_io": false 00:16:10.936 }, 00:16:10.936 "memory_domains": [ 00:16:10.936 { 00:16:10.936 "dma_device_id": "system", 00:16:10.936 "dma_device_type": 1 00:16:10.936 }, 00:16:10.936 { 00:16:10.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.936 "dma_device_type": 2 00:16:10.936 } 00:16:10.936 ], 00:16:10.936 "driver_specific": {} 00:16:10.936 } 00:16:10.936 ] 00:16:10.936 02:16:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.936 02:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.194 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.194 "name": "Existed_Raid", 00:16:11.194 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:11.194 "strip_size_kb": 64, 00:16:11.194 "state": "online", 00:16:11.194 "raid_level": "concat", 00:16:11.194 "superblock": true, 00:16:11.194 "num_base_bdevs": 3, 00:16:11.194 "num_base_bdevs_discovered": 3, 00:16:11.194 "num_base_bdevs_operational": 3, 00:16:11.194 "base_bdevs_list": [ 00:16:11.194 { 00:16:11.194 "name": "NewBaseBdev", 00:16:11.194 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:11.194 "is_configured": true, 00:16:11.194 "data_offset": 2048, 00:16:11.194 "data_size": 63488 00:16:11.194 }, 00:16:11.194 { 00:16:11.194 "name": "BaseBdev2", 00:16:11.194 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:11.194 "is_configured": true, 00:16:11.194 "data_offset": 2048, 00:16:11.194 "data_size": 63488 00:16:11.194 }, 00:16:11.194 { 00:16:11.194 "name": "BaseBdev3", 00:16:11.194 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:11.194 "is_configured": true, 00:16:11.194 "data_offset": 2048, 00:16:11.194 "data_size": 63488 00:16:11.194 } 00:16:11.194 ] 00:16:11.194 }' 00:16:11.194 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.194 02:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:11.452 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:11.709 [2024-05-15 02:16:59.583561] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.709 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:11.709 "name": "Existed_Raid", 00:16:11.709 "aliases": [ 00:16:11.709 "2f889e14-1261-11ef-99fd-bfc7c66e2865" 00:16:11.709 ], 00:16:11.709 "product_name": "Raid Volume", 00:16:11.709 "block_size": 512, 00:16:11.709 "num_blocks": 190464, 00:16:11.709 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:11.709 "assigned_rate_limits": { 00:16:11.709 "rw_ios_per_sec": 0, 00:16:11.709 "rw_mbytes_per_sec": 0, 00:16:11.709 "r_mbytes_per_sec": 0, 00:16:11.709 "w_mbytes_per_sec": 0 00:16:11.709 }, 00:16:11.709 "claimed": false, 00:16:11.709 "zoned": false, 00:16:11.709 "supported_io_types": { 00:16:11.709 "read": true, 00:16:11.709 "write": true, 00:16:11.709 "unmap": true, 00:16:11.709 "write_zeroes": true, 00:16:11.709 "flush": true, 00:16:11.709 "reset": true, 00:16:11.709 "compare": false, 00:16:11.709 "compare_and_write": false, 00:16:11.709 "abort": false, 00:16:11.709 "nvme_admin": false, 00:16:11.709 "nvme_io": false 00:16:11.709 }, 00:16:11.709 "memory_domains": [ 00:16:11.709 { 00:16:11.709 "dma_device_id": "system", 00:16:11.709 "dma_device_type": 1 00:16:11.709 }, 00:16:11.709 { 00:16:11.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.709 "dma_device_type": 2 00:16:11.709 }, 00:16:11.709 { 00:16:11.709 "dma_device_id": "system", 00:16:11.709 "dma_device_type": 1 00:16:11.710 }, 00:16:11.710 { 00:16:11.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.710 "dma_device_type": 2 00:16:11.710 }, 00:16:11.710 { 00:16:11.710 "dma_device_id": "system", 00:16:11.710 "dma_device_type": 1 00:16:11.710 }, 00:16:11.710 { 00:16:11.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.710 "dma_device_type": 2 00:16:11.710 } 00:16:11.710 ], 00:16:11.710 "driver_specific": { 00:16:11.710 "raid": { 00:16:11.710 "uuid": "2f889e14-1261-11ef-99fd-bfc7c66e2865", 00:16:11.710 "strip_size_kb": 64, 00:16:11.710 "state": "online", 00:16:11.710 "raid_level": "concat", 00:16:11.710 "superblock": true, 00:16:11.710 "num_base_bdevs": 3, 00:16:11.710 "num_base_bdevs_discovered": 3, 00:16:11.710 "num_base_bdevs_operational": 3, 00:16:11.710 "base_bdevs_list": [ 00:16:11.710 { 00:16:11.710 "name": "NewBaseBdev", 00:16:11.710 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:11.710 "is_configured": true, 00:16:11.710 "data_offset": 2048, 00:16:11.710 "data_size": 63488 00:16:11.710 }, 00:16:11.710 { 00:16:11.710 "name": "BaseBdev2", 00:16:11.710 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:11.710 "is_configured": true, 00:16:11.710 "data_offset": 2048, 00:16:11.710 "data_size": 63488 00:16:11.710 }, 00:16:11.710 { 00:16:11.710 "name": "BaseBdev3", 00:16:11.710 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:11.710 "is_configured": true, 00:16:11.710 "data_offset": 2048, 00:16:11.710 "data_size": 63488 00:16:11.710 } 00:16:11.710 ] 00:16:11.710 } 00:16:11.710 } 00:16:11.710 }' 00:16:11.710 02:16:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.710 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:16:11.710 BaseBdev2 00:16:11.710 BaseBdev3' 00:16:11.710 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:11.710 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:11.710 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:11.968 "name": "NewBaseBdev", 00:16:11.968 "aliases": [ 00:16:11.968 "30e2b146-1261-11ef-99fd-bfc7c66e2865" 00:16:11.968 ], 00:16:11.968 "product_name": "Malloc disk", 00:16:11.968 "block_size": 512, 00:16:11.968 "num_blocks": 65536, 00:16:11.968 "uuid": "30e2b146-1261-11ef-99fd-bfc7c66e2865", 00:16:11.968 "assigned_rate_limits": { 00:16:11.968 "rw_ios_per_sec": 0, 00:16:11.968 "rw_mbytes_per_sec": 0, 00:16:11.968 "r_mbytes_per_sec": 0, 00:16:11.968 "w_mbytes_per_sec": 0 00:16:11.968 }, 00:16:11.968 "claimed": true, 00:16:11.968 "claim_type": "exclusive_write", 00:16:11.968 "zoned": false, 00:16:11.968 "supported_io_types": { 00:16:11.968 "read": true, 00:16:11.968 "write": true, 00:16:11.968 "unmap": true, 00:16:11.968 "write_zeroes": true, 00:16:11.968 "flush": true, 00:16:11.968 "reset": true, 00:16:11.968 "compare": false, 00:16:11.968 "compare_and_write": false, 00:16:11.968 "abort": true, 00:16:11.968 "nvme_admin": false, 00:16:11.968 "nvme_io": false 00:16:11.968 }, 00:16:11.968 "memory_domains": [ 00:16:11.968 { 00:16:11.968 "dma_device_id": "system", 00:16:11.968 "dma_device_type": 1 00:16:11.968 }, 00:16:11.968 { 00:16:11.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.968 "dma_device_type": 2 00:16:11.968 } 00:16:11.968 ], 00:16:11.968 "driver_specific": {} 00:16:11.968 }' 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:11.968 02:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:12.535 "name": "BaseBdev2", 00:16:12.535 "aliases": [ 00:16:12.535 "2e7c0f17-1261-11ef-99fd-bfc7c66e2865" 00:16:12.535 ], 00:16:12.535 "product_name": "Malloc disk", 00:16:12.535 "block_size": 512, 00:16:12.535 "num_blocks": 65536, 00:16:12.535 "uuid": "2e7c0f17-1261-11ef-99fd-bfc7c66e2865", 00:16:12.535 "assigned_rate_limits": { 00:16:12.535 "rw_ios_per_sec": 0, 00:16:12.535 "rw_mbytes_per_sec": 0, 00:16:12.535 "r_mbytes_per_sec": 0, 00:16:12.535 "w_mbytes_per_sec": 0 00:16:12.535 }, 00:16:12.535 "claimed": true, 00:16:12.535 "claim_type": "exclusive_write", 00:16:12.535 "zoned": false, 00:16:12.535 "supported_io_types": { 00:16:12.535 "read": true, 00:16:12.535 "write": true, 00:16:12.535 "unmap": true, 00:16:12.535 "write_zeroes": true, 00:16:12.535 "flush": true, 00:16:12.535 "reset": true, 00:16:12.535 "compare": false, 00:16:12.535 "compare_and_write": false, 00:16:12.535 "abort": true, 00:16:12.535 "nvme_admin": false, 00:16:12.535 "nvme_io": false 00:16:12.535 }, 00:16:12.535 "memory_domains": [ 00:16:12.535 { 00:16:12.535 "dma_device_id": "system", 00:16:12.535 "dma_device_type": 1 00:16:12.535 }, 00:16:12.535 { 00:16:12.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.535 "dma_device_type": 2 00:16:12.535 } 00:16:12.535 ], 00:16:12.535 "driver_specific": {} 00:16:12.535 }' 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:12.535 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:12.793 "name": "BaseBdev3", 00:16:12.793 
"aliases": [ 00:16:12.793 "2efcd7bf-1261-11ef-99fd-bfc7c66e2865" 00:16:12.793 ], 00:16:12.793 "product_name": "Malloc disk", 00:16:12.793 "block_size": 512, 00:16:12.793 "num_blocks": 65536, 00:16:12.793 "uuid": "2efcd7bf-1261-11ef-99fd-bfc7c66e2865", 00:16:12.793 "assigned_rate_limits": { 00:16:12.793 "rw_ios_per_sec": 0, 00:16:12.793 "rw_mbytes_per_sec": 0, 00:16:12.793 "r_mbytes_per_sec": 0, 00:16:12.793 "w_mbytes_per_sec": 0 00:16:12.793 }, 00:16:12.793 "claimed": true, 00:16:12.793 "claim_type": "exclusive_write", 00:16:12.793 "zoned": false, 00:16:12.793 "supported_io_types": { 00:16:12.793 "read": true, 00:16:12.793 "write": true, 00:16:12.793 "unmap": true, 00:16:12.793 "write_zeroes": true, 00:16:12.793 "flush": true, 00:16:12.793 "reset": true, 00:16:12.793 "compare": false, 00:16:12.793 "compare_and_write": false, 00:16:12.793 "abort": true, 00:16:12.793 "nvme_admin": false, 00:16:12.793 "nvme_io": false 00:16:12.793 }, 00:16:12.793 "memory_domains": [ 00:16:12.793 { 00:16:12.793 "dma_device_id": "system", 00:16:12.793 "dma_device_type": 1 00:16:12.793 }, 00:16:12.793 { 00:16:12.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.793 "dma_device_type": 2 00:16:12.793 } 00:16:12.793 ], 00:16:12.793 "driver_specific": {} 00:16:12.793 }' 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:12.793 02:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:13.359 [2024-05-15 02:17:01.087578] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.359 [2024-05-15 02:17:01.087627] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.359 [2024-05-15 02:17:01.087663] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.359 [2024-05-15 02:17:01.087689] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.359 [2024-05-15 02:17:01.087697] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7a8a00 name Existed_Raid, state offline 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 54116 00:16:13.359 02:17:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 54116 ']' 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 54116 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 54116 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:16:13.359 killing process with pid 54116 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54116' 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 54116 00:16:13.359 [2024-05-15 02:17:01.128731] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 54116 00:16:13.359 [2024-05-15 02:17:01.144668] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:16:13.359 00:16:13.359 real 0m26.303s 00:16:13.359 user 0m48.441s 00:16:13.359 sys 0m3.335s 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:13.359 02:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.359 ************************************ 00:16:13.359 END TEST raid_state_function_test_sb 00:16:13.359 ************************************ 00:16:13.359 02:17:01 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:13.359 02:17:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:13.359 02:17:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:13.359 02:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.359 ************************************ 00:16:13.359 START TEST raid_superblock_test 00:16:13.359 ************************************ 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 
00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=54852 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 54852 /var/tmp/spdk-raid.sock 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 54852 ']' 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:13.359 02:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.359 [2024-05-15 02:17:01.357131] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:16:13.359 [2024-05-15 02:17:01.357383] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:13.926 EAL: TSC is not safe to use in SMP mode 00:16:13.926 EAL: TSC is not invariant 00:16:13.926 [2024-05-15 02:17:01.878630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.184 [2024-05-15 02:17:01.969958] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:14.184 [2024-05-15 02:17:01.972522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.184 [2024-05-15 02:17:01.973387] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.184 [2024-05-15 02:17:01.973411] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:14.814 malloc1 00:16:14.814 02:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.072 [2024-05-15 02:17:03.049697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.072 [2024-05-15 02:17:03.049781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.072 [2024-05-15 02:17:03.050680] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b353780 00:16:15.072 [2024-05-15 02:17:03.050721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.072 [2024-05-15 02:17:03.051649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.072 [2024-05-15 02:17:03.051687] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.072 pt1 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.072 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.072 02:17:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:15.638 malloc2 00:16:15.638 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.896 [2024-05-15 02:17:03.765712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.896 [2024-05-15 02:17:03.765795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.896 [2024-05-15 02:17:03.765827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b353c80 00:16:15.896 [2024-05-15 02:17:03.765836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.896 [2024-05-15 02:17:03.766437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.896 [2024-05-15 02:17:03.766470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.896 pt2 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.896 02:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:16.154 malloc3 00:16:16.154 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:16.410 [2024-05-15 02:17:04.325705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:16.410 [2024-05-15 02:17:04.325780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.410 [2024-05-15 02:17:04.325811] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b354180 00:16:16.410 [2024-05-15 02:17:04.325819] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.410 [2024-05-15 02:17:04.326448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.410 [2024-05-15 02:17:04.326483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:16.410 pt3 00:16:16.410 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.410 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.410 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:16.669 [2024-05-15 02:17:04.585724] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.669 [2024-05-15 02:17:04.586264] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.669 [2024-05-15 02:17:04.586296] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.669 [2024-05-15 02:17:04.586350] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b354400 00:16:16.669 [2024-05-15 02:17:04.586355] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:16.669 [2024-05-15 02:17:04.586412] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b3b6e20 00:16:16.669 [2024-05-15 02:17:04.586497] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b354400 00:16:16.669 [2024-05-15 02:17:04.586507] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b354400 00:16:16.669 [2024-05-15 02:17:04.586548] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.669 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.927 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.927 "name": "raid_bdev1", 00:16:16.927 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:16.927 "strip_size_kb": 64, 00:16:16.927 "state": "online", 00:16:16.927 "raid_level": "concat", 00:16:16.927 "superblock": true, 00:16:16.927 "num_base_bdevs": 3, 00:16:16.927 "num_base_bdevs_discovered": 3, 00:16:16.927 "num_base_bdevs_operational": 3, 00:16:16.927 "base_bdevs_list": [ 00:16:16.927 { 00:16:16.927 "name": "pt1", 00:16:16.927 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:16.927 "is_configured": true, 00:16:16.927 "data_offset": 2048, 00:16:16.927 "data_size": 63488 00:16:16.927 }, 00:16:16.927 { 00:16:16.927 "name": "pt2", 00:16:16.927 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:16.927 "is_configured": true, 00:16:16.927 
"data_offset": 2048, 00:16:16.927 "data_size": 63488 00:16:16.927 }, 00:16:16.927 { 00:16:16.927 "name": "pt3", 00:16:16.928 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:16.928 "is_configured": true, 00:16:16.928 "data_offset": 2048, 00:16:16.928 "data_size": 63488 00:16:16.928 } 00:16:16.928 ] 00:16:16.928 }' 00:16:16.928 02:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.928 02:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.199 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.199 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:17.200 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:17.200 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:17.200 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:17.200 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:17.200 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:17.200 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:17.765 [2024-05-15 02:17:05.485784] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.765 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:17.765 "name": "raid_bdev1", 00:16:17.765 "aliases": [ 00:16:17.765 "38ccef57-1261-11ef-99fd-bfc7c66e2865" 00:16:17.765 ], 00:16:17.765 "product_name": "Raid Volume", 00:16:17.765 "block_size": 512, 00:16:17.765 "num_blocks": 190464, 00:16:17.765 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:17.765 "assigned_rate_limits": { 00:16:17.765 "rw_ios_per_sec": 0, 00:16:17.765 "rw_mbytes_per_sec": 0, 00:16:17.765 "r_mbytes_per_sec": 0, 00:16:17.765 "w_mbytes_per_sec": 0 00:16:17.765 }, 00:16:17.765 "claimed": false, 00:16:17.765 "zoned": false, 00:16:17.765 "supported_io_types": { 00:16:17.765 "read": true, 00:16:17.765 "write": true, 00:16:17.765 "unmap": true, 00:16:17.765 "write_zeroes": true, 00:16:17.765 "flush": true, 00:16:17.765 "reset": true, 00:16:17.765 "compare": false, 00:16:17.765 "compare_and_write": false, 00:16:17.765 "abort": false, 00:16:17.765 "nvme_admin": false, 00:16:17.765 "nvme_io": false 00:16:17.765 }, 00:16:17.765 "memory_domains": [ 00:16:17.765 { 00:16:17.765 "dma_device_id": "system", 00:16:17.765 "dma_device_type": 1 00:16:17.765 }, 00:16:17.765 { 00:16:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.765 "dma_device_type": 2 00:16:17.765 }, 00:16:17.765 { 00:16:17.765 "dma_device_id": "system", 00:16:17.765 "dma_device_type": 1 00:16:17.765 }, 00:16:17.765 { 00:16:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.765 "dma_device_type": 2 00:16:17.765 }, 00:16:17.765 { 00:16:17.765 "dma_device_id": "system", 00:16:17.765 "dma_device_type": 1 00:16:17.765 }, 00:16:17.765 { 00:16:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.765 "dma_device_type": 2 00:16:17.765 } 00:16:17.765 ], 00:16:17.765 "driver_specific": { 00:16:17.765 "raid": { 00:16:17.765 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:17.765 "strip_size_kb": 64, 00:16:17.765 "state": "online", 00:16:17.765 "raid_level": "concat", 
00:16:17.765 "superblock": true, 00:16:17.765 "num_base_bdevs": 3, 00:16:17.765 "num_base_bdevs_discovered": 3, 00:16:17.765 "num_base_bdevs_operational": 3, 00:16:17.765 "base_bdevs_list": [ 00:16:17.765 { 00:16:17.765 "name": "pt1", 00:16:17.765 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:17.765 "is_configured": true, 00:16:17.765 "data_offset": 2048, 00:16:17.765 "data_size": 63488 00:16:17.765 }, 00:16:17.765 { 00:16:17.765 "name": "pt2", 00:16:17.765 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:17.765 "is_configured": true, 00:16:17.765 "data_offset": 2048, 00:16:17.765 "data_size": 63488 00:16:17.765 }, 00:16:17.765 { 00:16:17.765 "name": "pt3", 00:16:17.766 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:17.766 "is_configured": true, 00:16:17.766 "data_offset": 2048, 00:16:17.766 "data_size": 63488 00:16:17.766 } 00:16:17.766 ] 00:16:17.766 } 00:16:17.766 } 00:16:17.766 }' 00:16:17.766 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.766 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:17.766 pt2 00:16:17.766 pt3' 00:16:17.766 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:17.766 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:17.766 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:18.026 "name": "pt1", 00:16:18.026 "aliases": [ 00:16:18.026 "794ecda4-768f-e850-adb5-507ca497e8b1" 00:16:18.026 ], 00:16:18.026 "product_name": "passthru", 00:16:18.026 "block_size": 512, 00:16:18.026 "num_blocks": 65536, 00:16:18.026 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:18.026 "assigned_rate_limits": { 00:16:18.026 "rw_ios_per_sec": 0, 00:16:18.026 "rw_mbytes_per_sec": 0, 00:16:18.026 "r_mbytes_per_sec": 0, 00:16:18.026 "w_mbytes_per_sec": 0 00:16:18.026 }, 00:16:18.026 "claimed": true, 00:16:18.026 "claim_type": "exclusive_write", 00:16:18.026 "zoned": false, 00:16:18.026 "supported_io_types": { 00:16:18.026 "read": true, 00:16:18.026 "write": true, 00:16:18.026 "unmap": true, 00:16:18.026 "write_zeroes": true, 00:16:18.026 "flush": true, 00:16:18.026 "reset": true, 00:16:18.026 "compare": false, 00:16:18.026 "compare_and_write": false, 00:16:18.026 "abort": true, 00:16:18.026 "nvme_admin": false, 00:16:18.026 "nvme_io": false 00:16:18.026 }, 00:16:18.026 "memory_domains": [ 00:16:18.026 { 00:16:18.026 "dma_device_id": "system", 00:16:18.026 "dma_device_type": 1 00:16:18.026 }, 00:16:18.026 { 00:16:18.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.026 "dma_device_type": 2 00:16:18.026 } 00:16:18.026 ], 00:16:18.026 "driver_specific": { 00:16:18.026 "passthru": { 00:16:18.026 "name": "pt1", 00:16:18.026 "base_bdev_name": "malloc1" 00:16:18.026 } 00:16:18.026 } 00:16:18.026 }' 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.026 
02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:18.026 02:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:18.285 "name": "pt2", 00:16:18.285 "aliases": [ 00:16:18.285 "8fd92ef3-9da4-195a-a44a-1f3558dc2b11" 00:16:18.285 ], 00:16:18.285 "product_name": "passthru", 00:16:18.285 "block_size": 512, 00:16:18.285 "num_blocks": 65536, 00:16:18.285 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:18.285 "assigned_rate_limits": { 00:16:18.285 "rw_ios_per_sec": 0, 00:16:18.285 "rw_mbytes_per_sec": 0, 00:16:18.285 "r_mbytes_per_sec": 0, 00:16:18.285 "w_mbytes_per_sec": 0 00:16:18.285 }, 00:16:18.285 "claimed": true, 00:16:18.285 "claim_type": "exclusive_write", 00:16:18.285 "zoned": false, 00:16:18.285 "supported_io_types": { 00:16:18.285 "read": true, 00:16:18.285 "write": true, 00:16:18.285 "unmap": true, 00:16:18.285 "write_zeroes": true, 00:16:18.285 "flush": true, 00:16:18.285 "reset": true, 00:16:18.285 "compare": false, 00:16:18.285 "compare_and_write": false, 00:16:18.285 "abort": true, 00:16:18.285 "nvme_admin": false, 00:16:18.285 "nvme_io": false 00:16:18.285 }, 00:16:18.285 "memory_domains": [ 00:16:18.285 { 00:16:18.285 "dma_device_id": "system", 00:16:18.285 "dma_device_type": 1 00:16:18.285 }, 00:16:18.285 { 00:16:18.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.285 "dma_device_type": 2 00:16:18.285 } 00:16:18.285 ], 00:16:18.285 "driver_specific": { 00:16:18.285 "passthru": { 00:16:18.285 "name": "pt2", 00:16:18.285 "base_bdev_name": "malloc2" 00:16:18.285 } 00:16:18.285 } 00:16:18.285 }' 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.285 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.286 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:18.286 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:18.286 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:18.286 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:18.544 "name": "pt3", 00:16:18.544 "aliases": [ 00:16:18.544 "c62d88dc-e108-ff5f-a179-7b8c2e48e525" 00:16:18.544 ], 00:16:18.544 "product_name": "passthru", 00:16:18.544 "block_size": 512, 00:16:18.544 "num_blocks": 65536, 00:16:18.544 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:18.544 "assigned_rate_limits": { 00:16:18.544 "rw_ios_per_sec": 0, 00:16:18.544 "rw_mbytes_per_sec": 0, 00:16:18.544 "r_mbytes_per_sec": 0, 00:16:18.544 "w_mbytes_per_sec": 0 00:16:18.544 }, 00:16:18.544 "claimed": true, 00:16:18.544 "claim_type": "exclusive_write", 00:16:18.544 "zoned": false, 00:16:18.544 "supported_io_types": { 00:16:18.544 "read": true, 00:16:18.544 "write": true, 00:16:18.544 "unmap": true, 00:16:18.544 "write_zeroes": true, 00:16:18.544 "flush": true, 00:16:18.544 "reset": true, 00:16:18.544 "compare": false, 00:16:18.544 "compare_and_write": false, 00:16:18.544 "abort": true, 00:16:18.544 "nvme_admin": false, 00:16:18.544 "nvme_io": false 00:16:18.544 }, 00:16:18.544 "memory_domains": [ 00:16:18.544 { 00:16:18.544 "dma_device_id": "system", 00:16:18.544 "dma_device_type": 1 00:16:18.544 }, 00:16:18.544 { 00:16:18.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.544 "dma_device_type": 2 00:16:18.544 } 00:16:18.544 ], 00:16:18.544 "driver_specific": { 00:16:18.544 "passthru": { 00:16:18.544 "name": "pt3", 00:16:18.544 "base_bdev_name": "malloc3" 00:16:18.544 } 00:16:18.544 } 00:16:18.544 }' 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:18.544 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:19.112 [2024-05-15 02:17:06.885820] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.112 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=38ccef57-1261-11ef-99fd-bfc7c66e2865 00:16:19.112 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 38ccef57-1261-11ef-99fd-bfc7c66e2865 ']' 00:16:19.112 02:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.112 [2024-05-15 02:17:07.125792] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.112 [2024-05-15 02:17:07.125828] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.112 [2024-05-15 02:17:07.125853] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.112 [2024-05-15 02:17:07.125868] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.112 [2024-05-15 02:17:07.125873] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b354400 name raid_bdev1, state offline 00:16:19.371 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.371 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:19.629 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:19.629 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:19.629 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.629 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:19.629 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.629 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:19.887 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.887 02:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:20.167 02:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:20.167 02:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 
-- # local es=0 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:20.736 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:20.995 [2024-05-15 02:17:08.757890] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:20.995 [2024-05-15 02:17:08.758363] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:20.995 [2024-05-15 02:17:08.758383] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:20.995 [2024-05-15 02:17:08.758414] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:20.995 [2024-05-15 02:17:08.758463] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:20.995 [2024-05-15 02:17:08.758474] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:20.995 [2024-05-15 02:17:08.758482] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.995 [2024-05-15 02:17:08.758487] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b354180 name raid_bdev1, state configuring 00:16:20.995 request: 00:16:20.995 { 00:16:20.995 "name": "raid_bdev1", 00:16:20.995 "raid_level": "concat", 00:16:20.995 "base_bdevs": [ 00:16:20.995 "malloc1", 00:16:20.995 "malloc2", 00:16:20.995 "malloc3" 00:16:20.995 ], 00:16:20.995 "superblock": false, 00:16:20.995 "strip_size_kb": 64, 00:16:20.995 "method": "bdev_raid_create", 00:16:20.995 "req_id": 1 00:16:20.995 } 00:16:20.995 Got JSON-RPC error response 00:16:20.995 response: 00:16:20.995 { 00:16:20.995 "code": -17, 00:16:20.995 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:20.995 } 00:16:20.995 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:20.995 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:20.995 02:17:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:20.995 02:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:20.995 02:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.995 02:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:21.255 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:21.255 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:21.255 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.518 [2024-05-15 02:17:09.461901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.518 [2024-05-15 02:17:09.461969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.518 [2024-05-15 02:17:09.462002] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b353c80 00:16:21.518 [2024-05-15 02:17:09.462011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.518 [2024-05-15 02:17:09.462592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.518 [2024-05-15 02:17:09.462625] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.518 [2024-05-15 02:17:09.462651] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.518 [2024-05-15 02:17:09.462663] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.518 pt1 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.518 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.776 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.776 "name": "raid_bdev1", 00:16:21.776 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:21.776 "strip_size_kb": 64, 00:16:21.776 "state": 
"configuring", 00:16:21.776 "raid_level": "concat", 00:16:21.776 "superblock": true, 00:16:21.776 "num_base_bdevs": 3, 00:16:21.776 "num_base_bdevs_discovered": 1, 00:16:21.776 "num_base_bdevs_operational": 3, 00:16:21.776 "base_bdevs_list": [ 00:16:21.776 { 00:16:21.776 "name": "pt1", 00:16:21.776 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:21.776 "is_configured": true, 00:16:21.776 "data_offset": 2048, 00:16:21.776 "data_size": 63488 00:16:21.776 }, 00:16:21.776 { 00:16:21.776 "name": null, 00:16:21.776 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:21.776 "is_configured": false, 00:16:21.776 "data_offset": 2048, 00:16:21.776 "data_size": 63488 00:16:21.776 }, 00:16:21.776 { 00:16:21.776 "name": null, 00:16:21.776 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:21.776 "is_configured": false, 00:16:21.776 "data_offset": 2048, 00:16:21.776 "data_size": 63488 00:16:21.776 } 00:16:21.776 ] 00:16:21.776 }' 00:16:21.776 02:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.776 02:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.345 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:22.345 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.604 [2024-05-15 02:17:10.501949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.604 [2024-05-15 02:17:10.502020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.604 [2024-05-15 02:17:10.502052] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b354680 00:16:22.604 [2024-05-15 02:17:10.502061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.604 [2024-05-15 02:17:10.502166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.604 [2024-05-15 02:17:10.502175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.604 [2024-05-15 02:17:10.502199] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.604 [2024-05-15 02:17:10.502206] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.604 pt2 00:16:22.604 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:22.862 [2024-05-15 02:17:10.841968] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.862 
02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.862 02:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.430 02:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.430 "name": "raid_bdev1", 00:16:23.430 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:23.430 "strip_size_kb": 64, 00:16:23.430 "state": "configuring", 00:16:23.430 "raid_level": "concat", 00:16:23.430 "superblock": true, 00:16:23.430 "num_base_bdevs": 3, 00:16:23.430 "num_base_bdevs_discovered": 1, 00:16:23.430 "num_base_bdevs_operational": 3, 00:16:23.430 "base_bdevs_list": [ 00:16:23.430 { 00:16:23.430 "name": "pt1", 00:16:23.430 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:23.430 "is_configured": true, 00:16:23.430 "data_offset": 2048, 00:16:23.430 "data_size": 63488 00:16:23.430 }, 00:16:23.430 { 00:16:23.430 "name": null, 00:16:23.431 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:23.431 "is_configured": false, 00:16:23.431 "data_offset": 2048, 00:16:23.431 "data_size": 63488 00:16:23.431 }, 00:16:23.431 { 00:16:23.431 "name": null, 00:16:23.431 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:23.431 "is_configured": false, 00:16:23.431 "data_offset": 2048, 00:16:23.431 "data_size": 63488 00:16:23.431 } 00:16:23.431 ] 00:16:23.431 }' 00:16:23.431 02:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.431 02:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.689 02:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:23.689 02:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.689 02:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.950 [2024-05-15 02:17:11.882057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.950 [2024-05-15 02:17:11.882154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.950 [2024-05-15 02:17:11.882195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b354680 00:16:23.950 [2024-05-15 02:17:11.882212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.950 [2024-05-15 02:17:11.882334] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.950 [2024-05-15 02:17:11.882355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.950 [2024-05-15 02:17:11.882457] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:23.950 [2024-05-15 02:17:11.882474] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.950 pt2 00:16:23.950 02:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:23.950 02:17:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.950 02:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:24.213 [2024-05-15 02:17:12.214095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:24.213 [2024-05-15 02:17:12.214211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.213 [2024-05-15 02:17:12.214262] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b354400 00:16:24.213 [2024-05-15 02:17:12.214284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.213 [2024-05-15 02:17:12.214496] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.213 [2024-05-15 02:17:12.214532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:24.213 [2024-05-15 02:17:12.214579] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:24.213 [2024-05-15 02:17:12.214598] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:24.213 [2024-05-15 02:17:12.214650] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b353780 00:16:24.213 [2024-05-15 02:17:12.214660] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.213 [2024-05-15 02:17:12.214705] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b3b6e20 00:16:24.213 [2024-05-15 02:17:12.214793] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b353780 00:16:24.213 [2024-05-15 02:17:12.214800] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b353780 00:16:24.213 [2024-05-15 02:17:12.214839] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.213 pt3 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:24.481 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.754 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.754 "name": "raid_bdev1", 00:16:24.754 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:24.754 "strip_size_kb": 64, 00:16:24.754 "state": "online", 00:16:24.754 "raid_level": "concat", 00:16:24.754 "superblock": true, 00:16:24.754 "num_base_bdevs": 3, 00:16:24.754 "num_base_bdevs_discovered": 3, 00:16:24.754 "num_base_bdevs_operational": 3, 00:16:24.754 "base_bdevs_list": [ 00:16:24.754 { 00:16:24.754 "name": "pt1", 00:16:24.754 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:24.754 "is_configured": true, 00:16:24.754 "data_offset": 2048, 00:16:24.754 "data_size": 63488 00:16:24.754 }, 00:16:24.754 { 00:16:24.754 "name": "pt2", 00:16:24.754 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:24.754 "is_configured": true, 00:16:24.754 "data_offset": 2048, 00:16:24.754 "data_size": 63488 00:16:24.754 }, 00:16:24.754 { 00:16:24.754 "name": "pt3", 00:16:24.754 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:24.754 "is_configured": true, 00:16:24.754 "data_offset": 2048, 00:16:24.754 "data_size": 63488 00:16:24.754 } 00:16:24.754 ] 00:16:24.754 }' 00:16:24.754 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.754 02:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.028 02:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:25.305 [2024-05-15 02:17:13.238130] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.305 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:25.305 "name": "raid_bdev1", 00:16:25.305 "aliases": [ 00:16:25.305 "38ccef57-1261-11ef-99fd-bfc7c66e2865" 00:16:25.305 ], 00:16:25.305 "product_name": "Raid Volume", 00:16:25.305 "block_size": 512, 00:16:25.305 "num_blocks": 190464, 00:16:25.305 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:25.305 "assigned_rate_limits": { 00:16:25.305 "rw_ios_per_sec": 0, 00:16:25.305 "rw_mbytes_per_sec": 0, 00:16:25.305 "r_mbytes_per_sec": 0, 00:16:25.305 "w_mbytes_per_sec": 0 00:16:25.305 }, 00:16:25.305 "claimed": false, 00:16:25.305 "zoned": false, 00:16:25.305 "supported_io_types": { 00:16:25.305 "read": true, 00:16:25.305 "write": true, 00:16:25.305 "unmap": true, 00:16:25.305 "write_zeroes": true, 00:16:25.305 "flush": true, 00:16:25.305 "reset": true, 00:16:25.305 "compare": false, 00:16:25.305 "compare_and_write": false, 00:16:25.305 "abort": false, 
00:16:25.305 "nvme_admin": false, 00:16:25.305 "nvme_io": false 00:16:25.305 }, 00:16:25.305 "memory_domains": [ 00:16:25.305 { 00:16:25.305 "dma_device_id": "system", 00:16:25.305 "dma_device_type": 1 00:16:25.305 }, 00:16:25.305 { 00:16:25.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.305 "dma_device_type": 2 00:16:25.305 }, 00:16:25.305 { 00:16:25.305 "dma_device_id": "system", 00:16:25.305 "dma_device_type": 1 00:16:25.305 }, 00:16:25.305 { 00:16:25.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.305 "dma_device_type": 2 00:16:25.305 }, 00:16:25.305 { 00:16:25.305 "dma_device_id": "system", 00:16:25.305 "dma_device_type": 1 00:16:25.305 }, 00:16:25.305 { 00:16:25.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.305 "dma_device_type": 2 00:16:25.305 } 00:16:25.305 ], 00:16:25.305 "driver_specific": { 00:16:25.305 "raid": { 00:16:25.305 "uuid": "38ccef57-1261-11ef-99fd-bfc7c66e2865", 00:16:25.305 "strip_size_kb": 64, 00:16:25.305 "state": "online", 00:16:25.305 "raid_level": "concat", 00:16:25.305 "superblock": true, 00:16:25.305 "num_base_bdevs": 3, 00:16:25.305 "num_base_bdevs_discovered": 3, 00:16:25.305 "num_base_bdevs_operational": 3, 00:16:25.305 "base_bdevs_list": [ 00:16:25.305 { 00:16:25.305 "name": "pt1", 00:16:25.305 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:25.305 "is_configured": true, 00:16:25.305 "data_offset": 2048, 00:16:25.305 "data_size": 63488 00:16:25.305 }, 00:16:25.305 { 00:16:25.305 "name": "pt2", 00:16:25.305 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:25.305 "is_configured": true, 00:16:25.305 "data_offset": 2048, 00:16:25.305 "data_size": 63488 00:16:25.305 }, 00:16:25.305 { 00:16:25.305 "name": "pt3", 00:16:25.305 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:25.305 "is_configured": true, 00:16:25.305 "data_offset": 2048, 00:16:25.305 "data_size": 63488 00:16:25.305 } 00:16:25.305 ] 00:16:25.305 } 00:16:25.305 } 00:16:25.305 }' 00:16:25.305 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.305 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:25.305 pt2 00:16:25.305 pt3' 00:16:25.305 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:25.305 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:25.305 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:25.590 "name": "pt1", 00:16:25.590 "aliases": [ 00:16:25.590 "794ecda4-768f-e850-adb5-507ca497e8b1" 00:16:25.590 ], 00:16:25.590 "product_name": "passthru", 00:16:25.590 "block_size": 512, 00:16:25.590 "num_blocks": 65536, 00:16:25.590 "uuid": "794ecda4-768f-e850-adb5-507ca497e8b1", 00:16:25.590 "assigned_rate_limits": { 00:16:25.590 "rw_ios_per_sec": 0, 00:16:25.590 "rw_mbytes_per_sec": 0, 00:16:25.590 "r_mbytes_per_sec": 0, 00:16:25.590 "w_mbytes_per_sec": 0 00:16:25.590 }, 00:16:25.590 "claimed": true, 00:16:25.590 "claim_type": "exclusive_write", 00:16:25.590 "zoned": false, 00:16:25.590 "supported_io_types": { 00:16:25.590 "read": true, 00:16:25.590 "write": true, 00:16:25.590 "unmap": true, 00:16:25.590 "write_zeroes": true, 00:16:25.590 "flush": true, 00:16:25.590 "reset": true, 00:16:25.590 
"compare": false, 00:16:25.590 "compare_and_write": false, 00:16:25.590 "abort": true, 00:16:25.590 "nvme_admin": false, 00:16:25.590 "nvme_io": false 00:16:25.590 }, 00:16:25.590 "memory_domains": [ 00:16:25.590 { 00:16:25.590 "dma_device_id": "system", 00:16:25.590 "dma_device_type": 1 00:16:25.590 }, 00:16:25.590 { 00:16:25.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.590 "dma_device_type": 2 00:16:25.590 } 00:16:25.590 ], 00:16:25.590 "driver_specific": { 00:16:25.590 "passthru": { 00:16:25.590 "name": "pt1", 00:16:25.590 "base_bdev_name": "malloc1" 00:16:25.590 } 00:16:25.590 } 00:16:25.590 }' 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:25.590 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:25.874 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:25.874 "name": "pt2", 00:16:25.874 "aliases": [ 00:16:25.874 "8fd92ef3-9da4-195a-a44a-1f3558dc2b11" 00:16:25.874 ], 00:16:25.874 "product_name": "passthru", 00:16:25.874 "block_size": 512, 00:16:25.874 "num_blocks": 65536, 00:16:25.874 "uuid": "8fd92ef3-9da4-195a-a44a-1f3558dc2b11", 00:16:25.874 "assigned_rate_limits": { 00:16:25.874 "rw_ios_per_sec": 0, 00:16:25.874 "rw_mbytes_per_sec": 0, 00:16:25.874 "r_mbytes_per_sec": 0, 00:16:25.874 "w_mbytes_per_sec": 0 00:16:25.874 }, 00:16:25.874 "claimed": true, 00:16:25.874 "claim_type": "exclusive_write", 00:16:25.874 "zoned": false, 00:16:25.874 "supported_io_types": { 00:16:25.874 "read": true, 00:16:25.874 "write": true, 00:16:25.874 "unmap": true, 00:16:25.874 "write_zeroes": true, 00:16:25.874 "flush": true, 00:16:25.874 "reset": true, 00:16:25.874 "compare": false, 00:16:25.874 "compare_and_write": false, 00:16:25.874 "abort": true, 00:16:25.874 "nvme_admin": false, 00:16:25.874 "nvme_io": false 00:16:25.874 }, 00:16:25.874 "memory_domains": [ 00:16:25.874 { 00:16:25.874 "dma_device_id": "system", 00:16:25.874 "dma_device_type": 1 00:16:25.874 }, 00:16:25.874 { 00:16:25.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.874 "dma_device_type": 2 00:16:25.874 } 00:16:25.874 ], 
00:16:25.874 "driver_specific": { 00:16:25.874 "passthru": { 00:16:25.874 "name": "pt2", 00:16:25.874 "base_bdev_name": "malloc2" 00:16:25.874 } 00:16:25.874 } 00:16:25.874 }' 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:25.875 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:26.139 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:26.139 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:26.139 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:26.139 02:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:26.139 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:26.139 "name": "pt3", 00:16:26.139 "aliases": [ 00:16:26.139 "c62d88dc-e108-ff5f-a179-7b8c2e48e525" 00:16:26.139 ], 00:16:26.139 "product_name": "passthru", 00:16:26.139 "block_size": 512, 00:16:26.139 "num_blocks": 65536, 00:16:26.139 "uuid": "c62d88dc-e108-ff5f-a179-7b8c2e48e525", 00:16:26.139 "assigned_rate_limits": { 00:16:26.139 "rw_ios_per_sec": 0, 00:16:26.139 "rw_mbytes_per_sec": 0, 00:16:26.139 "r_mbytes_per_sec": 0, 00:16:26.139 "w_mbytes_per_sec": 0 00:16:26.139 }, 00:16:26.139 "claimed": true, 00:16:26.139 "claim_type": "exclusive_write", 00:16:26.139 "zoned": false, 00:16:26.139 "supported_io_types": { 00:16:26.139 "read": true, 00:16:26.139 "write": true, 00:16:26.139 "unmap": true, 00:16:26.139 "write_zeroes": true, 00:16:26.139 "flush": true, 00:16:26.139 "reset": true, 00:16:26.139 "compare": false, 00:16:26.139 "compare_and_write": false, 00:16:26.139 "abort": true, 00:16:26.139 "nvme_admin": false, 00:16:26.139 "nvme_io": false 00:16:26.139 }, 00:16:26.139 "memory_domains": [ 00:16:26.139 { 00:16:26.139 "dma_device_id": "system", 00:16:26.139 "dma_device_type": 1 00:16:26.139 }, 00:16:26.139 { 00:16:26.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.139 "dma_device_type": 2 00:16:26.139 } 00:16:26.139 ], 00:16:26.139 "driver_specific": { 00:16:26.139 "passthru": { 00:16:26.139 "name": "pt3", 00:16:26.139 "base_bdev_name": "malloc3" 00:16:26.139 } 00:16:26.139 } 00:16:26.139 }' 00:16:26.139 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:26.140 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:26.140 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 
-- # [[ 512 == 512 ]] 00:16:26.140 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:26.398 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:26.656 [2024-05-15 02:17:14.470177] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 38ccef57-1261-11ef-99fd-bfc7c66e2865 '!=' 38ccef57-1261-11ef-99fd-bfc7c66e2865 ']' 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 54852 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 54852 ']' 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 54852 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 54852 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:16:26.656 killing process with pid 54852 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54852' 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 54852 00:16:26.656 [2024-05-15 02:17:14.514953] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.656 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 54852 00:16:26.656 [2024-05-15 02:17:14.515012] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.656 [2024-05-15 02:17:14.515040] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.656 [2024-05-15 02:17:14.515054] 
bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b353780 name raid_bdev1, state offline 00:16:26.656 [2024-05-15 02:17:14.530791] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.915 02:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:26.915 ************************************ 00:16:26.915 END TEST raid_superblock_test 00:16:26.915 ************************************ 00:16:26.915 00:16:26.915 real 0m13.354s 00:16:26.915 user 0m23.842s 00:16:26.915 sys 0m2.083s 00:16:26.915 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:26.915 02:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.915 02:17:14 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:16:26.915 02:17:14 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:26.915 02:17:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:26.915 02:17:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.915 02:17:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.915 ************************************ 00:16:26.915 START TEST raid_state_function_test 00:16:26.915 ************************************ 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 
00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=55209 00:16:26.915 Process raid pid: 55209 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55209' 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 55209 /var/tmp/spdk-raid.sock 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 55209 ']' 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:26.915 02:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.915 [2024-05-15 02:17:14.753331] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:16:26.915 [2024-05-15 02:17:14.753584] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:27.483 EAL: TSC is not safe to use in SMP mode 00:16:27.483 EAL: TSC is not invariant 00:16:27.483 [2024-05-15 02:17:15.244621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.483 [2024-05-15 02:17:15.340113] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:27.483 [2024-05-15 02:17:15.342313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.483 [2024-05-15 02:17:15.343069] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.483 [2024-05-15 02:17:15.343075] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.051 02:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.051 02:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:16:28.051 02:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:28.051 [2024-05-15 02:17:16.054575] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.051 [2024-05-15 02:17:16.054646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.051 [2024-05-15 02:17:16.054651] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.051 [2024-05-15 02:17:16.054660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.051 [2024-05-15 02:17:16.054672] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.051 [2024-05-15 02:17:16.054680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.309 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.568 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.568 "name": "Existed_Raid", 00:16:28.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.568 "strip_size_kb": 0, 00:16:28.568 "state": "configuring", 00:16:28.568 "raid_level": "raid1", 00:16:28.568 "superblock": false, 00:16:28.568 "num_base_bdevs": 3, 00:16:28.568 "num_base_bdevs_discovered": 0, 00:16:28.568 "num_base_bdevs_operational": 3, 00:16:28.568 "base_bdevs_list": [ 
00:16:28.568 { 00:16:28.568 "name": "BaseBdev1", 00:16:28.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.568 "is_configured": false, 00:16:28.568 "data_offset": 0, 00:16:28.568 "data_size": 0 00:16:28.568 }, 00:16:28.568 { 00:16:28.568 "name": "BaseBdev2", 00:16:28.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.568 "is_configured": false, 00:16:28.568 "data_offset": 0, 00:16:28.568 "data_size": 0 00:16:28.568 }, 00:16:28.568 { 00:16:28.568 "name": "BaseBdev3", 00:16:28.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.568 "is_configured": false, 00:16:28.568 "data_offset": 0, 00:16:28.568 "data_size": 0 00:16:28.568 } 00:16:28.568 ] 00:16:28.568 }' 00:16:28.568 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.568 02:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.827 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.085 [2024-05-15 02:17:16.938622] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.085 [2024-05-15 02:17:16.938672] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b96f500 name Existed_Raid, state configuring 00:16:29.085 02:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:29.344 [2024-05-15 02:17:17.246661] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.344 [2024-05-15 02:17:17.246752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.344 [2024-05-15 02:17:17.246759] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.344 [2024-05-15 02:17:17.246771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.344 [2024-05-15 02:17:17.246777] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.344 [2024-05-15 02:17:17.246788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.344 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.603 [2024-05-15 02:17:17.491629] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.603 BaseBdev1 00:16:29.603 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:16:29.603 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:29.603 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:29.603 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:29.603 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:29.603 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:29.604 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.862 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.121 [ 00:16:30.121 { 00:16:30.121 "name": "BaseBdev1", 00:16:30.121 "aliases": [ 00:16:30.121 "407e1242-1261-11ef-99fd-bfc7c66e2865" 00:16:30.121 ], 00:16:30.121 "product_name": "Malloc disk", 00:16:30.121 "block_size": 512, 00:16:30.121 "num_blocks": 65536, 00:16:30.121 "uuid": "407e1242-1261-11ef-99fd-bfc7c66e2865", 00:16:30.121 "assigned_rate_limits": { 00:16:30.121 "rw_ios_per_sec": 0, 00:16:30.121 "rw_mbytes_per_sec": 0, 00:16:30.121 "r_mbytes_per_sec": 0, 00:16:30.121 "w_mbytes_per_sec": 0 00:16:30.121 }, 00:16:30.121 "claimed": true, 00:16:30.121 "claim_type": "exclusive_write", 00:16:30.121 "zoned": false, 00:16:30.121 "supported_io_types": { 00:16:30.121 "read": true, 00:16:30.121 "write": true, 00:16:30.121 "unmap": true, 00:16:30.121 "write_zeroes": true, 00:16:30.121 "flush": true, 00:16:30.121 "reset": true, 00:16:30.121 "compare": false, 00:16:30.121 "compare_and_write": false, 00:16:30.121 "abort": true, 00:16:30.121 "nvme_admin": false, 00:16:30.121 "nvme_io": false 00:16:30.121 }, 00:16:30.121 "memory_domains": [ 00:16:30.121 { 00:16:30.121 "dma_device_id": "system", 00:16:30.121 "dma_device_type": 1 00:16:30.121 }, 00:16:30.121 { 00:16:30.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.121 "dma_device_type": 2 00:16:30.121 } 00:16:30.121 ], 00:16:30.121 "driver_specific": {} 00:16:30.121 } 00:16:30.121 ] 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.121 02:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.379 02:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.379 "name": "Existed_Raid", 00:16:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.379 "strip_size_kb": 0, 00:16:30.379 "state": "configuring", 00:16:30.379 "raid_level": "raid1", 00:16:30.379 "superblock": false, 00:16:30.379 
"num_base_bdevs": 3, 00:16:30.379 "num_base_bdevs_discovered": 1, 00:16:30.379 "num_base_bdevs_operational": 3, 00:16:30.379 "base_bdevs_list": [ 00:16:30.379 { 00:16:30.379 "name": "BaseBdev1", 00:16:30.379 "uuid": "407e1242-1261-11ef-99fd-bfc7c66e2865", 00:16:30.379 "is_configured": true, 00:16:30.379 "data_offset": 0, 00:16:30.379 "data_size": 65536 00:16:30.379 }, 00:16:30.379 { 00:16:30.379 "name": "BaseBdev2", 00:16:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.379 "is_configured": false, 00:16:30.379 "data_offset": 0, 00:16:30.379 "data_size": 0 00:16:30.379 }, 00:16:30.379 { 00:16:30.379 "name": "BaseBdev3", 00:16:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.379 "is_configured": false, 00:16:30.379 "data_offset": 0, 00:16:30.379 "data_size": 0 00:16:30.379 } 00:16:30.379 ] 00:16:30.379 }' 00:16:30.379 02:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.379 02:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.638 02:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.897 [2024-05-15 02:17:18.762678] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.897 [2024-05-15 02:17:18.762718] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b96f500 name Existed_Raid, state configuring 00:16:30.897 02:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.156 [2024-05-15 02:17:18.982734] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.156 [2024-05-15 02:17:18.983643] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.156 [2024-05-15 02:17:18.983712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.156 [2024-05-15 02:17:18.983720] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.156 [2024-05-15 02:17:18.983735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.156 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.416 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.416 "name": "Existed_Raid", 00:16:31.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.416 "strip_size_kb": 0, 00:16:31.416 "state": "configuring", 00:16:31.416 "raid_level": "raid1", 00:16:31.416 "superblock": false, 00:16:31.416 "num_base_bdevs": 3, 00:16:31.416 "num_base_bdevs_discovered": 1, 00:16:31.416 "num_base_bdevs_operational": 3, 00:16:31.416 "base_bdevs_list": [ 00:16:31.416 { 00:16:31.416 "name": "BaseBdev1", 00:16:31.416 "uuid": "407e1242-1261-11ef-99fd-bfc7c66e2865", 00:16:31.416 "is_configured": true, 00:16:31.416 "data_offset": 0, 00:16:31.416 "data_size": 65536 00:16:31.416 }, 00:16:31.416 { 00:16:31.416 "name": "BaseBdev2", 00:16:31.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.416 "is_configured": false, 00:16:31.416 "data_offset": 0, 00:16:31.416 "data_size": 0 00:16:31.416 }, 00:16:31.416 { 00:16:31.416 "name": "BaseBdev3", 00:16:31.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.416 "is_configured": false, 00:16:31.416 "data_offset": 0, 00:16:31.416 "data_size": 0 00:16:31.416 } 00:16:31.416 ] 00:16:31.416 }' 00:16:31.416 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.416 02:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.675 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.937 [2024-05-15 02:17:19.854876] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.937 BaseBdev2 00:16:31.937 02:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:16:31.937 02:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:31.937 02:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:31.937 02:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:31.937 02:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:31.937 02:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:31.937 02:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.199 02:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.457 [ 00:16:32.457 { 00:16:32.457 "name": "BaseBdev2", 00:16:32.457 "aliases": [ 00:16:32.457 "41e6ce79-1261-11ef-99fd-bfc7c66e2865" 00:16:32.457 ], 00:16:32.457 "product_name": "Malloc 
disk", 00:16:32.457 "block_size": 512, 00:16:32.457 "num_blocks": 65536, 00:16:32.457 "uuid": "41e6ce79-1261-11ef-99fd-bfc7c66e2865", 00:16:32.457 "assigned_rate_limits": { 00:16:32.457 "rw_ios_per_sec": 0, 00:16:32.457 "rw_mbytes_per_sec": 0, 00:16:32.457 "r_mbytes_per_sec": 0, 00:16:32.457 "w_mbytes_per_sec": 0 00:16:32.457 }, 00:16:32.457 "claimed": true, 00:16:32.457 "claim_type": "exclusive_write", 00:16:32.457 "zoned": false, 00:16:32.457 "supported_io_types": { 00:16:32.457 "read": true, 00:16:32.457 "write": true, 00:16:32.457 "unmap": true, 00:16:32.457 "write_zeroes": true, 00:16:32.457 "flush": true, 00:16:32.457 "reset": true, 00:16:32.457 "compare": false, 00:16:32.457 "compare_and_write": false, 00:16:32.457 "abort": true, 00:16:32.457 "nvme_admin": false, 00:16:32.457 "nvme_io": false 00:16:32.457 }, 00:16:32.457 "memory_domains": [ 00:16:32.457 { 00:16:32.457 "dma_device_id": "system", 00:16:32.457 "dma_device_type": 1 00:16:32.457 }, 00:16:32.457 { 00:16:32.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.457 "dma_device_type": 2 00:16:32.457 } 00:16:32.457 ], 00:16:32.457 "driver_specific": {} 00:16:32.457 } 00:16:32.457 ] 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.457 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.716 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.716 "name": "Existed_Raid", 00:16:32.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.716 "strip_size_kb": 0, 00:16:32.716 "state": "configuring", 00:16:32.716 "raid_level": "raid1", 00:16:32.716 "superblock": false, 00:16:32.716 "num_base_bdevs": 3, 00:16:32.716 "num_base_bdevs_discovered": 2, 00:16:32.716 "num_base_bdevs_operational": 3, 00:16:32.716 "base_bdevs_list": [ 00:16:32.716 { 00:16:32.716 "name": "BaseBdev1", 00:16:32.716 "uuid": 
"407e1242-1261-11ef-99fd-bfc7c66e2865", 00:16:32.716 "is_configured": true, 00:16:32.716 "data_offset": 0, 00:16:32.716 "data_size": 65536 00:16:32.716 }, 00:16:32.716 { 00:16:32.716 "name": "BaseBdev2", 00:16:32.716 "uuid": "41e6ce79-1261-11ef-99fd-bfc7c66e2865", 00:16:32.716 "is_configured": true, 00:16:32.716 "data_offset": 0, 00:16:32.716 "data_size": 65536 00:16:32.716 }, 00:16:32.716 { 00:16:32.716 "name": "BaseBdev3", 00:16:32.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.716 "is_configured": false, 00:16:32.716 "data_offset": 0, 00:16:32.716 "data_size": 0 00:16:32.716 } 00:16:32.716 ] 00:16:32.716 }' 00:16:32.716 02:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.716 02:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.282 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.282 [2024-05-15 02:17:21.262952] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.283 [2024-05-15 02:17:21.262983] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b96fa00 00:16:33.283 [2024-05-15 02:17:21.262988] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:33.283 [2024-05-15 02:17:21.263009] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b9d2ec0 00:16:33.283 [2024-05-15 02:17:21.263102] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b96fa00 00:16:33.283 [2024-05-15 02:17:21.263106] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b96fa00 00:16:33.283 [2024-05-15 02:17:21.263138] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.283 BaseBdev3 00:16:33.283 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:16:33.283 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:33.283 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:33.283 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:33.283 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:33.283 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:33.283 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.542 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.800 [ 00:16:33.800 { 00:16:33.800 "name": "BaseBdev3", 00:16:33.800 "aliases": [ 00:16:33.800 "42bda9ec-1261-11ef-99fd-bfc7c66e2865" 00:16:33.800 ], 00:16:33.800 "product_name": "Malloc disk", 00:16:33.800 "block_size": 512, 00:16:33.800 "num_blocks": 65536, 00:16:33.800 "uuid": "42bda9ec-1261-11ef-99fd-bfc7c66e2865", 00:16:33.800 "assigned_rate_limits": { 00:16:33.800 "rw_ios_per_sec": 0, 00:16:33.800 "rw_mbytes_per_sec": 0, 00:16:33.800 "r_mbytes_per_sec": 0, 00:16:33.800 "w_mbytes_per_sec": 0 00:16:33.800 }, 00:16:33.800 
"claimed": true, 00:16:33.800 "claim_type": "exclusive_write", 00:16:33.800 "zoned": false, 00:16:33.800 "supported_io_types": { 00:16:33.800 "read": true, 00:16:33.800 "write": true, 00:16:33.800 "unmap": true, 00:16:33.800 "write_zeroes": true, 00:16:33.800 "flush": true, 00:16:33.800 "reset": true, 00:16:33.800 "compare": false, 00:16:33.800 "compare_and_write": false, 00:16:33.800 "abort": true, 00:16:33.800 "nvme_admin": false, 00:16:33.800 "nvme_io": false 00:16:33.800 }, 00:16:33.800 "memory_domains": [ 00:16:33.800 { 00:16:33.800 "dma_device_id": "system", 00:16:33.800 "dma_device_type": 1 00:16:33.800 }, 00:16:33.800 { 00:16:33.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.800 "dma_device_type": 2 00:16:33.800 } 00:16:33.800 ], 00:16:33.800 "driver_specific": {} 00:16:33.800 } 00:16:33.800 ] 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.800 02:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.368 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.368 "name": "Existed_Raid", 00:16:34.368 "uuid": "42bdafab-1261-11ef-99fd-bfc7c66e2865", 00:16:34.368 "strip_size_kb": 0, 00:16:34.368 "state": "online", 00:16:34.368 "raid_level": "raid1", 00:16:34.368 "superblock": false, 00:16:34.368 "num_base_bdevs": 3, 00:16:34.368 "num_base_bdevs_discovered": 3, 00:16:34.368 "num_base_bdevs_operational": 3, 00:16:34.368 "base_bdevs_list": [ 00:16:34.368 { 00:16:34.368 "name": "BaseBdev1", 00:16:34.368 "uuid": "407e1242-1261-11ef-99fd-bfc7c66e2865", 00:16:34.368 "is_configured": true, 00:16:34.368 "data_offset": 0, 00:16:34.368 "data_size": 65536 00:16:34.368 }, 00:16:34.368 { 00:16:34.368 "name": "BaseBdev2", 00:16:34.368 "uuid": "41e6ce79-1261-11ef-99fd-bfc7c66e2865", 00:16:34.368 "is_configured": true, 00:16:34.368 "data_offset": 0, 00:16:34.368 "data_size": 65536 00:16:34.368 }, 
00:16:34.368 { 00:16:34.368 "name": "BaseBdev3", 00:16:34.368 "uuid": "42bda9ec-1261-11ef-99fd-bfc7c66e2865", 00:16:34.368 "is_configured": true, 00:16:34.368 "data_offset": 0, 00:16:34.368 "data_size": 65536 00:16:34.368 } 00:16:34.368 ] 00:16:34.368 }' 00:16:34.368 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.368 02:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:34.627 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:34.888 [2024-05-15 02:17:22.666939] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.888 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:34.888 "name": "Existed_Raid", 00:16:34.888 "aliases": [ 00:16:34.888 "42bdafab-1261-11ef-99fd-bfc7c66e2865" 00:16:34.888 ], 00:16:34.888 "product_name": "Raid Volume", 00:16:34.888 "block_size": 512, 00:16:34.888 "num_blocks": 65536, 00:16:34.888 "uuid": "42bdafab-1261-11ef-99fd-bfc7c66e2865", 00:16:34.888 "assigned_rate_limits": { 00:16:34.888 "rw_ios_per_sec": 0, 00:16:34.888 "rw_mbytes_per_sec": 0, 00:16:34.888 "r_mbytes_per_sec": 0, 00:16:34.888 "w_mbytes_per_sec": 0 00:16:34.888 }, 00:16:34.888 "claimed": false, 00:16:34.888 "zoned": false, 00:16:34.888 "supported_io_types": { 00:16:34.888 "read": true, 00:16:34.888 "write": true, 00:16:34.888 "unmap": false, 00:16:34.888 "write_zeroes": true, 00:16:34.888 "flush": false, 00:16:34.888 "reset": true, 00:16:34.888 "compare": false, 00:16:34.888 "compare_and_write": false, 00:16:34.888 "abort": false, 00:16:34.888 "nvme_admin": false, 00:16:34.888 "nvme_io": false 00:16:34.888 }, 00:16:34.888 "memory_domains": [ 00:16:34.888 { 00:16:34.888 "dma_device_id": "system", 00:16:34.888 "dma_device_type": 1 00:16:34.888 }, 00:16:34.888 { 00:16:34.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.888 "dma_device_type": 2 00:16:34.888 }, 00:16:34.888 { 00:16:34.888 "dma_device_id": "system", 00:16:34.889 "dma_device_type": 1 00:16:34.889 }, 00:16:34.889 { 00:16:34.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.889 "dma_device_type": 2 00:16:34.889 }, 00:16:34.889 { 00:16:34.889 "dma_device_id": "system", 00:16:34.889 "dma_device_type": 1 00:16:34.889 }, 00:16:34.889 { 00:16:34.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.889 "dma_device_type": 2 00:16:34.889 } 00:16:34.889 ], 00:16:34.889 "driver_specific": { 00:16:34.889 "raid": { 00:16:34.889 "uuid": "42bdafab-1261-11ef-99fd-bfc7c66e2865", 00:16:34.889 "strip_size_kb": 0, 00:16:34.889 "state": "online", 00:16:34.889 "raid_level": "raid1", 00:16:34.889 
"superblock": false, 00:16:34.889 "num_base_bdevs": 3, 00:16:34.889 "num_base_bdevs_discovered": 3, 00:16:34.889 "num_base_bdevs_operational": 3, 00:16:34.889 "base_bdevs_list": [ 00:16:34.889 { 00:16:34.889 "name": "BaseBdev1", 00:16:34.889 "uuid": "407e1242-1261-11ef-99fd-bfc7c66e2865", 00:16:34.889 "is_configured": true, 00:16:34.889 "data_offset": 0, 00:16:34.889 "data_size": 65536 00:16:34.889 }, 00:16:34.889 { 00:16:34.889 "name": "BaseBdev2", 00:16:34.889 "uuid": "41e6ce79-1261-11ef-99fd-bfc7c66e2865", 00:16:34.889 "is_configured": true, 00:16:34.889 "data_offset": 0, 00:16:34.889 "data_size": 65536 00:16:34.889 }, 00:16:34.889 { 00:16:34.889 "name": "BaseBdev3", 00:16:34.889 "uuid": "42bda9ec-1261-11ef-99fd-bfc7c66e2865", 00:16:34.889 "is_configured": true, 00:16:34.889 "data_offset": 0, 00:16:34.889 "data_size": 65536 00:16:34.889 } 00:16:34.889 ] 00:16:34.889 } 00:16:34.889 } 00:16:34.889 }' 00:16:34.889 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.889 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:16:34.889 BaseBdev2 00:16:34.889 BaseBdev3' 00:16:34.889 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:34.889 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:34.889 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:35.174 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:35.174 "name": "BaseBdev1", 00:16:35.174 "aliases": [ 00:16:35.174 "407e1242-1261-11ef-99fd-bfc7c66e2865" 00:16:35.174 ], 00:16:35.174 "product_name": "Malloc disk", 00:16:35.174 "block_size": 512, 00:16:35.174 "num_blocks": 65536, 00:16:35.174 "uuid": "407e1242-1261-11ef-99fd-bfc7c66e2865", 00:16:35.174 "assigned_rate_limits": { 00:16:35.174 "rw_ios_per_sec": 0, 00:16:35.174 "rw_mbytes_per_sec": 0, 00:16:35.174 "r_mbytes_per_sec": 0, 00:16:35.174 "w_mbytes_per_sec": 0 00:16:35.174 }, 00:16:35.174 "claimed": true, 00:16:35.174 "claim_type": "exclusive_write", 00:16:35.174 "zoned": false, 00:16:35.174 "supported_io_types": { 00:16:35.174 "read": true, 00:16:35.174 "write": true, 00:16:35.174 "unmap": true, 00:16:35.174 "write_zeroes": true, 00:16:35.174 "flush": true, 00:16:35.174 "reset": true, 00:16:35.174 "compare": false, 00:16:35.174 "compare_and_write": false, 00:16:35.174 "abort": true, 00:16:35.174 "nvme_admin": false, 00:16:35.174 "nvme_io": false 00:16:35.174 }, 00:16:35.174 "memory_domains": [ 00:16:35.174 { 00:16:35.174 "dma_device_id": "system", 00:16:35.174 "dma_device_type": 1 00:16:35.174 }, 00:16:35.174 { 00:16:35.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.174 "dma_device_type": 2 00:16:35.174 } 00:16:35.174 ], 00:16:35.174 "driver_specific": {} 00:16:35.174 }' 00:16:35.174 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.174 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.174 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:35.174 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.174 02:17:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.174 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.175 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.175 02:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.175 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.175 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.175 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.175 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:35.175 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:35.175 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:35.175 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:35.434 "name": "BaseBdev2", 00:16:35.434 "aliases": [ 00:16:35.434 "41e6ce79-1261-11ef-99fd-bfc7c66e2865" 00:16:35.434 ], 00:16:35.434 "product_name": "Malloc disk", 00:16:35.434 "block_size": 512, 00:16:35.434 "num_blocks": 65536, 00:16:35.434 "uuid": "41e6ce79-1261-11ef-99fd-bfc7c66e2865", 00:16:35.434 "assigned_rate_limits": { 00:16:35.434 "rw_ios_per_sec": 0, 00:16:35.434 "rw_mbytes_per_sec": 0, 00:16:35.434 "r_mbytes_per_sec": 0, 00:16:35.434 "w_mbytes_per_sec": 0 00:16:35.434 }, 00:16:35.434 "claimed": true, 00:16:35.434 "claim_type": "exclusive_write", 00:16:35.434 "zoned": false, 00:16:35.434 "supported_io_types": { 00:16:35.434 "read": true, 00:16:35.434 "write": true, 00:16:35.434 "unmap": true, 00:16:35.434 "write_zeroes": true, 00:16:35.434 "flush": true, 00:16:35.434 "reset": true, 00:16:35.434 "compare": false, 00:16:35.434 "compare_and_write": false, 00:16:35.434 "abort": true, 00:16:35.434 "nvme_admin": false, 00:16:35.434 "nvme_io": false 00:16:35.434 }, 00:16:35.434 "memory_domains": [ 00:16:35.434 { 00:16:35.434 "dma_device_id": "system", 00:16:35.434 "dma_device_type": 1 00:16:35.434 }, 00:16:35.434 { 00:16:35.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.434 "dma_device_type": 2 00:16:35.434 } 00:16:35.434 ], 00:16:35.434 "driver_specific": {} 00:16:35.434 }' 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.434 02:17:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:35.434 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:35.701 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:35.701 "name": "BaseBdev3", 00:16:35.701 "aliases": [ 00:16:35.701 "42bda9ec-1261-11ef-99fd-bfc7c66e2865" 00:16:35.701 ], 00:16:35.701 "product_name": "Malloc disk", 00:16:35.701 "block_size": 512, 00:16:35.701 "num_blocks": 65536, 00:16:35.701 "uuid": "42bda9ec-1261-11ef-99fd-bfc7c66e2865", 00:16:35.701 "assigned_rate_limits": { 00:16:35.701 "rw_ios_per_sec": 0, 00:16:35.701 "rw_mbytes_per_sec": 0, 00:16:35.701 "r_mbytes_per_sec": 0, 00:16:35.701 "w_mbytes_per_sec": 0 00:16:35.701 }, 00:16:35.701 "claimed": true, 00:16:35.701 "claim_type": "exclusive_write", 00:16:35.701 "zoned": false, 00:16:35.701 "supported_io_types": { 00:16:35.701 "read": true, 00:16:35.701 "write": true, 00:16:35.701 "unmap": true, 00:16:35.701 "write_zeroes": true, 00:16:35.701 "flush": true, 00:16:35.701 "reset": true, 00:16:35.701 "compare": false, 00:16:35.701 "compare_and_write": false, 00:16:35.701 "abort": true, 00:16:35.701 "nvme_admin": false, 00:16:35.701 "nvme_io": false 00:16:35.701 }, 00:16:35.701 "memory_domains": [ 00:16:35.701 { 00:16:35.701 "dma_device_id": "system", 00:16:35.701 "dma_device_type": 1 00:16:35.701 }, 00:16:35.701 { 00:16:35.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.701 "dma_device_type": 2 00:16:35.701 } 00:16:35.701 ], 00:16:35.701 "driver_specific": {} 00:16:35.701 }' 00:16:35.701 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:35.960 02:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:36.219 
[2024-05-15 02:17:24.210994] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.219 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.478 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.478 "name": "Existed_Raid", 00:16:36.478 "uuid": "42bdafab-1261-11ef-99fd-bfc7c66e2865", 00:16:36.478 "strip_size_kb": 0, 00:16:36.478 "state": "online", 00:16:36.478 "raid_level": "raid1", 00:16:36.478 "superblock": false, 00:16:36.478 "num_base_bdevs": 3, 00:16:36.478 "num_base_bdevs_discovered": 2, 00:16:36.478 "num_base_bdevs_operational": 2, 00:16:36.478 "base_bdevs_list": [ 00:16:36.478 { 00:16:36.478 "name": null, 00:16:36.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.478 "is_configured": false, 00:16:36.478 "data_offset": 0, 00:16:36.478 "data_size": 65536 00:16:36.478 }, 00:16:36.478 { 00:16:36.478 "name": "BaseBdev2", 00:16:36.478 "uuid": "41e6ce79-1261-11ef-99fd-bfc7c66e2865", 00:16:36.478 "is_configured": true, 00:16:36.478 "data_offset": 0, 00:16:36.478 "data_size": 65536 00:16:36.478 }, 00:16:36.478 { 00:16:36.478 "name": "BaseBdev3", 00:16:36.478 "uuid": "42bda9ec-1261-11ef-99fd-bfc7c66e2865", 00:16:36.478 "is_configured": true, 00:16:36.478 "data_offset": 0, 00:16:36.478 "data_size": 65536 00:16:36.478 } 00:16:36.478 ] 00:16:36.478 }' 00:16:36.478 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.478 02:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.045 02:17:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:37.045 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.045 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.045 02:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:37.303 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:37.303 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.303 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:37.561 [2024-05-15 02:17:25.553534] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.561 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:37.561 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.856 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.856 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:37.856 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:37.856 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.856 02:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:38.114 [2024-05-15 02:17:26.054512] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:38.114 [2024-05-15 02:17:26.054585] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.114 [2024-05-15 02:17:26.060412] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.114 [2024-05-15 02:17:26.060463] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.114 [2024-05-15 02:17:26.060468] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b96fa00 name Existed_Raid, state offline 00:16:38.114 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:38.114 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:38.114 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.114 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:16:38.681 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:16:38.681 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:16:38.681 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:16:38.681 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:16:38.681 02:17:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:38.681 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:38.681 BaseBdev2 00:16:38.681 02:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:16:38.940 02:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:38.940 02:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:38.940 02:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:38.940 02:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:38.940 02:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:38.940 02:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.198 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:39.457 [ 00:16:39.457 { 00:16:39.457 "name": "BaseBdev2", 00:16:39.457 "aliases": [ 00:16:39.457 "45f850be-1261-11ef-99fd-bfc7c66e2865" 00:16:39.457 ], 00:16:39.457 "product_name": "Malloc disk", 00:16:39.457 "block_size": 512, 00:16:39.457 "num_blocks": 65536, 00:16:39.457 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:39.457 "assigned_rate_limits": { 00:16:39.457 "rw_ios_per_sec": 0, 00:16:39.457 "rw_mbytes_per_sec": 0, 00:16:39.457 "r_mbytes_per_sec": 0, 00:16:39.457 "w_mbytes_per_sec": 0 00:16:39.457 }, 00:16:39.457 "claimed": false, 00:16:39.457 "zoned": false, 00:16:39.457 "supported_io_types": { 00:16:39.457 "read": true, 00:16:39.457 "write": true, 00:16:39.457 "unmap": true, 00:16:39.457 "write_zeroes": true, 00:16:39.457 "flush": true, 00:16:39.457 "reset": true, 00:16:39.457 "compare": false, 00:16:39.457 "compare_and_write": false, 00:16:39.457 "abort": true, 00:16:39.457 "nvme_admin": false, 00:16:39.457 "nvme_io": false 00:16:39.457 }, 00:16:39.457 "memory_domains": [ 00:16:39.457 { 00:16:39.457 "dma_device_id": "system", 00:16:39.457 "dma_device_type": 1 00:16:39.457 }, 00:16:39.457 { 00:16:39.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.457 "dma_device_type": 2 00:16:39.457 } 00:16:39.457 ], 00:16:39.457 "driver_specific": {} 00:16:39.457 } 00:16:39.457 ] 00:16:39.457 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:39.457 02:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:39.457 02:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:39.457 02:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.024 BaseBdev3 00:16:40.024 02:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:16:40.024 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:40.024 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:40.024 
02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:40.024 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:40.024 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:40.024 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.024 02:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:40.300 [ 00:16:40.300 { 00:16:40.300 "name": "BaseBdev3", 00:16:40.300 "aliases": [ 00:16:40.300 "469973f1-1261-11ef-99fd-bfc7c66e2865" 00:16:40.300 ], 00:16:40.300 "product_name": "Malloc disk", 00:16:40.300 "block_size": 512, 00:16:40.300 "num_blocks": 65536, 00:16:40.300 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:40.300 "assigned_rate_limits": { 00:16:40.300 "rw_ios_per_sec": 0, 00:16:40.300 "rw_mbytes_per_sec": 0, 00:16:40.300 "r_mbytes_per_sec": 0, 00:16:40.300 "w_mbytes_per_sec": 0 00:16:40.300 }, 00:16:40.300 "claimed": false, 00:16:40.300 "zoned": false, 00:16:40.300 "supported_io_types": { 00:16:40.300 "read": true, 00:16:40.300 "write": true, 00:16:40.300 "unmap": true, 00:16:40.300 "write_zeroes": true, 00:16:40.300 "flush": true, 00:16:40.300 "reset": true, 00:16:40.300 "compare": false, 00:16:40.300 "compare_and_write": false, 00:16:40.300 "abort": true, 00:16:40.300 "nvme_admin": false, 00:16:40.300 "nvme_io": false 00:16:40.300 }, 00:16:40.300 "memory_domains": [ 00:16:40.300 { 00:16:40.300 "dma_device_id": "system", 00:16:40.300 "dma_device_type": 1 00:16:40.300 }, 00:16:40.300 { 00:16:40.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.300 "dma_device_type": 2 00:16:40.300 } 00:16:40.300 ], 00:16:40.300 "driver_specific": {} 00:16:40.300 } 00:16:40.300 ] 00:16:40.300 02:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:40.300 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:40.300 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:40.300 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:40.559 [2024-05-15 02:17:28.444462] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.559 [2024-05-15 02:17:28.444531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.559 [2024-05-15 02:17:28.444541] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.559 [2024-05-15 02:17:28.445042] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 
-- # local raid_level=raid1 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.559 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.817 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.817 "name": "Existed_Raid", 00:16:40.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.817 "strip_size_kb": 0, 00:16:40.817 "state": "configuring", 00:16:40.817 "raid_level": "raid1", 00:16:40.817 "superblock": false, 00:16:40.817 "num_base_bdevs": 3, 00:16:40.817 "num_base_bdevs_discovered": 2, 00:16:40.817 "num_base_bdevs_operational": 3, 00:16:40.817 "base_bdevs_list": [ 00:16:40.817 { 00:16:40.817 "name": "BaseBdev1", 00:16:40.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.817 "is_configured": false, 00:16:40.817 "data_offset": 0, 00:16:40.817 "data_size": 0 00:16:40.817 }, 00:16:40.817 { 00:16:40.817 "name": "BaseBdev2", 00:16:40.817 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:40.817 "is_configured": true, 00:16:40.817 "data_offset": 0, 00:16:40.817 "data_size": 65536 00:16:40.817 }, 00:16:40.817 { 00:16:40.817 "name": "BaseBdev3", 00:16:40.817 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:40.817 "is_configured": true, 00:16:40.817 "data_offset": 0, 00:16:40.817 "data_size": 65536 00:16:40.817 } 00:16:40.817 ] 00:16:40.817 }' 00:16:40.817 02:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.817 02:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.075 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:41.334 [2024-05-15 02:17:29.276495] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.334 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.592 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.592 "name": "Existed_Raid", 00:16:41.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.592 "strip_size_kb": 0, 00:16:41.592 "state": "configuring", 00:16:41.592 "raid_level": "raid1", 00:16:41.592 "superblock": false, 00:16:41.592 "num_base_bdevs": 3, 00:16:41.592 "num_base_bdevs_discovered": 1, 00:16:41.592 "num_base_bdevs_operational": 3, 00:16:41.592 "base_bdevs_list": [ 00:16:41.592 { 00:16:41.592 "name": "BaseBdev1", 00:16:41.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.592 "is_configured": false, 00:16:41.592 "data_offset": 0, 00:16:41.592 "data_size": 0 00:16:41.592 }, 00:16:41.592 { 00:16:41.592 "name": null, 00:16:41.592 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:41.592 "is_configured": false, 00:16:41.592 "data_offset": 0, 00:16:41.592 "data_size": 65536 00:16:41.592 }, 00:16:41.592 { 00:16:41.592 "name": "BaseBdev3", 00:16:41.592 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:41.592 "is_configured": true, 00:16:41.592 "data_offset": 0, 00:16:41.592 "data_size": 65536 00:16:41.592 } 00:16:41.592 ] 00:16:41.592 }' 00:16:41.592 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.592 02:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.181 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.181 02:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:42.181 02:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:16:42.181 02:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.439 [2024-05-15 02:17:30.452666] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.439 BaseBdev1 00:16:42.697 02:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:16:42.697 02:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:42.697 02:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:42.697 02:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:42.697 02:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:42.697 02:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:42.697 02:17:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.955 02:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:43.214 [ 00:16:43.214 { 00:16:43.214 "name": "BaseBdev1", 00:16:43.214 "aliases": [ 00:16:43.214 "4837e6f0-1261-11ef-99fd-bfc7c66e2865" 00:16:43.214 ], 00:16:43.214 "product_name": "Malloc disk", 00:16:43.215 "block_size": 512, 00:16:43.215 "num_blocks": 65536, 00:16:43.215 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:43.215 "assigned_rate_limits": { 00:16:43.215 "rw_ios_per_sec": 0, 00:16:43.215 "rw_mbytes_per_sec": 0, 00:16:43.215 "r_mbytes_per_sec": 0, 00:16:43.215 "w_mbytes_per_sec": 0 00:16:43.215 }, 00:16:43.215 "claimed": true, 00:16:43.215 "claim_type": "exclusive_write", 00:16:43.215 "zoned": false, 00:16:43.215 "supported_io_types": { 00:16:43.215 "read": true, 00:16:43.215 "write": true, 00:16:43.215 "unmap": true, 00:16:43.215 "write_zeroes": true, 00:16:43.215 "flush": true, 00:16:43.215 "reset": true, 00:16:43.215 "compare": false, 00:16:43.215 "compare_and_write": false, 00:16:43.215 "abort": true, 00:16:43.215 "nvme_admin": false, 00:16:43.215 "nvme_io": false 00:16:43.215 }, 00:16:43.215 "memory_domains": [ 00:16:43.215 { 00:16:43.215 "dma_device_id": "system", 00:16:43.215 "dma_device_type": 1 00:16:43.215 }, 00:16:43.215 { 00:16:43.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.215 "dma_device_type": 2 00:16:43.215 } 00:16:43.215 ], 00:16:43.215 "driver_specific": {} 00:16:43.215 } 00:16:43.215 ] 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.215 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.498 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.498 "name": "Existed_Raid", 00:16:43.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.498 "strip_size_kb": 0, 00:16:43.498 "state": "configuring", 
00:16:43.498 "raid_level": "raid1", 00:16:43.498 "superblock": false, 00:16:43.498 "num_base_bdevs": 3, 00:16:43.498 "num_base_bdevs_discovered": 2, 00:16:43.498 "num_base_bdevs_operational": 3, 00:16:43.498 "base_bdevs_list": [ 00:16:43.498 { 00:16:43.498 "name": "BaseBdev1", 00:16:43.498 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:43.498 "is_configured": true, 00:16:43.498 "data_offset": 0, 00:16:43.498 "data_size": 65536 00:16:43.498 }, 00:16:43.498 { 00:16:43.498 "name": null, 00:16:43.498 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:43.498 "is_configured": false, 00:16:43.498 "data_offset": 0, 00:16:43.498 "data_size": 65536 00:16:43.498 }, 00:16:43.498 { 00:16:43.498 "name": "BaseBdev3", 00:16:43.498 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:43.498 "is_configured": true, 00:16:43.498 "data_offset": 0, 00:16:43.498 "data_size": 65536 00:16:43.498 } 00:16:43.498 ] 00:16:43.498 }' 00:16:43.498 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.498 02:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.757 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.757 02:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:44.323 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:44.323 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:44.581 [2024-05-15 02:17:32.484673] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.581 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.839 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.839 "name": "Existed_Raid", 00:16:44.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.839 
"strip_size_kb": 0, 00:16:44.839 "state": "configuring", 00:16:44.839 "raid_level": "raid1", 00:16:44.839 "superblock": false, 00:16:44.839 "num_base_bdevs": 3, 00:16:44.839 "num_base_bdevs_discovered": 1, 00:16:44.839 "num_base_bdevs_operational": 3, 00:16:44.839 "base_bdevs_list": [ 00:16:44.839 { 00:16:44.839 "name": "BaseBdev1", 00:16:44.839 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:44.839 "is_configured": true, 00:16:44.839 "data_offset": 0, 00:16:44.839 "data_size": 65536 00:16:44.839 }, 00:16:44.839 { 00:16:44.839 "name": null, 00:16:44.839 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:44.839 "is_configured": false, 00:16:44.839 "data_offset": 0, 00:16:44.839 "data_size": 65536 00:16:44.839 }, 00:16:44.839 { 00:16:44.839 "name": null, 00:16:44.839 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:44.839 "is_configured": false, 00:16:44.839 "data_offset": 0, 00:16:44.839 "data_size": 65536 00:16:44.839 } 00:16:44.839 ] 00:16:44.839 }' 00:16:44.839 02:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.839 02:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.097 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.097 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:45.662 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:16:45.662 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:45.662 [2024-05-15 02:17:33.680739] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.919 02:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.177 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.177 "name": 
"Existed_Raid", 00:16:46.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.177 "strip_size_kb": 0, 00:16:46.177 "state": "configuring", 00:16:46.177 "raid_level": "raid1", 00:16:46.177 "superblock": false, 00:16:46.177 "num_base_bdevs": 3, 00:16:46.177 "num_base_bdevs_discovered": 2, 00:16:46.177 "num_base_bdevs_operational": 3, 00:16:46.177 "base_bdevs_list": [ 00:16:46.177 { 00:16:46.177 "name": "BaseBdev1", 00:16:46.177 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:46.177 "is_configured": true, 00:16:46.177 "data_offset": 0, 00:16:46.177 "data_size": 65536 00:16:46.177 }, 00:16:46.177 { 00:16:46.177 "name": null, 00:16:46.177 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:46.177 "is_configured": false, 00:16:46.177 "data_offset": 0, 00:16:46.177 "data_size": 65536 00:16:46.177 }, 00:16:46.177 { 00:16:46.177 "name": "BaseBdev3", 00:16:46.177 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:46.177 "is_configured": true, 00:16:46.177 "data_offset": 0, 00:16:46.177 "data_size": 65536 00:16:46.177 } 00:16:46.177 ] 00:16:46.177 }' 00:16:46.177 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.177 02:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.435 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.435 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:46.694 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:16:46.694 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:46.953 [2024-05-15 02:17:34.864784] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.953 02:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.210 02:17:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.211 "name": "Existed_Raid", 00:16:47.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.211 "strip_size_kb": 0, 00:16:47.211 "state": "configuring", 00:16:47.211 "raid_level": "raid1", 00:16:47.211 "superblock": false, 00:16:47.211 "num_base_bdevs": 3, 00:16:47.211 "num_base_bdevs_discovered": 1, 00:16:47.211 "num_base_bdevs_operational": 3, 00:16:47.211 "base_bdevs_list": [ 00:16:47.211 { 00:16:47.211 "name": null, 00:16:47.211 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:47.211 "is_configured": false, 00:16:47.211 "data_offset": 0, 00:16:47.211 "data_size": 65536 00:16:47.211 }, 00:16:47.211 { 00:16:47.211 "name": null, 00:16:47.211 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:47.211 "is_configured": false, 00:16:47.211 "data_offset": 0, 00:16:47.211 "data_size": 65536 00:16:47.211 }, 00:16:47.211 { 00:16:47.211 "name": "BaseBdev3", 00:16:47.211 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:47.211 "is_configured": true, 00:16:47.211 "data_offset": 0, 00:16:47.211 "data_size": 65536 00:16:47.211 } 00:16:47.211 ] 00:16:47.211 }' 00:16:47.211 02:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.211 02:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.786 02:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.786 02:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:48.044 02:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:16:48.044 02:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:48.303 [2024-05-15 02:17:36.155184] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.303 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:48.561 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.561 "name": "Existed_Raid", 00:16:48.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.561 "strip_size_kb": 0, 00:16:48.561 "state": "configuring", 00:16:48.561 "raid_level": "raid1", 00:16:48.561 "superblock": false, 00:16:48.561 "num_base_bdevs": 3, 00:16:48.561 "num_base_bdevs_discovered": 2, 00:16:48.561 "num_base_bdevs_operational": 3, 00:16:48.561 "base_bdevs_list": [ 00:16:48.561 { 00:16:48.561 "name": null, 00:16:48.561 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:48.561 "is_configured": false, 00:16:48.561 "data_offset": 0, 00:16:48.561 "data_size": 65536 00:16:48.561 }, 00:16:48.561 { 00:16:48.561 "name": "BaseBdev2", 00:16:48.561 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:48.561 "is_configured": true, 00:16:48.561 "data_offset": 0, 00:16:48.561 "data_size": 65536 00:16:48.561 }, 00:16:48.561 { 00:16:48.561 "name": "BaseBdev3", 00:16:48.561 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:48.561 "is_configured": true, 00:16:48.561 "data_offset": 0, 00:16:48.561 "data_size": 65536 00:16:48.561 } 00:16:48.561 ] 00:16:48.561 }' 00:16:48.561 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.561 02:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.127 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.127 02:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:49.385 02:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:16:49.385 02:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.385 02:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:49.643 02:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4837e6f0-1261-11ef-99fd-bfc7c66e2865 00:16:49.902 [2024-05-15 02:17:37.767134] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:49.902 [2024-05-15 02:17:37.767164] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b96ff00 00:16:49.902 [2024-05-15 02:17:37.767168] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:49.902 [2024-05-15 02:17:37.767191] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b9d2e20 00:16:49.902 [2024-05-15 02:17:37.767253] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b96ff00 00:16:49.902 [2024-05-15 02:17:37.767257] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b96ff00 00:16:49.902 [2024-05-15 02:17:37.767290] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.902 NewBaseBdev 00:16:49.902 02:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:16:49.902 02:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local 
bdev_name=NewBaseBdev 00:16:49.902 02:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:49.902 02:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:49.902 02:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:49.902 02:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:49.902 02:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.477 02:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:50.477 [ 00:16:50.477 { 00:16:50.477 "name": "NewBaseBdev", 00:16:50.477 "aliases": [ 00:16:50.477 "4837e6f0-1261-11ef-99fd-bfc7c66e2865" 00:16:50.477 ], 00:16:50.477 "product_name": "Malloc disk", 00:16:50.477 "block_size": 512, 00:16:50.477 "num_blocks": 65536, 00:16:50.477 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:50.477 "assigned_rate_limits": { 00:16:50.477 "rw_ios_per_sec": 0, 00:16:50.477 "rw_mbytes_per_sec": 0, 00:16:50.477 "r_mbytes_per_sec": 0, 00:16:50.477 "w_mbytes_per_sec": 0 00:16:50.477 }, 00:16:50.477 "claimed": true, 00:16:50.477 "claim_type": "exclusive_write", 00:16:50.477 "zoned": false, 00:16:50.477 "supported_io_types": { 00:16:50.477 "read": true, 00:16:50.477 "write": true, 00:16:50.477 "unmap": true, 00:16:50.477 "write_zeroes": true, 00:16:50.477 "flush": true, 00:16:50.477 "reset": true, 00:16:50.477 "compare": false, 00:16:50.477 "compare_and_write": false, 00:16:50.477 "abort": true, 00:16:50.477 "nvme_admin": false, 00:16:50.477 "nvme_io": false 00:16:50.477 }, 00:16:50.477 "memory_domains": [ 00:16:50.477 { 00:16:50.477 "dma_device_id": "system", 00:16:50.477 "dma_device_type": 1 00:16:50.477 }, 00:16:50.477 { 00:16:50.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.477 "dma_device_type": 2 00:16:50.477 } 00:16:50.477 ], 00:16:50.477 "driver_specific": {} 00:16:50.477 } 00:16:50.477 ] 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.735 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.995 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.995 "name": "Existed_Raid", 00:16:50.995 "uuid": "4c940648-1261-11ef-99fd-bfc7c66e2865", 00:16:50.995 "strip_size_kb": 0, 00:16:50.995 "state": "online", 00:16:50.995 "raid_level": "raid1", 00:16:50.995 "superblock": false, 00:16:50.995 "num_base_bdevs": 3, 00:16:50.995 "num_base_bdevs_discovered": 3, 00:16:50.995 "num_base_bdevs_operational": 3, 00:16:50.995 "base_bdevs_list": [ 00:16:50.995 { 00:16:50.995 "name": "NewBaseBdev", 00:16:50.995 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:50.995 "is_configured": true, 00:16:50.995 "data_offset": 0, 00:16:50.995 "data_size": 65536 00:16:50.995 }, 00:16:50.995 { 00:16:50.995 "name": "BaseBdev2", 00:16:50.995 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:50.995 "is_configured": true, 00:16:50.995 "data_offset": 0, 00:16:50.995 "data_size": 65536 00:16:50.995 }, 00:16:50.995 { 00:16:50.995 "name": "BaseBdev3", 00:16:50.995 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:50.995 "is_configured": true, 00:16:50.995 "data_offset": 0, 00:16:50.995 "data_size": 65536 00:16:50.995 } 00:16:50.995 ] 00:16:50.995 }' 00:16:50.995 02:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.995 02:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:51.254 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:51.821 [2024-05-15 02:17:39.586897] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.821 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:51.821 "name": "Existed_Raid", 00:16:51.821 "aliases": [ 00:16:51.821 "4c940648-1261-11ef-99fd-bfc7c66e2865" 00:16:51.821 ], 00:16:51.821 "product_name": "Raid Volume", 00:16:51.821 "block_size": 512, 00:16:51.821 "num_blocks": 65536, 00:16:51.821 "uuid": "4c940648-1261-11ef-99fd-bfc7c66e2865", 00:16:51.821 "assigned_rate_limits": { 00:16:51.821 "rw_ios_per_sec": 0, 00:16:51.821 "rw_mbytes_per_sec": 0, 00:16:51.821 "r_mbytes_per_sec": 0, 00:16:51.821 "w_mbytes_per_sec": 0 00:16:51.821 }, 00:16:51.821 "claimed": false, 00:16:51.821 "zoned": false, 00:16:51.821 "supported_io_types": { 00:16:51.821 "read": true, 00:16:51.821 "write": true, 00:16:51.821 "unmap": false, 00:16:51.821 "write_zeroes": true, 
00:16:51.821 "flush": false, 00:16:51.821 "reset": true, 00:16:51.821 "compare": false, 00:16:51.821 "compare_and_write": false, 00:16:51.821 "abort": false, 00:16:51.821 "nvme_admin": false, 00:16:51.821 "nvme_io": false 00:16:51.821 }, 00:16:51.821 "memory_domains": [ 00:16:51.821 { 00:16:51.821 "dma_device_id": "system", 00:16:51.821 "dma_device_type": 1 00:16:51.821 }, 00:16:51.821 { 00:16:51.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.821 "dma_device_type": 2 00:16:51.821 }, 00:16:51.821 { 00:16:51.821 "dma_device_id": "system", 00:16:51.821 "dma_device_type": 1 00:16:51.821 }, 00:16:51.821 { 00:16:51.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.821 "dma_device_type": 2 00:16:51.821 }, 00:16:51.821 { 00:16:51.821 "dma_device_id": "system", 00:16:51.821 "dma_device_type": 1 00:16:51.821 }, 00:16:51.821 { 00:16:51.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.821 "dma_device_type": 2 00:16:51.821 } 00:16:51.821 ], 00:16:51.821 "driver_specific": { 00:16:51.821 "raid": { 00:16:51.821 "uuid": "4c940648-1261-11ef-99fd-bfc7c66e2865", 00:16:51.821 "strip_size_kb": 0, 00:16:51.821 "state": "online", 00:16:51.821 "raid_level": "raid1", 00:16:51.821 "superblock": false, 00:16:51.821 "num_base_bdevs": 3, 00:16:51.821 "num_base_bdevs_discovered": 3, 00:16:51.821 "num_base_bdevs_operational": 3, 00:16:51.821 "base_bdevs_list": [ 00:16:51.821 { 00:16:51.821 "name": "NewBaseBdev", 00:16:51.821 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:51.821 "is_configured": true, 00:16:51.821 "data_offset": 0, 00:16:51.821 "data_size": 65536 00:16:51.821 }, 00:16:51.821 { 00:16:51.821 "name": "BaseBdev2", 00:16:51.821 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:51.821 "is_configured": true, 00:16:51.821 "data_offset": 0, 00:16:51.821 "data_size": 65536 00:16:51.821 }, 00:16:51.821 { 00:16:51.821 "name": "BaseBdev3", 00:16:51.821 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:51.821 "is_configured": true, 00:16:51.821 "data_offset": 0, 00:16:51.821 "data_size": 65536 00:16:51.821 } 00:16:51.821 ] 00:16:51.821 } 00:16:51.821 } 00:16:51.821 }' 00:16:51.821 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.822 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:16:51.822 BaseBdev2 00:16:51.822 BaseBdev3' 00:16:51.822 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:51.822 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:51.822 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:52.081 "name": "NewBaseBdev", 00:16:52.081 "aliases": [ 00:16:52.081 "4837e6f0-1261-11ef-99fd-bfc7c66e2865" 00:16:52.081 ], 00:16:52.081 "product_name": "Malloc disk", 00:16:52.081 "block_size": 512, 00:16:52.081 "num_blocks": 65536, 00:16:52.081 "uuid": "4837e6f0-1261-11ef-99fd-bfc7c66e2865", 00:16:52.081 "assigned_rate_limits": { 00:16:52.081 "rw_ios_per_sec": 0, 00:16:52.081 "rw_mbytes_per_sec": 0, 00:16:52.081 "r_mbytes_per_sec": 0, 00:16:52.081 "w_mbytes_per_sec": 0 00:16:52.081 }, 00:16:52.081 "claimed": true, 00:16:52.081 "claim_type": "exclusive_write", 00:16:52.081 "zoned": 
false, 00:16:52.081 "supported_io_types": { 00:16:52.081 "read": true, 00:16:52.081 "write": true, 00:16:52.081 "unmap": true, 00:16:52.081 "write_zeroes": true, 00:16:52.081 "flush": true, 00:16:52.081 "reset": true, 00:16:52.081 "compare": false, 00:16:52.081 "compare_and_write": false, 00:16:52.081 "abort": true, 00:16:52.081 "nvme_admin": false, 00:16:52.081 "nvme_io": false 00:16:52.081 }, 00:16:52.081 "memory_domains": [ 00:16:52.081 { 00:16:52.081 "dma_device_id": "system", 00:16:52.081 "dma_device_type": 1 00:16:52.081 }, 00:16:52.081 { 00:16:52.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.081 "dma_device_type": 2 00:16:52.081 } 00:16:52.081 ], 00:16:52.081 "driver_specific": {} 00:16:52.081 }' 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:52.081 02:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:52.339 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:52.339 "name": "BaseBdev2", 00:16:52.339 "aliases": [ 00:16:52.339 "45f850be-1261-11ef-99fd-bfc7c66e2865" 00:16:52.339 ], 00:16:52.339 "product_name": "Malloc disk", 00:16:52.339 "block_size": 512, 00:16:52.339 "num_blocks": 65536, 00:16:52.339 "uuid": "45f850be-1261-11ef-99fd-bfc7c66e2865", 00:16:52.339 "assigned_rate_limits": { 00:16:52.339 "rw_ios_per_sec": 0, 00:16:52.339 "rw_mbytes_per_sec": 0, 00:16:52.339 "r_mbytes_per_sec": 0, 00:16:52.339 "w_mbytes_per_sec": 0 00:16:52.339 }, 00:16:52.339 "claimed": true, 00:16:52.339 "claim_type": "exclusive_write", 00:16:52.339 "zoned": false, 00:16:52.339 "supported_io_types": { 00:16:52.339 "read": true, 00:16:52.339 "write": true, 00:16:52.339 "unmap": true, 00:16:52.339 "write_zeroes": true, 00:16:52.339 "flush": true, 00:16:52.339 "reset": true, 00:16:52.339 "compare": false, 00:16:52.339 "compare_and_write": false, 00:16:52.339 "abort": true, 00:16:52.339 "nvme_admin": false, 00:16:52.339 "nvme_io": false 00:16:52.339 }, 00:16:52.339 "memory_domains": [ 00:16:52.339 { 00:16:52.339 "dma_device_id": "system", 
00:16:52.339 "dma_device_type": 1 00:16:52.339 }, 00:16:52.339 { 00:16:52.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.339 "dma_device_type": 2 00:16:52.339 } 00:16:52.340 ], 00:16:52.340 "driver_specific": {} 00:16:52.340 }' 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:52.340 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:52.598 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:52.598 "name": "BaseBdev3", 00:16:52.598 "aliases": [ 00:16:52.598 "469973f1-1261-11ef-99fd-bfc7c66e2865" 00:16:52.598 ], 00:16:52.598 "product_name": "Malloc disk", 00:16:52.598 "block_size": 512, 00:16:52.598 "num_blocks": 65536, 00:16:52.598 "uuid": "469973f1-1261-11ef-99fd-bfc7c66e2865", 00:16:52.598 "assigned_rate_limits": { 00:16:52.598 "rw_ios_per_sec": 0, 00:16:52.598 "rw_mbytes_per_sec": 0, 00:16:52.598 "r_mbytes_per_sec": 0, 00:16:52.598 "w_mbytes_per_sec": 0 00:16:52.598 }, 00:16:52.598 "claimed": true, 00:16:52.598 "claim_type": "exclusive_write", 00:16:52.598 "zoned": false, 00:16:52.598 "supported_io_types": { 00:16:52.598 "read": true, 00:16:52.598 "write": true, 00:16:52.598 "unmap": true, 00:16:52.599 "write_zeroes": true, 00:16:52.599 "flush": true, 00:16:52.599 "reset": true, 00:16:52.599 "compare": false, 00:16:52.599 "compare_and_write": false, 00:16:52.599 "abort": true, 00:16:52.599 "nvme_admin": false, 00:16:52.599 "nvme_io": false 00:16:52.599 }, 00:16:52.599 "memory_domains": [ 00:16:52.599 { 00:16:52.599 "dma_device_id": "system", 00:16:52.599 "dma_device_type": 1 00:16:52.599 }, 00:16:52.599 { 00:16:52.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.599 "dma_device_type": 2 00:16:52.599 } 00:16:52.599 ], 00:16:52.599 "driver_specific": {} 00:16:52.599 }' 00:16:52.599 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:52.599 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:52.599 02:17:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:52.599 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:52.938 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:53.197 [2024-05-15 02:17:40.958761] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.197 [2024-05-15 02:17:40.958800] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.197 [2024-05-15 02:17:40.958831] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.197 [2024-05-15 02:17:40.958932] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.197 [2024-05-15 02:17:40.958938] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b96ff00 name Existed_Raid, state offline 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 55209 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 55209 ']' 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 55209 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 55209 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55209' 00:16:53.197 killing process with pid 55209 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 55209 00:16:53.197 [2024-05-15 02:17:40.994238] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.197 02:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 55209 00:16:53.197 [2024-05-15 02:17:41.009903] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.197 02:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:16:53.197 
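Condensed, the raid_state_function_test run that just returned walks Existed_Raid through its states with the RPC sequence below (a sketch of the order visible in the trace above, not a verbatim replay of the script; starting bdev_svc and creating BaseBdev2/BaseBdev3 happened earlier in the log):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # BaseBdev1 does not exist yet, so the raid stays in "configuring"
    $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev2
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1     # missing member appears and is claimed
    $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev3
    $rpc -s $sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    $rpc -s $sock bdev_malloc_delete BaseBdev1
    $rpc -s $sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2
    $rpc -s $sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4837e6f0-1261-11ef-99fd-bfc7c66e2865
    # with all three members configured the raid transitions to "online"
    $rpc -s $sock bdev_raid_delete Existed_Raid              # raid goes offline and is freed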
00:16:53.197 real 0m26.437s 00:16:53.197 user 0m48.519s 00:16:53.197 sys 0m3.558s 00:16:53.197 02:17:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:53.197 ************************************ 00:16:53.197 END TEST raid_state_function_test 00:16:53.197 ************************************ 00:16:53.197 02:17:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.197 02:17:41 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:53.197 02:17:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:53.197 02:17:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.197 02:17:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.455 ************************************ 00:16:53.455 START TEST raid_state_function_test_sb 00:16:53.455 ************************************ 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 
'!=' raid1 ']' 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=55946 00:16:53.455 Process raid pid: 55946 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55946' 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 55946 /var/tmp/spdk-raid.sock 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 55946 ']' 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:53.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:53.455 02:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.455 [2024-05-15 02:17:41.231597] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:16:53.455 [2024-05-15 02:17:41.231810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:54.022 EAL: TSC is not safe to use in SMP mode 00:16:54.022 EAL: TSC is not invariant 00:16:54.022 [2024-05-15 02:17:41.755181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.022 [2024-05-15 02:17:41.868551] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
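The raid_state_function_test_sb run starting up here repeats the same walk with superblocks enabled: the only functional difference is the -s flag added to every bdev_raid_create call, so each base bdev carries on-disk RAID metadata and the data_offset reported for configured members becomes 2048 blocks instead of 0, as the trace below shows. A sketch of the create call as it appears further down:

    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid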
00:16:54.022 [2024-05-15 02:17:41.871329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.022 [2024-05-15 02:17:41.872359] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.022 [2024-05-15 02:17:41.872384] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.587 02:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:54.587 02:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:16:54.587 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:54.845 [2024-05-15 02:17:42.788325] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.845 [2024-05-15 02:17:42.788433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.845 [2024-05-15 02:17:42.788438] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.845 [2024-05-15 02:17:42.788448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.845 [2024-05-15 02:17:42.788461] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.845 [2024-05-15 02:17:42.788469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.845 02:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.412 02:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.412 "name": "Existed_Raid", 00:16:55.412 "uuid": "4f923096-1261-11ef-99fd-bfc7c66e2865", 00:16:55.412 "strip_size_kb": 0, 00:16:55.412 "state": "configuring", 00:16:55.412 "raid_level": "raid1", 00:16:55.412 "superblock": true, 00:16:55.412 "num_base_bdevs": 3, 00:16:55.412 "num_base_bdevs_discovered": 0, 00:16:55.412 
"num_base_bdevs_operational": 3, 00:16:55.412 "base_bdevs_list": [ 00:16:55.412 { 00:16:55.412 "name": "BaseBdev1", 00:16:55.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.412 "is_configured": false, 00:16:55.412 "data_offset": 0, 00:16:55.412 "data_size": 0 00:16:55.412 }, 00:16:55.412 { 00:16:55.412 "name": "BaseBdev2", 00:16:55.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.412 "is_configured": false, 00:16:55.412 "data_offset": 0, 00:16:55.412 "data_size": 0 00:16:55.412 }, 00:16:55.412 { 00:16:55.412 "name": "BaseBdev3", 00:16:55.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.412 "is_configured": false, 00:16:55.412 "data_offset": 0, 00:16:55.412 "data_size": 0 00:16:55.412 } 00:16:55.412 ] 00:16:55.412 }' 00:16:55.412 02:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.412 02:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.671 02:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:55.929 [2024-05-15 02:17:43.776228] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.929 [2024-05-15 02:17:43.776264] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d82b500 name Existed_Raid, state configuring 00:16:55.929 02:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.188 [2024-05-15 02:17:44.016218] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.188 [2024-05-15 02:17:44.016285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.188 [2024-05-15 02:17:44.016295] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.188 [2024-05-15 02:17:44.016308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.188 [2024-05-15 02:17:44.016313] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.188 [2024-05-15 02:17:44.016323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.188 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.447 [2024-05-15 02:17:44.293192] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.447 BaseBdev1 00:16:56.447 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:16:56.447 02:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:56.447 02:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:56.447 02:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:56.447 02:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:56.447 02:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:56.447 02:17:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.705 02:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.963 [ 00:16:56.963 { 00:16:56.963 "name": "BaseBdev1", 00:16:56.963 "aliases": [ 00:16:56.963 "5077aa0d-1261-11ef-99fd-bfc7c66e2865" 00:16:56.963 ], 00:16:56.963 "product_name": "Malloc disk", 00:16:56.963 "block_size": 512, 00:16:56.963 "num_blocks": 65536, 00:16:56.963 "uuid": "5077aa0d-1261-11ef-99fd-bfc7c66e2865", 00:16:56.963 "assigned_rate_limits": { 00:16:56.963 "rw_ios_per_sec": 0, 00:16:56.963 "rw_mbytes_per_sec": 0, 00:16:56.963 "r_mbytes_per_sec": 0, 00:16:56.963 "w_mbytes_per_sec": 0 00:16:56.963 }, 00:16:56.963 "claimed": true, 00:16:56.963 "claim_type": "exclusive_write", 00:16:56.963 "zoned": false, 00:16:56.963 "supported_io_types": { 00:16:56.963 "read": true, 00:16:56.963 "write": true, 00:16:56.963 "unmap": true, 00:16:56.963 "write_zeroes": true, 00:16:56.963 "flush": true, 00:16:56.963 "reset": true, 00:16:56.963 "compare": false, 00:16:56.963 "compare_and_write": false, 00:16:56.963 "abort": true, 00:16:56.963 "nvme_admin": false, 00:16:56.963 "nvme_io": false 00:16:56.963 }, 00:16:56.963 "memory_domains": [ 00:16:56.963 { 00:16:56.963 "dma_device_id": "system", 00:16:56.963 "dma_device_type": 1 00:16:56.963 }, 00:16:56.963 { 00:16:56.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.963 "dma_device_type": 2 00:16:56.963 } 00:16:56.963 ], 00:16:56.963 "driver_specific": {} 00:16:56.963 } 00:16:56.963 ] 00:16:56.963 02:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:56.963 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:56.963 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.963 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.964 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.964 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.964 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.964 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.964 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.964 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.964 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.222 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.222 02:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.479 02:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.479 "name": "Existed_Raid", 00:16:57.479 "uuid": 
"504d8d3b-1261-11ef-99fd-bfc7c66e2865", 00:16:57.479 "strip_size_kb": 0, 00:16:57.479 "state": "configuring", 00:16:57.479 "raid_level": "raid1", 00:16:57.479 "superblock": true, 00:16:57.479 "num_base_bdevs": 3, 00:16:57.479 "num_base_bdevs_discovered": 1, 00:16:57.479 "num_base_bdevs_operational": 3, 00:16:57.480 "base_bdevs_list": [ 00:16:57.480 { 00:16:57.480 "name": "BaseBdev1", 00:16:57.480 "uuid": "5077aa0d-1261-11ef-99fd-bfc7c66e2865", 00:16:57.480 "is_configured": true, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 63488 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": "BaseBdev2", 00:16:57.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.480 "is_configured": false, 00:16:57.480 "data_offset": 0, 00:16:57.480 "data_size": 0 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": "BaseBdev3", 00:16:57.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.480 "is_configured": false, 00:16:57.480 "data_offset": 0, 00:16:57.480 "data_size": 0 00:16:57.480 } 00:16:57.480 ] 00:16:57.480 }' 00:16:57.480 02:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.480 02:17:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.775 02:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:58.056 [2024-05-15 02:17:46.072117] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.056 [2024-05-15 02:17:46.072178] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d82b500 name Existed_Raid, state configuring 00:16:58.314 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:58.572 [2024-05-15 02:17:46.424109] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.572 [2024-05-15 02:17:46.424900] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.572 [2024-05-15 02:17:46.424958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.572 [2024-05-15 02:17:46.424964] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:58.572 [2024-05-15 02:17:46.424972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.572 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.831 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.831 "name": "Existed_Raid", 00:16:58.831 "uuid": "51bcf73b-1261-11ef-99fd-bfc7c66e2865", 00:16:58.831 "strip_size_kb": 0, 00:16:58.831 "state": "configuring", 00:16:58.831 "raid_level": "raid1", 00:16:58.831 "superblock": true, 00:16:58.831 "num_base_bdevs": 3, 00:16:58.831 "num_base_bdevs_discovered": 1, 00:16:58.831 "num_base_bdevs_operational": 3, 00:16:58.831 "base_bdevs_list": [ 00:16:58.831 { 00:16:58.831 "name": "BaseBdev1", 00:16:58.831 "uuid": "5077aa0d-1261-11ef-99fd-bfc7c66e2865", 00:16:58.831 "is_configured": true, 00:16:58.831 "data_offset": 2048, 00:16:58.831 "data_size": 63488 00:16:58.831 }, 00:16:58.831 { 00:16:58.831 "name": "BaseBdev2", 00:16:58.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.831 "is_configured": false, 00:16:58.831 "data_offset": 0, 00:16:58.831 "data_size": 0 00:16:58.831 }, 00:16:58.831 { 00:16:58.831 "name": "BaseBdev3", 00:16:58.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.831 "is_configured": false, 00:16:58.831 "data_offset": 0, 00:16:58.831 "data_size": 0 00:16:58.831 } 00:16:58.831 ] 00:16:58.831 }' 00:16:58.831 02:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.831 02:17:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.089 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.348 [2024-05-15 02:17:47.344192] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.348 BaseBdev2 00:16:59.348 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:16:59.607 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:59.607 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:59.607 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:59.607 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:59.607 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:59.607 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.866 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.125 [ 00:17:00.125 { 00:17:00.125 "name": "BaseBdev2", 00:17:00.125 "aliases": [ 00:17:00.125 "52495730-1261-11ef-99fd-bfc7c66e2865" 00:17:00.125 ], 00:17:00.125 "product_name": "Malloc disk", 00:17:00.125 "block_size": 512, 00:17:00.125 "num_blocks": 65536, 00:17:00.125 "uuid": "52495730-1261-11ef-99fd-bfc7c66e2865", 00:17:00.125 "assigned_rate_limits": { 00:17:00.125 "rw_ios_per_sec": 0, 00:17:00.125 "rw_mbytes_per_sec": 0, 00:17:00.125 "r_mbytes_per_sec": 0, 00:17:00.125 "w_mbytes_per_sec": 0 00:17:00.125 }, 00:17:00.125 "claimed": true, 00:17:00.125 "claim_type": "exclusive_write", 00:17:00.125 "zoned": false, 00:17:00.125 "supported_io_types": { 00:17:00.125 "read": true, 00:17:00.125 "write": true, 00:17:00.125 "unmap": true, 00:17:00.125 "write_zeroes": true, 00:17:00.125 "flush": true, 00:17:00.125 "reset": true, 00:17:00.125 "compare": false, 00:17:00.125 "compare_and_write": false, 00:17:00.125 "abort": true, 00:17:00.125 "nvme_admin": false, 00:17:00.125 "nvme_io": false 00:17:00.125 }, 00:17:00.125 "memory_domains": [ 00:17:00.125 { 00:17:00.125 "dma_device_id": "system", 00:17:00.125 "dma_device_type": 1 00:17:00.125 }, 00:17:00.125 { 00:17:00.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.125 "dma_device_type": 2 00:17:00.125 } 00:17:00.125 ], 00:17:00.125 "driver_specific": {} 00:17:00.125 } 00:17:00.125 ] 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.125 02:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.385 02:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.385 "name": "Existed_Raid", 00:17:00.385 "uuid": "51bcf73b-1261-11ef-99fd-bfc7c66e2865", 00:17:00.385 "strip_size_kb": 0, 
00:17:00.385 "state": "configuring", 00:17:00.385 "raid_level": "raid1", 00:17:00.385 "superblock": true, 00:17:00.385 "num_base_bdevs": 3, 00:17:00.385 "num_base_bdevs_discovered": 2, 00:17:00.385 "num_base_bdevs_operational": 3, 00:17:00.385 "base_bdevs_list": [ 00:17:00.385 { 00:17:00.385 "name": "BaseBdev1", 00:17:00.385 "uuid": "5077aa0d-1261-11ef-99fd-bfc7c66e2865", 00:17:00.385 "is_configured": true, 00:17:00.385 "data_offset": 2048, 00:17:00.385 "data_size": 63488 00:17:00.385 }, 00:17:00.385 { 00:17:00.385 "name": "BaseBdev2", 00:17:00.385 "uuid": "52495730-1261-11ef-99fd-bfc7c66e2865", 00:17:00.385 "is_configured": true, 00:17:00.385 "data_offset": 2048, 00:17:00.385 "data_size": 63488 00:17:00.385 }, 00:17:00.385 { 00:17:00.385 "name": "BaseBdev3", 00:17:00.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.385 "is_configured": false, 00:17:00.385 "data_offset": 0, 00:17:00.385 "data_size": 0 00:17:00.385 } 00:17:00.385 ] 00:17:00.385 }' 00:17:00.385 02:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.385 02:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.652 02:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:00.912 [2024-05-15 02:17:48.764105] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.912 [2024-05-15 02:17:48.764197] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d82ba00 00:17:00.912 [2024-05-15 02:17:48.764203] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:00.912 [2024-05-15 02:17:48.764222] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d88eec0 00:17:00.912 [2024-05-15 02:17:48.764263] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d82ba00 00:17:00.912 [2024-05-15 02:17:48.764267] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d82ba00 00:17:00.912 [2024-05-15 02:17:48.764285] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.912 BaseBdev3 00:17:00.912 02:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:17:00.912 02:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:00.912 02:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:00.912 02:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:00.912 02:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:00.912 02:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:00.912 02:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:01.171 02:17:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:01.430 [ 00:17:01.430 { 00:17:01.430 "name": "BaseBdev3", 00:17:01.430 "aliases": [ 00:17:01.430 "5322018e-1261-11ef-99fd-bfc7c66e2865" 00:17:01.430 ], 
00:17:01.430 "product_name": "Malloc disk", 00:17:01.430 "block_size": 512, 00:17:01.430 "num_blocks": 65536, 00:17:01.430 "uuid": "5322018e-1261-11ef-99fd-bfc7c66e2865", 00:17:01.430 "assigned_rate_limits": { 00:17:01.430 "rw_ios_per_sec": 0, 00:17:01.430 "rw_mbytes_per_sec": 0, 00:17:01.430 "r_mbytes_per_sec": 0, 00:17:01.430 "w_mbytes_per_sec": 0 00:17:01.430 }, 00:17:01.430 "claimed": true, 00:17:01.430 "claim_type": "exclusive_write", 00:17:01.430 "zoned": false, 00:17:01.430 "supported_io_types": { 00:17:01.430 "read": true, 00:17:01.430 "write": true, 00:17:01.430 "unmap": true, 00:17:01.430 "write_zeroes": true, 00:17:01.430 "flush": true, 00:17:01.430 "reset": true, 00:17:01.430 "compare": false, 00:17:01.430 "compare_and_write": false, 00:17:01.430 "abort": true, 00:17:01.430 "nvme_admin": false, 00:17:01.430 "nvme_io": false 00:17:01.430 }, 00:17:01.430 "memory_domains": [ 00:17:01.430 { 00:17:01.430 "dma_device_id": "system", 00:17:01.430 "dma_device_type": 1 00:17:01.430 }, 00:17:01.430 { 00:17:01.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.430 "dma_device_type": 2 00:17:01.430 } 00:17:01.430 ], 00:17:01.430 "driver_specific": {} 00:17:01.430 } 00:17:01.430 ] 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.430 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.689 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.689 "name": "Existed_Raid", 00:17:01.689 "uuid": "51bcf73b-1261-11ef-99fd-bfc7c66e2865", 00:17:01.689 "strip_size_kb": 0, 00:17:01.689 "state": "online", 00:17:01.689 "raid_level": "raid1", 00:17:01.689 "superblock": true, 00:17:01.689 "num_base_bdevs": 3, 00:17:01.689 "num_base_bdevs_discovered": 3, 00:17:01.689 "num_base_bdevs_operational": 3, 00:17:01.689 "base_bdevs_list": [ 00:17:01.689 { 00:17:01.689 
"name": "BaseBdev1", 00:17:01.689 "uuid": "5077aa0d-1261-11ef-99fd-bfc7c66e2865", 00:17:01.689 "is_configured": true, 00:17:01.689 "data_offset": 2048, 00:17:01.689 "data_size": 63488 00:17:01.689 }, 00:17:01.689 { 00:17:01.689 "name": "BaseBdev2", 00:17:01.689 "uuid": "52495730-1261-11ef-99fd-bfc7c66e2865", 00:17:01.689 "is_configured": true, 00:17:01.689 "data_offset": 2048, 00:17:01.689 "data_size": 63488 00:17:01.689 }, 00:17:01.689 { 00:17:01.689 "name": "BaseBdev3", 00:17:01.689 "uuid": "5322018e-1261-11ef-99fd-bfc7c66e2865", 00:17:01.689 "is_configured": true, 00:17:01.689 "data_offset": 2048, 00:17:01.689 "data_size": 63488 00:17:01.689 } 00:17:01.689 ] 00:17:01.689 }' 00:17:01.689 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.689 02:17:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:01.948 02:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:02.208 [2024-05-15 02:17:50.212005] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.467 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:02.467 "name": "Existed_Raid", 00:17:02.467 "aliases": [ 00:17:02.467 "51bcf73b-1261-11ef-99fd-bfc7c66e2865" 00:17:02.467 ], 00:17:02.467 "product_name": "Raid Volume", 00:17:02.467 "block_size": 512, 00:17:02.467 "num_blocks": 63488, 00:17:02.467 "uuid": "51bcf73b-1261-11ef-99fd-bfc7c66e2865", 00:17:02.467 "assigned_rate_limits": { 00:17:02.467 "rw_ios_per_sec": 0, 00:17:02.467 "rw_mbytes_per_sec": 0, 00:17:02.467 "r_mbytes_per_sec": 0, 00:17:02.467 "w_mbytes_per_sec": 0 00:17:02.467 }, 00:17:02.467 "claimed": false, 00:17:02.467 "zoned": false, 00:17:02.467 "supported_io_types": { 00:17:02.467 "read": true, 00:17:02.467 "write": true, 00:17:02.467 "unmap": false, 00:17:02.467 "write_zeroes": true, 00:17:02.467 "flush": false, 00:17:02.467 "reset": true, 00:17:02.467 "compare": false, 00:17:02.467 "compare_and_write": false, 00:17:02.467 "abort": false, 00:17:02.467 "nvme_admin": false, 00:17:02.467 "nvme_io": false 00:17:02.467 }, 00:17:02.467 "memory_domains": [ 00:17:02.467 { 00:17:02.467 "dma_device_id": "system", 00:17:02.467 "dma_device_type": 1 00:17:02.467 }, 00:17:02.467 { 00:17:02.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.467 "dma_device_type": 2 00:17:02.467 }, 00:17:02.467 { 00:17:02.467 "dma_device_id": "system", 00:17:02.467 "dma_device_type": 1 00:17:02.467 }, 00:17:02.467 { 00:17:02.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.467 "dma_device_type": 2 00:17:02.467 }, 
00:17:02.467 { 00:17:02.467 "dma_device_id": "system", 00:17:02.467 "dma_device_type": 1 00:17:02.467 }, 00:17:02.467 { 00:17:02.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.467 "dma_device_type": 2 00:17:02.467 } 00:17:02.467 ], 00:17:02.467 "driver_specific": { 00:17:02.467 "raid": { 00:17:02.467 "uuid": "51bcf73b-1261-11ef-99fd-bfc7c66e2865", 00:17:02.467 "strip_size_kb": 0, 00:17:02.467 "state": "online", 00:17:02.467 "raid_level": "raid1", 00:17:02.467 "superblock": true, 00:17:02.467 "num_base_bdevs": 3, 00:17:02.467 "num_base_bdevs_discovered": 3, 00:17:02.467 "num_base_bdevs_operational": 3, 00:17:02.467 "base_bdevs_list": [ 00:17:02.467 { 00:17:02.467 "name": "BaseBdev1", 00:17:02.467 "uuid": "5077aa0d-1261-11ef-99fd-bfc7c66e2865", 00:17:02.467 "is_configured": true, 00:17:02.467 "data_offset": 2048, 00:17:02.467 "data_size": 63488 00:17:02.467 }, 00:17:02.467 { 00:17:02.467 "name": "BaseBdev2", 00:17:02.468 "uuid": "52495730-1261-11ef-99fd-bfc7c66e2865", 00:17:02.468 "is_configured": true, 00:17:02.468 "data_offset": 2048, 00:17:02.468 "data_size": 63488 00:17:02.468 }, 00:17:02.468 { 00:17:02.468 "name": "BaseBdev3", 00:17:02.468 "uuid": "5322018e-1261-11ef-99fd-bfc7c66e2865", 00:17:02.468 "is_configured": true, 00:17:02.468 "data_offset": 2048, 00:17:02.468 "data_size": 63488 00:17:02.468 } 00:17:02.468 ] 00:17:02.468 } 00:17:02.468 } 00:17:02.468 }' 00:17:02.468 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.468 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:17:02.468 BaseBdev2 00:17:02.468 BaseBdev3' 00:17:02.468 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:02.468 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:02.468 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:02.468 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:02.468 "name": "BaseBdev1", 00:17:02.468 "aliases": [ 00:17:02.468 "5077aa0d-1261-11ef-99fd-bfc7c66e2865" 00:17:02.468 ], 00:17:02.468 "product_name": "Malloc disk", 00:17:02.468 "block_size": 512, 00:17:02.468 "num_blocks": 65536, 00:17:02.468 "uuid": "5077aa0d-1261-11ef-99fd-bfc7c66e2865", 00:17:02.468 "assigned_rate_limits": { 00:17:02.468 "rw_ios_per_sec": 0, 00:17:02.468 "rw_mbytes_per_sec": 0, 00:17:02.468 "r_mbytes_per_sec": 0, 00:17:02.468 "w_mbytes_per_sec": 0 00:17:02.468 }, 00:17:02.468 "claimed": true, 00:17:02.468 "claim_type": "exclusive_write", 00:17:02.468 "zoned": false, 00:17:02.468 "supported_io_types": { 00:17:02.468 "read": true, 00:17:02.468 "write": true, 00:17:02.468 "unmap": true, 00:17:02.468 "write_zeroes": true, 00:17:02.468 "flush": true, 00:17:02.468 "reset": true, 00:17:02.468 "compare": false, 00:17:02.468 "compare_and_write": false, 00:17:02.468 "abort": true, 00:17:02.468 "nvme_admin": false, 00:17:02.468 "nvme_io": false 00:17:02.468 }, 00:17:02.468 "memory_domains": [ 00:17:02.468 { 00:17:02.468 "dma_device_id": "system", 00:17:02.468 "dma_device_type": 1 00:17:02.468 }, 00:17:02.468 { 00:17:02.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.468 "dma_device_type": 2 00:17:02.468 } 00:17:02.468 ], 00:17:02.468 "driver_specific": {} 
00:17:02.468 }' 00:17:02.468 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:02.727 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:02.986 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:02.986 "name": "BaseBdev2", 00:17:02.986 "aliases": [ 00:17:02.986 "52495730-1261-11ef-99fd-bfc7c66e2865" 00:17:02.986 ], 00:17:02.986 "product_name": "Malloc disk", 00:17:02.986 "block_size": 512, 00:17:02.986 "num_blocks": 65536, 00:17:02.986 "uuid": "52495730-1261-11ef-99fd-bfc7c66e2865", 00:17:02.986 "assigned_rate_limits": { 00:17:02.986 "rw_ios_per_sec": 0, 00:17:02.986 "rw_mbytes_per_sec": 0, 00:17:02.986 "r_mbytes_per_sec": 0, 00:17:02.986 "w_mbytes_per_sec": 0 00:17:02.986 }, 00:17:02.986 "claimed": true, 00:17:02.986 "claim_type": "exclusive_write", 00:17:02.986 "zoned": false, 00:17:02.986 "supported_io_types": { 00:17:02.986 "read": true, 00:17:02.986 "write": true, 00:17:02.986 "unmap": true, 00:17:02.986 "write_zeroes": true, 00:17:02.986 "flush": true, 00:17:02.986 "reset": true, 00:17:02.986 "compare": false, 00:17:02.986 "compare_and_write": false, 00:17:02.986 "abort": true, 00:17:02.986 "nvme_admin": false, 00:17:02.986 "nvme_io": false 00:17:02.986 }, 00:17:02.986 "memory_domains": [ 00:17:02.986 { 00:17:02.986 "dma_device_id": "system", 00:17:02.986 "dma_device_type": 1 00:17:02.986 }, 00:17:02.987 { 00:17:02.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.987 "dma_device_type": 2 00:17:02.987 } 00:17:02.987 ], 00:17:02.987 "driver_specific": {} 00:17:02.987 }' 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:02.987 
02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:02.987 02:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:03.287 "name": "BaseBdev3", 00:17:03.287 "aliases": [ 00:17:03.287 "5322018e-1261-11ef-99fd-bfc7c66e2865" 00:17:03.287 ], 00:17:03.287 "product_name": "Malloc disk", 00:17:03.287 "block_size": 512, 00:17:03.287 "num_blocks": 65536, 00:17:03.287 "uuid": "5322018e-1261-11ef-99fd-bfc7c66e2865", 00:17:03.287 "assigned_rate_limits": { 00:17:03.287 "rw_ios_per_sec": 0, 00:17:03.287 "rw_mbytes_per_sec": 0, 00:17:03.287 "r_mbytes_per_sec": 0, 00:17:03.287 "w_mbytes_per_sec": 0 00:17:03.287 }, 00:17:03.287 "claimed": true, 00:17:03.287 "claim_type": "exclusive_write", 00:17:03.287 "zoned": false, 00:17:03.287 "supported_io_types": { 00:17:03.287 "read": true, 00:17:03.287 "write": true, 00:17:03.287 "unmap": true, 00:17:03.287 "write_zeroes": true, 00:17:03.287 "flush": true, 00:17:03.287 "reset": true, 00:17:03.287 "compare": false, 00:17:03.287 "compare_and_write": false, 00:17:03.287 "abort": true, 00:17:03.287 "nvme_admin": false, 00:17:03.287 "nvme_io": false 00:17:03.287 }, 00:17:03.287 "memory_domains": [ 00:17:03.287 { 00:17:03.287 "dma_device_id": "system", 00:17:03.287 "dma_device_type": 1 00:17:03.287 }, 00:17:03.287 { 00:17:03.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.287 "dma_device_type": 2 00:17:03.287 } 00:17:03.287 ], 00:17:03.287 "driver_specific": {} 00:17:03.287 }' 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:03.287 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:03.288 02:17:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.288 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:03.288 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:03.288 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:03.288 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:03.564 [2024-05-15 02:17:51.371873] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.564 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.822 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.822 "name": "Existed_Raid", 00:17:03.822 "uuid": "51bcf73b-1261-11ef-99fd-bfc7c66e2865", 00:17:03.822 "strip_size_kb": 0, 00:17:03.822 "state": "online", 00:17:03.822 "raid_level": "raid1", 00:17:03.822 "superblock": true, 00:17:03.822 "num_base_bdevs": 3, 00:17:03.822 "num_base_bdevs_discovered": 2, 00:17:03.822 "num_base_bdevs_operational": 2, 00:17:03.822 "base_bdevs_list": [ 00:17:03.822 { 00:17:03.822 "name": null, 00:17:03.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.822 "is_configured": false, 00:17:03.822 "data_offset": 2048, 00:17:03.822 "data_size": 63488 00:17:03.822 }, 00:17:03.822 { 00:17:03.822 "name": "BaseBdev2", 00:17:03.822 "uuid": 
"52495730-1261-11ef-99fd-bfc7c66e2865", 00:17:03.822 "is_configured": true, 00:17:03.822 "data_offset": 2048, 00:17:03.822 "data_size": 63488 00:17:03.822 }, 00:17:03.822 { 00:17:03.822 "name": "BaseBdev3", 00:17:03.822 "uuid": "5322018e-1261-11ef-99fd-bfc7c66e2865", 00:17:03.822 "is_configured": true, 00:17:03.822 "data_offset": 2048, 00:17:03.822 "data_size": 63488 00:17:03.822 } 00:17:03.822 ] 00:17:03.822 }' 00:17:03.822 02:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.822 02:17:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.080 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:04.080 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:04.080 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.080 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:04.647 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:04.647 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.647 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:04.905 [2024-05-15 02:17:52.760800] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.905 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:04.905 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:04.905 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.905 02:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:05.162 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:05.162 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.162 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:05.421 [2024-05-15 02:17:53.386510] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.421 [2024-05-15 02:17:53.386569] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.421 [2024-05-15 02:17:53.391639] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.421 [2024-05-15 02:17:53.391683] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.421 [2024-05-15 02:17:53.391688] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d82ba00 name Existed_Raid, state offline 00:17:05.421 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.421 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.421 02:17:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.421 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.679 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:05.679 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:05.679 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:17:05.679 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:17:05.679 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:05.679 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:05.965 BaseBdev2 00:17:05.965 02:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:17:05.965 02:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:05.966 02:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:05.966 02:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:05.966 02:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:05.966 02:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:05.966 02:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.224 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:06.482 [ 00:17:06.482 { 00:17:06.482 "name": "BaseBdev2", 00:17:06.482 "aliases": [ 00:17:06.482 "562cc108-1261-11ef-99fd-bfc7c66e2865" 00:17:06.482 ], 00:17:06.482 "product_name": "Malloc disk", 00:17:06.482 "block_size": 512, 00:17:06.482 "num_blocks": 65536, 00:17:06.482 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:06.482 "assigned_rate_limits": { 00:17:06.482 "rw_ios_per_sec": 0, 00:17:06.482 "rw_mbytes_per_sec": 0, 00:17:06.482 "r_mbytes_per_sec": 0, 00:17:06.482 "w_mbytes_per_sec": 0 00:17:06.482 }, 00:17:06.482 "claimed": false, 00:17:06.482 "zoned": false, 00:17:06.482 "supported_io_types": { 00:17:06.482 "read": true, 00:17:06.482 "write": true, 00:17:06.482 "unmap": true, 00:17:06.482 "write_zeroes": true, 00:17:06.482 "flush": true, 00:17:06.482 "reset": true, 00:17:06.482 "compare": false, 00:17:06.482 "compare_and_write": false, 00:17:06.482 "abort": true, 00:17:06.482 "nvme_admin": false, 00:17:06.482 "nvme_io": false 00:17:06.482 }, 00:17:06.482 "memory_domains": [ 00:17:06.482 { 00:17:06.482 "dma_device_id": "system", 00:17:06.482 "dma_device_type": 1 00:17:06.482 }, 00:17:06.482 { 00:17:06.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.482 "dma_device_type": 2 00:17:06.482 } 00:17:06.482 ], 00:17:06.482 "driver_specific": {} 00:17:06.482 } 00:17:06.482 ] 00:17:06.482 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 
0 00:17:06.482 02:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:06.482 02:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:06.482 02:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:06.740 BaseBdev3 00:17:06.998 02:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:17:06.998 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:06.998 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:06.998 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:06.998 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:06.998 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:06.998 02:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.256 02:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.515 [ 00:17:07.515 { 00:17:07.515 "name": "BaseBdev3", 00:17:07.515 "aliases": [ 00:17:07.515 "56b3063a-1261-11ef-99fd-bfc7c66e2865" 00:17:07.515 ], 00:17:07.515 "product_name": "Malloc disk", 00:17:07.515 "block_size": 512, 00:17:07.515 "num_blocks": 65536, 00:17:07.515 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:07.515 "assigned_rate_limits": { 00:17:07.515 "rw_ios_per_sec": 0, 00:17:07.515 "rw_mbytes_per_sec": 0, 00:17:07.515 "r_mbytes_per_sec": 0, 00:17:07.515 "w_mbytes_per_sec": 0 00:17:07.515 }, 00:17:07.515 "claimed": false, 00:17:07.515 "zoned": false, 00:17:07.515 "supported_io_types": { 00:17:07.515 "read": true, 00:17:07.515 "write": true, 00:17:07.515 "unmap": true, 00:17:07.515 "write_zeroes": true, 00:17:07.515 "flush": true, 00:17:07.515 "reset": true, 00:17:07.515 "compare": false, 00:17:07.515 "compare_and_write": false, 00:17:07.515 "abort": true, 00:17:07.515 "nvme_admin": false, 00:17:07.515 "nvme_io": false 00:17:07.515 }, 00:17:07.515 "memory_domains": [ 00:17:07.515 { 00:17:07.515 "dma_device_id": "system", 00:17:07.515 "dma_device_type": 1 00:17:07.515 }, 00:17:07.515 { 00:17:07.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.515 "dma_device_type": 2 00:17:07.515 } 00:17:07.515 ], 00:17:07.515 "driver_specific": {} 00:17:07.515 } 00:17:07.515 ] 00:17:07.515 02:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:07.515 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:07.515 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:07.515 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:07.772 [2024-05-15 02:17:55.587525] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:17:07.772 [2024-05-15 02:17:55.587611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.772 [2024-05-15 02:17:55.587622] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.772 [2024-05-15 02:17:55.588071] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.772 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.773 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.030 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.030 "name": "Existed_Raid", 00:17:08.030 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:08.030 "strip_size_kb": 0, 00:17:08.030 "state": "configuring", 00:17:08.030 "raid_level": "raid1", 00:17:08.030 "superblock": true, 00:17:08.030 "num_base_bdevs": 3, 00:17:08.030 "num_base_bdevs_discovered": 2, 00:17:08.030 "num_base_bdevs_operational": 3, 00:17:08.030 "base_bdevs_list": [ 00:17:08.030 { 00:17:08.030 "name": "BaseBdev1", 00:17:08.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.030 "is_configured": false, 00:17:08.030 "data_offset": 0, 00:17:08.030 "data_size": 0 00:17:08.030 }, 00:17:08.030 { 00:17:08.030 "name": "BaseBdev2", 00:17:08.030 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:08.030 "is_configured": true, 00:17:08.030 "data_offset": 2048, 00:17:08.030 "data_size": 63488 00:17:08.030 }, 00:17:08.030 { 00:17:08.030 "name": "BaseBdev3", 00:17:08.030 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:08.030 "is_configured": true, 00:17:08.030 "data_offset": 2048, 00:17:08.030 "data_size": 63488 00:17:08.030 } 00:17:08.030 ] 00:17:08.030 }' 00:17:08.030 02:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.030 02:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.287 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:08.749 [2024-05-15 
02:17:56.447484] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.750 "name": "Existed_Raid", 00:17:08.750 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:08.750 "strip_size_kb": 0, 00:17:08.750 "state": "configuring", 00:17:08.750 "raid_level": "raid1", 00:17:08.750 "superblock": true, 00:17:08.750 "num_base_bdevs": 3, 00:17:08.750 "num_base_bdevs_discovered": 1, 00:17:08.750 "num_base_bdevs_operational": 3, 00:17:08.750 "base_bdevs_list": [ 00:17:08.750 { 00:17:08.750 "name": "BaseBdev1", 00:17:08.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.750 "is_configured": false, 00:17:08.750 "data_offset": 0, 00:17:08.750 "data_size": 0 00:17:08.750 }, 00:17:08.750 { 00:17:08.750 "name": null, 00:17:08.750 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:08.750 "is_configured": false, 00:17:08.750 "data_offset": 2048, 00:17:08.750 "data_size": 63488 00:17:08.750 }, 00:17:08.750 { 00:17:08.750 "name": "BaseBdev3", 00:17:08.750 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:08.750 "is_configured": true, 00:17:08.750 "data_offset": 2048, 00:17:08.750 "data_size": 63488 00:17:08.750 } 00:17:08.750 ] 00:17:08.750 }' 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.750 02:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.322 02:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.322 02:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:09.322 02:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:17:09.322 02:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:09.580 [2024-05-15 02:17:57.595577] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.839 BaseBdev1 00:17:09.839 02:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:17:09.839 02:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:09.839 02:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:09.839 02:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:09.839 02:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:09.839 02:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:09.839 02:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.097 02:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:10.356 [ 00:17:10.356 { 00:17:10.356 "name": "BaseBdev1", 00:17:10.356 "aliases": [ 00:17:10.356 "58659540-1261-11ef-99fd-bfc7c66e2865" 00:17:10.356 ], 00:17:10.356 "product_name": "Malloc disk", 00:17:10.356 "block_size": 512, 00:17:10.356 "num_blocks": 65536, 00:17:10.356 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:10.356 "assigned_rate_limits": { 00:17:10.356 "rw_ios_per_sec": 0, 00:17:10.356 "rw_mbytes_per_sec": 0, 00:17:10.356 "r_mbytes_per_sec": 0, 00:17:10.356 "w_mbytes_per_sec": 0 00:17:10.356 }, 00:17:10.356 "claimed": true, 00:17:10.356 "claim_type": "exclusive_write", 00:17:10.356 "zoned": false, 00:17:10.356 "supported_io_types": { 00:17:10.356 "read": true, 00:17:10.356 "write": true, 00:17:10.356 "unmap": true, 00:17:10.356 "write_zeroes": true, 00:17:10.356 "flush": true, 00:17:10.356 "reset": true, 00:17:10.356 "compare": false, 00:17:10.356 "compare_and_write": false, 00:17:10.356 "abort": true, 00:17:10.356 "nvme_admin": false, 00:17:10.356 "nvme_io": false 00:17:10.356 }, 00:17:10.356 "memory_domains": [ 00:17:10.356 { 00:17:10.356 "dma_device_id": "system", 00:17:10.356 "dma_device_type": 1 00:17:10.356 }, 00:17:10.356 { 00:17:10.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.356 "dma_device_type": 2 00:17:10.356 } 00:17:10.356 ], 00:17:10.356 "driver_specific": {} 00:17:10.356 } 00:17:10.356 ] 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.356 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.613 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.613 "name": "Existed_Raid", 00:17:10.613 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:10.613 "strip_size_kb": 0, 00:17:10.613 "state": "configuring", 00:17:10.613 "raid_level": "raid1", 00:17:10.613 "superblock": true, 00:17:10.613 "num_base_bdevs": 3, 00:17:10.613 "num_base_bdevs_discovered": 2, 00:17:10.613 "num_base_bdevs_operational": 3, 00:17:10.613 "base_bdevs_list": [ 00:17:10.613 { 00:17:10.613 "name": "BaseBdev1", 00:17:10.614 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:10.614 "is_configured": true, 00:17:10.614 "data_offset": 2048, 00:17:10.614 "data_size": 63488 00:17:10.614 }, 00:17:10.614 { 00:17:10.614 "name": null, 00:17:10.614 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:10.614 "is_configured": false, 00:17:10.614 "data_offset": 2048, 00:17:10.614 "data_size": 63488 00:17:10.614 }, 00:17:10.614 { 00:17:10.614 "name": "BaseBdev3", 00:17:10.614 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:10.614 "is_configured": true, 00:17:10.614 "data_offset": 2048, 00:17:10.614 "data_size": 63488 00:17:10.614 } 00:17:10.614 ] 00:17:10.614 }' 00:17:10.614 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.614 02:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.892 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.892 02:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:11.167 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:11.168 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:11.426 [2024-05-15 02:17:59.351427] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.426 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.684 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.684 "name": "Existed_Raid", 00:17:11.684 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:11.684 "strip_size_kb": 0, 00:17:11.684 "state": "configuring", 00:17:11.684 "raid_level": "raid1", 00:17:11.684 "superblock": true, 00:17:11.684 "num_base_bdevs": 3, 00:17:11.684 "num_base_bdevs_discovered": 1, 00:17:11.684 "num_base_bdevs_operational": 3, 00:17:11.684 "base_bdevs_list": [ 00:17:11.684 { 00:17:11.684 "name": "BaseBdev1", 00:17:11.684 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:11.684 "is_configured": true, 00:17:11.684 "data_offset": 2048, 00:17:11.684 "data_size": 63488 00:17:11.684 }, 00:17:11.684 { 00:17:11.684 "name": null, 00:17:11.684 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:11.684 "is_configured": false, 00:17:11.684 "data_offset": 2048, 00:17:11.684 "data_size": 63488 00:17:11.684 }, 00:17:11.684 { 00:17:11.685 "name": null, 00:17:11.685 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:11.685 "is_configured": false, 00:17:11.685 "data_offset": 2048, 00:17:11.685 "data_size": 63488 00:17:11.685 } 00:17:11.685 ] 00:17:11.685 }' 00:17:11.685 02:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.685 02:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.250 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.250 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.507 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:17:12.507 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:12.765 [2024-05-15 02:18:00.659447] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.765 02:18:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.765 02:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.023 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.023 "name": "Existed_Raid", 00:17:13.023 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:13.023 "strip_size_kb": 0, 00:17:13.023 "state": "configuring", 00:17:13.023 "raid_level": "raid1", 00:17:13.023 "superblock": true, 00:17:13.023 "num_base_bdevs": 3, 00:17:13.023 "num_base_bdevs_discovered": 2, 00:17:13.023 "num_base_bdevs_operational": 3, 00:17:13.024 "base_bdevs_list": [ 00:17:13.024 { 00:17:13.024 "name": "BaseBdev1", 00:17:13.024 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:13.024 "is_configured": true, 00:17:13.024 "data_offset": 2048, 00:17:13.024 "data_size": 63488 00:17:13.024 }, 00:17:13.024 { 00:17:13.024 "name": null, 00:17:13.024 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:13.024 "is_configured": false, 00:17:13.024 "data_offset": 2048, 00:17:13.024 "data_size": 63488 00:17:13.024 }, 00:17:13.024 { 00:17:13.024 "name": "BaseBdev3", 00:17:13.024 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:13.024 "is_configured": true, 00:17:13.024 "data_offset": 2048, 00:17:13.024 "data_size": 63488 00:17:13.024 } 00:17:13.024 ] 00:17:13.024 }' 00:17:13.024 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.024 02:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.587 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.587 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.845 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:17:13.845 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:14.103 [2024-05-15 02:18:01.895461] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.103 02:18:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.103 02:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.361 02:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.361 "name": "Existed_Raid", 00:17:14.361 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:14.361 "strip_size_kb": 0, 00:17:14.361 "state": "configuring", 00:17:14.361 "raid_level": "raid1", 00:17:14.361 "superblock": true, 00:17:14.361 "num_base_bdevs": 3, 00:17:14.361 "num_base_bdevs_discovered": 1, 00:17:14.361 "num_base_bdevs_operational": 3, 00:17:14.361 "base_bdevs_list": [ 00:17:14.361 { 00:17:14.361 "name": null, 00:17:14.361 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:14.361 "is_configured": false, 00:17:14.361 "data_offset": 2048, 00:17:14.361 "data_size": 63488 00:17:14.361 }, 00:17:14.361 { 00:17:14.361 "name": null, 00:17:14.361 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:14.361 "is_configured": false, 00:17:14.361 "data_offset": 2048, 00:17:14.361 "data_size": 63488 00:17:14.361 }, 00:17:14.361 { 00:17:14.361 "name": "BaseBdev3", 00:17:14.361 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:14.361 "is_configured": true, 00:17:14.361 "data_offset": 2048, 00:17:14.361 "data_size": 63488 00:17:14.361 } 00:17:14.361 ] 00:17:14.361 }' 00:17:14.361 02:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.361 02:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.620 02:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.620 02:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:15.257 02:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:17:15.257 02:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:15.257 [2024-05-15 02:18:03.220243] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
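Every verify_raid_bdev_state call in this trace, like the one just above, expands into the same probe: dump all RAID bdevs over the test's private RPC socket, pick the entry by name with jq, and compare the state and base-bdev counters against the expected values. A minimal stand-alone sketch of that pattern is below; the rpc.py path, socket, raid name, and expected values are the ones used in this run and are assumptions anywhere else.

    #!/usr/bin/env bash
    # Sketch of the check verify_raid_bdev_state performs (paths taken from this log).
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump every raid bdev and keep only the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')

    state=$(jq -r .state <<< "$info")
    discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")

    [[ $state == configuring ]] || echo "unexpected state: $state"
    [[ $discovered -eq 2 ]]     || echo "unexpected discovered count: $discovered"

The per-slot checks elsewhere in the trace (for example jq '.[0].base_bdevs_list[1].is_configured') reuse the same bdev_raid_get_bdevs dump and only change the jq filter.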
00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.257 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.514 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.514 "name": "Existed_Raid", 00:17:15.514 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:15.514 "strip_size_kb": 0, 00:17:15.514 "state": "configuring", 00:17:15.514 "raid_level": "raid1", 00:17:15.514 "superblock": true, 00:17:15.514 "num_base_bdevs": 3, 00:17:15.514 "num_base_bdevs_discovered": 2, 00:17:15.514 "num_base_bdevs_operational": 3, 00:17:15.514 "base_bdevs_list": [ 00:17:15.514 { 00:17:15.514 "name": null, 00:17:15.514 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:15.514 "is_configured": false, 00:17:15.514 "data_offset": 2048, 00:17:15.514 "data_size": 63488 00:17:15.514 }, 00:17:15.514 { 00:17:15.514 "name": "BaseBdev2", 00:17:15.514 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:15.514 "is_configured": true, 00:17:15.514 "data_offset": 2048, 00:17:15.514 "data_size": 63488 00:17:15.514 }, 00:17:15.514 { 00:17:15.514 "name": "BaseBdev3", 00:17:15.514 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:15.514 "is_configured": true, 00:17:15.514 "data_offset": 2048, 00:17:15.514 "data_size": 63488 00:17:15.514 } 00:17:15.514 ] 00:17:15.514 }' 00:17:15.514 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.514 02:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.077 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.077 02:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:16.335 02:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:17:16.335 02:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.335 02:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:16.592 
02:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 58659540-1261-11ef-99fd-bfc7c66e2865 00:17:16.849 [2024-05-15 02:18:04.656434] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:16.849 [2024-05-15 02:18:04.656549] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d82bf00 00:17:16.849 [2024-05-15 02:18:04.656561] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:16.849 [2024-05-15 02:18:04.656603] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d88ee20 00:17:16.849 [2024-05-15 02:18:04.656674] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d82bf00 00:17:16.849 [2024-05-15 02:18:04.656683] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d82bf00 00:17:16.849 [2024-05-15 02:18:04.656721] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.849 NewBaseBdev 00:17:16.849 02:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:17:16.849 02:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:17:16.849 02:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:16.849 02:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:16.849 02:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:16.849 02:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:16.849 02:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.105 02:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:17.363 [ 00:17:17.363 { 00:17:17.363 "name": "NewBaseBdev", 00:17:17.363 "aliases": [ 00:17:17.363 "58659540-1261-11ef-99fd-bfc7c66e2865" 00:17:17.363 ], 00:17:17.363 "product_name": "Malloc disk", 00:17:17.363 "block_size": 512, 00:17:17.363 "num_blocks": 65536, 00:17:17.363 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:17.363 "assigned_rate_limits": { 00:17:17.363 "rw_ios_per_sec": 0, 00:17:17.363 "rw_mbytes_per_sec": 0, 00:17:17.363 "r_mbytes_per_sec": 0, 00:17:17.363 "w_mbytes_per_sec": 0 00:17:17.363 }, 00:17:17.363 "claimed": true, 00:17:17.363 "claim_type": "exclusive_write", 00:17:17.363 "zoned": false, 00:17:17.363 "supported_io_types": { 00:17:17.363 "read": true, 00:17:17.363 "write": true, 00:17:17.363 "unmap": true, 00:17:17.364 "write_zeroes": true, 00:17:17.364 "flush": true, 00:17:17.364 "reset": true, 00:17:17.364 "compare": false, 00:17:17.364 "compare_and_write": false, 00:17:17.364 "abort": true, 00:17:17.364 "nvme_admin": false, 00:17:17.364 "nvme_io": false 00:17:17.364 }, 00:17:17.364 "memory_domains": [ 00:17:17.364 { 00:17:17.364 "dma_device_id": "system", 00:17:17.364 "dma_device_type": 1 00:17:17.364 }, 00:17:17.364 { 00:17:17.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.364 "dma_device_type": 2 00:17:17.364 } 00:17:17.364 ], 
00:17:17.364 "driver_specific": {} 00:17:17.364 } 00:17:17.364 ] 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.364 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.621 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.621 "name": "Existed_Raid", 00:17:17.621 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:17.621 "strip_size_kb": 0, 00:17:17.621 "state": "online", 00:17:17.621 "raid_level": "raid1", 00:17:17.621 "superblock": true, 00:17:17.621 "num_base_bdevs": 3, 00:17:17.621 "num_base_bdevs_discovered": 3, 00:17:17.621 "num_base_bdevs_operational": 3, 00:17:17.621 "base_bdevs_list": [ 00:17:17.621 { 00:17:17.621 "name": "NewBaseBdev", 00:17:17.621 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:17.621 "is_configured": true, 00:17:17.621 "data_offset": 2048, 00:17:17.621 "data_size": 63488 00:17:17.621 }, 00:17:17.621 { 00:17:17.621 "name": "BaseBdev2", 00:17:17.621 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:17.621 "is_configured": true, 00:17:17.621 "data_offset": 2048, 00:17:17.621 "data_size": 63488 00:17:17.621 }, 00:17:17.621 { 00:17:17.621 "name": "BaseBdev3", 00:17:17.621 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:17.621 "is_configured": true, 00:17:17.621 "data_offset": 2048, 00:17:17.621 "data_size": 63488 00:17:17.621 } 00:17:17.621 ] 00:17:17.621 }' 00:17:17.621 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.621 02:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # 
local base_bdev_info 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:18.183 02:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:18.445 [2024-05-15 02:18:06.212264] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.445 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:18.445 "name": "Existed_Raid", 00:17:18.445 "aliases": [ 00:17:18.445 "57333169-1261-11ef-99fd-bfc7c66e2865" 00:17:18.445 ], 00:17:18.445 "product_name": "Raid Volume", 00:17:18.445 "block_size": 512, 00:17:18.445 "num_blocks": 63488, 00:17:18.445 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:18.445 "assigned_rate_limits": { 00:17:18.445 "rw_ios_per_sec": 0, 00:17:18.445 "rw_mbytes_per_sec": 0, 00:17:18.445 "r_mbytes_per_sec": 0, 00:17:18.445 "w_mbytes_per_sec": 0 00:17:18.445 }, 00:17:18.445 "claimed": false, 00:17:18.445 "zoned": false, 00:17:18.445 "supported_io_types": { 00:17:18.445 "read": true, 00:17:18.445 "write": true, 00:17:18.445 "unmap": false, 00:17:18.445 "write_zeroes": true, 00:17:18.445 "flush": false, 00:17:18.445 "reset": true, 00:17:18.445 "compare": false, 00:17:18.445 "compare_and_write": false, 00:17:18.445 "abort": false, 00:17:18.445 "nvme_admin": false, 00:17:18.445 "nvme_io": false 00:17:18.445 }, 00:17:18.445 "memory_domains": [ 00:17:18.445 { 00:17:18.445 "dma_device_id": "system", 00:17:18.445 "dma_device_type": 1 00:17:18.445 }, 00:17:18.445 { 00:17:18.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.445 "dma_device_type": 2 00:17:18.445 }, 00:17:18.445 { 00:17:18.445 "dma_device_id": "system", 00:17:18.445 "dma_device_type": 1 00:17:18.445 }, 00:17:18.445 { 00:17:18.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.445 "dma_device_type": 2 00:17:18.445 }, 00:17:18.445 { 00:17:18.445 "dma_device_id": "system", 00:17:18.445 "dma_device_type": 1 00:17:18.445 }, 00:17:18.445 { 00:17:18.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.445 "dma_device_type": 2 00:17:18.445 } 00:17:18.445 ], 00:17:18.445 "driver_specific": { 00:17:18.445 "raid": { 00:17:18.445 "uuid": "57333169-1261-11ef-99fd-bfc7c66e2865", 00:17:18.445 "strip_size_kb": 0, 00:17:18.445 "state": "online", 00:17:18.445 "raid_level": "raid1", 00:17:18.445 "superblock": true, 00:17:18.445 "num_base_bdevs": 3, 00:17:18.445 "num_base_bdevs_discovered": 3, 00:17:18.445 "num_base_bdevs_operational": 3, 00:17:18.445 "base_bdevs_list": [ 00:17:18.445 { 00:17:18.445 "name": "NewBaseBdev", 00:17:18.445 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:18.445 "is_configured": true, 00:17:18.445 "data_offset": 2048, 00:17:18.445 "data_size": 63488 00:17:18.445 }, 00:17:18.445 { 00:17:18.445 "name": "BaseBdev2", 00:17:18.445 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:18.445 "is_configured": true, 00:17:18.445 "data_offset": 2048, 00:17:18.445 "data_size": 63488 00:17:18.445 }, 00:17:18.445 { 00:17:18.445 "name": "BaseBdev3", 00:17:18.445 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:18.445 "is_configured": true, 00:17:18.445 "data_offset": 2048, 00:17:18.445 "data_size": 63488 00:17:18.445 } 00:17:18.445 ] 
00:17:18.445 } 00:17:18.445 } 00:17:18.445 }' 00:17:18.445 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:18.445 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:17:18.445 BaseBdev2 00:17:18.445 BaseBdev3' 00:17:18.445 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:18.445 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:18.445 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:18.706 "name": "NewBaseBdev", 00:17:18.706 "aliases": [ 00:17:18.706 "58659540-1261-11ef-99fd-bfc7c66e2865" 00:17:18.706 ], 00:17:18.706 "product_name": "Malloc disk", 00:17:18.706 "block_size": 512, 00:17:18.706 "num_blocks": 65536, 00:17:18.706 "uuid": "58659540-1261-11ef-99fd-bfc7c66e2865", 00:17:18.706 "assigned_rate_limits": { 00:17:18.706 "rw_ios_per_sec": 0, 00:17:18.706 "rw_mbytes_per_sec": 0, 00:17:18.706 "r_mbytes_per_sec": 0, 00:17:18.706 "w_mbytes_per_sec": 0 00:17:18.706 }, 00:17:18.706 "claimed": true, 00:17:18.706 "claim_type": "exclusive_write", 00:17:18.706 "zoned": false, 00:17:18.706 "supported_io_types": { 00:17:18.706 "read": true, 00:17:18.706 "write": true, 00:17:18.706 "unmap": true, 00:17:18.706 "write_zeroes": true, 00:17:18.706 "flush": true, 00:17:18.706 "reset": true, 00:17:18.706 "compare": false, 00:17:18.706 "compare_and_write": false, 00:17:18.706 "abort": true, 00:17:18.706 "nvme_admin": false, 00:17:18.706 "nvme_io": false 00:17:18.706 }, 00:17:18.706 "memory_domains": [ 00:17:18.706 { 00:17:18.706 "dma_device_id": "system", 00:17:18.706 "dma_device_type": 1 00:17:18.706 }, 00:17:18.706 { 00:17:18.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.706 "dma_device_type": 2 00:17:18.706 } 00:17:18.706 ], 00:17:18.706 "driver_specific": {} 00:17:18.706 }' 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:18.706 02:18:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:18.706 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:18.964 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:18.964 "name": "BaseBdev2", 00:17:18.964 "aliases": [ 00:17:18.964 "562cc108-1261-11ef-99fd-bfc7c66e2865" 00:17:18.964 ], 00:17:18.964 "product_name": "Malloc disk", 00:17:18.964 "block_size": 512, 00:17:18.964 "num_blocks": 65536, 00:17:18.964 "uuid": "562cc108-1261-11ef-99fd-bfc7c66e2865", 00:17:18.964 "assigned_rate_limits": { 00:17:18.964 "rw_ios_per_sec": 0, 00:17:18.964 "rw_mbytes_per_sec": 0, 00:17:18.964 "r_mbytes_per_sec": 0, 00:17:18.964 "w_mbytes_per_sec": 0 00:17:18.964 }, 00:17:18.964 "claimed": true, 00:17:18.964 "claim_type": "exclusive_write", 00:17:18.964 "zoned": false, 00:17:18.965 "supported_io_types": { 00:17:18.965 "read": true, 00:17:18.965 "write": true, 00:17:18.965 "unmap": true, 00:17:18.965 "write_zeroes": true, 00:17:18.965 "flush": true, 00:17:18.965 "reset": true, 00:17:18.965 "compare": false, 00:17:18.965 "compare_and_write": false, 00:17:18.965 "abort": true, 00:17:18.965 "nvme_admin": false, 00:17:18.965 "nvme_io": false 00:17:18.965 }, 00:17:18.965 "memory_domains": [ 00:17:18.965 { 00:17:18.965 "dma_device_id": "system", 00:17:18.965 "dma_device_type": 1 00:17:18.965 }, 00:17:18.965 { 00:17:18.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.965 "dma_device_type": 2 00:17:18.965 } 00:17:18.965 ], 00:17:18.965 "driver_specific": {} 00:17:18.965 }' 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:18.965 02:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:19.223 02:18:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:19.223 "name": "BaseBdev3", 00:17:19.223 "aliases": [ 00:17:19.223 "56b3063a-1261-11ef-99fd-bfc7c66e2865" 00:17:19.223 ], 00:17:19.223 "product_name": "Malloc disk", 00:17:19.223 "block_size": 512, 00:17:19.223 "num_blocks": 65536, 00:17:19.223 "uuid": "56b3063a-1261-11ef-99fd-bfc7c66e2865", 00:17:19.223 "assigned_rate_limits": { 00:17:19.223 "rw_ios_per_sec": 0, 00:17:19.223 "rw_mbytes_per_sec": 0, 00:17:19.223 "r_mbytes_per_sec": 0, 00:17:19.223 "w_mbytes_per_sec": 0 00:17:19.223 }, 00:17:19.223 "claimed": true, 00:17:19.223 "claim_type": "exclusive_write", 00:17:19.223 "zoned": false, 00:17:19.223 "supported_io_types": { 00:17:19.223 "read": true, 00:17:19.223 "write": true, 00:17:19.223 "unmap": true, 00:17:19.223 "write_zeroes": true, 00:17:19.223 "flush": true, 00:17:19.223 "reset": true, 00:17:19.223 "compare": false, 00:17:19.223 "compare_and_write": false, 00:17:19.223 "abort": true, 00:17:19.223 "nvme_admin": false, 00:17:19.223 "nvme_io": false 00:17:19.223 }, 00:17:19.223 "memory_domains": [ 00:17:19.223 { 00:17:19.223 "dma_device_id": "system", 00:17:19.223 "dma_device_type": 1 00:17:19.223 }, 00:17:19.223 { 00:17:19.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.223 "dma_device_type": 2 00:17:19.223 } 00:17:19.223 ], 00:17:19.223 "driver_specific": {} 00:17:19.223 }' 00:17:19.223 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:19.223 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:19.481 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:19.740 [2024-05-15 02:18:07.596224] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.740 [2024-05-15 02:18:07.596255] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.740 [2024-05-15 02:18:07.596278] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.740 [2024-05-15 02:18:07.596359] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.740 [2024-05-15 02:18:07.596364] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d82bf00 name Existed_Raid, state offline 00:17:19.740 02:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 55946 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 55946 ']' 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 55946 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 55946 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:17:19.740 killing process with pid 55946 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55946' 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 55946 00:17:19.740 [2024-05-15 02:18:07.638129] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.740 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 55946 00:17:19.740 [2024-05-15 02:18:07.653487] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.999 02:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:17:19.999 00:17:19.999 real 0m26.595s 00:17:19.999 user 0m48.839s 00:17:19.999 sys 0m3.421s 00:17:19.999 ************************************ 00:17:20.000 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:20.000 02:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.000 END TEST raid_state_function_test_sb 00:17:20.000 ************************************ 00:17:20.000 02:18:07 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:20.000 02:18:07 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:20.000 02:18:07 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:20.000 02:18:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.000 ************************************ 00:17:20.000 START TEST raid_superblock_test 00:17:20.000 ************************************ 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 
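The superblock test that starts here talks to its own SPDK application: the next lines launch test/app/bdev_svc with -r pointing at the raid RPC socket and -L bdev_raid, which enables the bdev_raid debug log responsible for the *DEBUG* lines throughout this output, and then wait for the socket before issuing any bdev_* RPCs. A rough sketch of that launch step follows; the real test uses the waitforlisten helper from autotest_common.sh, so the socket-file poll here is a simplification.

    #!/usr/bin/env bash
    # Sketch of the target launch performed by the following trace lines.
    svc=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc  # path from this log
    sock=/var/tmp/spdk-raid.sock

    "$svc" -r "$sock" -L bdev_raid &   # -L bdev_raid turns on the raid debug log
    svc_pid=$!

    for _ in $(seq 1 100); do          # give the app up to ~10s to create the socket
        [[ -S $sock ]] && break
        sleep 0.1
    done

    # ...issue bdev_* RPCs against "$sock"...
    # a killprocess step like the one above tears "$svc_pid" down when the test ends.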
00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=56682 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 56682 /var/tmp/spdk-raid.sock 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 56682 ']' 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:20.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:20.000 02:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.000 [2024-05-15 02:18:07.862022] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:20.000 [2024-05-15 02:18:07.862269] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:20.567 EAL: TSC is not safe to use in SMP mode 00:17:20.567 EAL: TSC is not invariant 00:17:20.567 [2024-05-15 02:18:08.382394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.567 [2024-05-15 02:18:08.477661] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
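With the target listening, the loop that follows builds the base-bdev stack for the superblock test: one 32 MB malloc bdev with 512-byte blocks per slot, each wrapped in a passthru bdev carrying a fixed UUID, and finally a raid1 volume created on top of the three passthru bdevs with -s, which shows up as "superblock": true in the bdev_raid_get_bdevs dumps further down. Condensed into a plain script with the same names, sizes, and UUIDs as this run, the setup is roughly:

    #!/usr/bin/env bash
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    for i in 1 2 3; do
        # 32 MB backing store with 512-byte blocks, then a passthru layer on top of it.
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # raid1 takes no strip size; -s asks for an on-disk superblock on the base bdevs.
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The passthru layer is what gives the trace its stable pt1/pt2/pt3 UUIDs, so later superblock checks can match base bdevs independently of the malloc bdevs underneath.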
00:17:20.567 [2024-05-15 02:18:08.479997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.567 [2024-05-15 02:18:08.480778] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.567 [2024-05-15 02:18:08.480793] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:21.132 02:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:21.390 malloc1 00:17:21.390 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:21.649 [2024-05-15 02:18:09.493547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:21.649 [2024-05-15 02:18:09.493621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.649 [2024-05-15 02:18:09.494222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7c780 00:17:21.649 [2024-05-15 02:18:09.494251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.649 [2024-05-15 02:18:09.495094] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.649 [2024-05-15 02:18:09.495132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:21.649 pt1 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:21.649 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:21.649 02:18:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:21.908 malloc2 00:17:21.908 02:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:22.474 [2024-05-15 02:18:10.197566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:22.474 [2024-05-15 02:18:10.197641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.474 [2024-05-15 02:18:10.197672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7cc80 00:17:22.474 [2024-05-15 02:18:10.197682] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.474 [2024-05-15 02:18:10.198256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.474 [2024-05-15 02:18:10.198288] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:22.474 pt2 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:22.474 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:22.732 malloc3 00:17:22.732 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:22.990 [2024-05-15 02:18:10.793571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:22.990 [2024-05-15 02:18:10.793650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.990 [2024-05-15 02:18:10.793683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7d180 00:17:22.990 [2024-05-15 02:18:10.793700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.990 [2024-05-15 02:18:10.794255] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.990 [2024-05-15 02:18:10.794286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:22.990 pt3 00:17:22.990 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:22.990 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:22.990 02:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:23.248 [2024-05-15 02:18:11.097589] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:23.248 [2024-05-15 02:18:11.098121] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.248 [2024-05-15 02:18:11.098139] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:23.248 [2024-05-15 02:18:11.098197] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa7d400 00:17:23.248 [2024-05-15 02:18:11.098202] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:23.248 [2024-05-15 02:18:11.098238] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aadfe20 00:17:23.248 [2024-05-15 02:18:11.098302] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa7d400 00:17:23.248 [2024-05-15 02:18:11.098306] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa7d400 00:17:23.248 [2024-05-15 02:18:11.098330] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.248 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.507 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.507 "name": "raid_bdev1", 00:17:23.507 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:23.507 "strip_size_kb": 0, 00:17:23.507 "state": "online", 00:17:23.507 "raid_level": "raid1", 00:17:23.507 "superblock": true, 00:17:23.507 "num_base_bdevs": 3, 00:17:23.507 "num_base_bdevs_discovered": 3, 00:17:23.507 "num_base_bdevs_operational": 3, 00:17:23.507 "base_bdevs_list": [ 00:17:23.507 { 00:17:23.507 "name": "pt1", 00:17:23.507 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:23.507 "is_configured": true, 00:17:23.507 "data_offset": 2048, 00:17:23.507 "data_size": 63488 00:17:23.507 }, 00:17:23.507 { 00:17:23.507 "name": "pt2", 00:17:23.507 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:23.507 "is_configured": true, 00:17:23.507 "data_offset": 2048, 
00:17:23.507 "data_size": 63488 00:17:23.507 }, 00:17:23.507 { 00:17:23.507 "name": "pt3", 00:17:23.507 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:23.507 "is_configured": true, 00:17:23.507 "data_offset": 2048, 00:17:23.507 "data_size": 63488 00:17:23.507 } 00:17:23.507 ] 00:17:23.507 }' 00:17:23.507 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.507 02:18:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:23.767 02:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:24.025 [2024-05-15 02:18:11.977630] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.025 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:24.025 "name": "raid_bdev1", 00:17:24.025 "aliases": [ 00:17:24.025 "6071d6fa-1261-11ef-99fd-bfc7c66e2865" 00:17:24.025 ], 00:17:24.025 "product_name": "Raid Volume", 00:17:24.025 "block_size": 512, 00:17:24.025 "num_blocks": 63488, 00:17:24.025 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:24.025 "assigned_rate_limits": { 00:17:24.025 "rw_ios_per_sec": 0, 00:17:24.025 "rw_mbytes_per_sec": 0, 00:17:24.025 "r_mbytes_per_sec": 0, 00:17:24.025 "w_mbytes_per_sec": 0 00:17:24.025 }, 00:17:24.025 "claimed": false, 00:17:24.025 "zoned": false, 00:17:24.025 "supported_io_types": { 00:17:24.025 "read": true, 00:17:24.025 "write": true, 00:17:24.025 "unmap": false, 00:17:24.025 "write_zeroes": true, 00:17:24.025 "flush": false, 00:17:24.025 "reset": true, 00:17:24.025 "compare": false, 00:17:24.025 "compare_and_write": false, 00:17:24.025 "abort": false, 00:17:24.025 "nvme_admin": false, 00:17:24.025 "nvme_io": false 00:17:24.025 }, 00:17:24.025 "memory_domains": [ 00:17:24.025 { 00:17:24.025 "dma_device_id": "system", 00:17:24.025 "dma_device_type": 1 00:17:24.025 }, 00:17:24.025 { 00:17:24.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.025 "dma_device_type": 2 00:17:24.025 }, 00:17:24.025 { 00:17:24.025 "dma_device_id": "system", 00:17:24.025 "dma_device_type": 1 00:17:24.025 }, 00:17:24.025 { 00:17:24.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.025 "dma_device_type": 2 00:17:24.025 }, 00:17:24.025 { 00:17:24.025 "dma_device_id": "system", 00:17:24.025 "dma_device_type": 1 00:17:24.025 }, 00:17:24.025 { 00:17:24.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.025 "dma_device_type": 2 00:17:24.025 } 00:17:24.025 ], 00:17:24.025 "driver_specific": { 00:17:24.025 "raid": { 00:17:24.025 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:24.025 "strip_size_kb": 0, 00:17:24.025 "state": "online", 00:17:24.025 "raid_level": "raid1", 00:17:24.025 
"superblock": true, 00:17:24.025 "num_base_bdevs": 3, 00:17:24.025 "num_base_bdevs_discovered": 3, 00:17:24.025 "num_base_bdevs_operational": 3, 00:17:24.025 "base_bdevs_list": [ 00:17:24.025 { 00:17:24.025 "name": "pt1", 00:17:24.025 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:24.025 "is_configured": true, 00:17:24.025 "data_offset": 2048, 00:17:24.025 "data_size": 63488 00:17:24.025 }, 00:17:24.025 { 00:17:24.025 "name": "pt2", 00:17:24.025 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:24.025 "is_configured": true, 00:17:24.025 "data_offset": 2048, 00:17:24.025 "data_size": 63488 00:17:24.025 }, 00:17:24.025 { 00:17:24.025 "name": "pt3", 00:17:24.025 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:24.025 "is_configured": true, 00:17:24.025 "data_offset": 2048, 00:17:24.025 "data_size": 63488 00:17:24.025 } 00:17:24.025 ] 00:17:24.025 } 00:17:24.025 } 00:17:24.025 }' 00:17:24.025 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:24.025 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:24.025 pt2 00:17:24.025 pt3' 00:17:24.025 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:24.025 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:24.025 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:24.593 "name": "pt1", 00:17:24.593 "aliases": [ 00:17:24.593 "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026" 00:17:24.593 ], 00:17:24.593 "product_name": "passthru", 00:17:24.593 "block_size": 512, 00:17:24.593 "num_blocks": 65536, 00:17:24.593 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:24.593 "assigned_rate_limits": { 00:17:24.593 "rw_ios_per_sec": 0, 00:17:24.593 "rw_mbytes_per_sec": 0, 00:17:24.593 "r_mbytes_per_sec": 0, 00:17:24.593 "w_mbytes_per_sec": 0 00:17:24.593 }, 00:17:24.593 "claimed": true, 00:17:24.593 "claim_type": "exclusive_write", 00:17:24.593 "zoned": false, 00:17:24.593 "supported_io_types": { 00:17:24.593 "read": true, 00:17:24.593 "write": true, 00:17:24.593 "unmap": true, 00:17:24.593 "write_zeroes": true, 00:17:24.593 "flush": true, 00:17:24.593 "reset": true, 00:17:24.593 "compare": false, 00:17:24.593 "compare_and_write": false, 00:17:24.593 "abort": true, 00:17:24.593 "nvme_admin": false, 00:17:24.593 "nvme_io": false 00:17:24.593 }, 00:17:24.593 "memory_domains": [ 00:17:24.593 { 00:17:24.593 "dma_device_id": "system", 00:17:24.593 "dma_device_type": 1 00:17:24.593 }, 00:17:24.593 { 00:17:24.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.593 "dma_device_type": 2 00:17:24.593 } 00:17:24.593 ], 00:17:24.593 "driver_specific": { 00:17:24.593 "passthru": { 00:17:24.593 "name": "pt1", 00:17:24.593 "base_bdev_name": "malloc1" 00:17:24.593 } 00:17:24.593 } 00:17:24.593 }' 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:24.593 02:18:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:24.593 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:24.851 "name": "pt2", 00:17:24.851 "aliases": [ 00:17:24.851 "35ce540c-dfbb-2259-9e5a-9315ec16934b" 00:17:24.851 ], 00:17:24.851 "product_name": "passthru", 00:17:24.851 "block_size": 512, 00:17:24.851 "num_blocks": 65536, 00:17:24.851 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:24.851 "assigned_rate_limits": { 00:17:24.851 "rw_ios_per_sec": 0, 00:17:24.851 "rw_mbytes_per_sec": 0, 00:17:24.851 "r_mbytes_per_sec": 0, 00:17:24.851 "w_mbytes_per_sec": 0 00:17:24.851 }, 00:17:24.851 "claimed": true, 00:17:24.851 "claim_type": "exclusive_write", 00:17:24.851 "zoned": false, 00:17:24.851 "supported_io_types": { 00:17:24.851 "read": true, 00:17:24.851 "write": true, 00:17:24.851 "unmap": true, 00:17:24.851 "write_zeroes": true, 00:17:24.851 "flush": true, 00:17:24.851 "reset": true, 00:17:24.851 "compare": false, 00:17:24.851 "compare_and_write": false, 00:17:24.851 "abort": true, 00:17:24.851 "nvme_admin": false, 00:17:24.851 "nvme_io": false 00:17:24.851 }, 00:17:24.851 "memory_domains": [ 00:17:24.851 { 00:17:24.851 "dma_device_id": "system", 00:17:24.851 "dma_device_type": 1 00:17:24.851 }, 00:17:24.851 { 00:17:24.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.851 "dma_device_type": 2 00:17:24.851 } 00:17:24.851 ], 00:17:24.851 "driver_specific": { 00:17:24.851 "passthru": { 00:17:24.851 "name": "pt2", 00:17:24.851 "base_bdev_name": "malloc2" 00:17:24.851 } 00:17:24.851 } 00:17:24.851 }' 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:24.851 02:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:25.110 "name": "pt3", 00:17:25.110 "aliases": [ 00:17:25.110 "6735eb69-932b-1852-8e88-e4f7062e7d73" 00:17:25.110 ], 00:17:25.110 "product_name": "passthru", 00:17:25.110 "block_size": 512, 00:17:25.110 "num_blocks": 65536, 00:17:25.110 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:25.110 "assigned_rate_limits": { 00:17:25.110 "rw_ios_per_sec": 0, 00:17:25.110 "rw_mbytes_per_sec": 0, 00:17:25.110 "r_mbytes_per_sec": 0, 00:17:25.110 "w_mbytes_per_sec": 0 00:17:25.110 }, 00:17:25.110 "claimed": true, 00:17:25.110 "claim_type": "exclusive_write", 00:17:25.110 "zoned": false, 00:17:25.110 "supported_io_types": { 00:17:25.110 "read": true, 00:17:25.110 "write": true, 00:17:25.110 "unmap": true, 00:17:25.110 "write_zeroes": true, 00:17:25.110 "flush": true, 00:17:25.110 "reset": true, 00:17:25.110 "compare": false, 00:17:25.110 "compare_and_write": false, 00:17:25.110 "abort": true, 00:17:25.110 "nvme_admin": false, 00:17:25.110 "nvme_io": false 00:17:25.110 }, 00:17:25.110 "memory_domains": [ 00:17:25.110 { 00:17:25.110 "dma_device_id": "system", 00:17:25.110 "dma_device_type": 1 00:17:25.110 }, 00:17:25.110 { 00:17:25.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.110 "dma_device_type": 2 00:17:25.110 } 00:17:25.110 ], 00:17:25.110 "driver_specific": { 00:17:25.110 "passthru": { 00:17:25.110 "name": "pt3", 00:17:25.110 "base_bdev_name": "malloc3" 00:17:25.110 } 00:17:25.110 } 00:17:25.110 }' 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
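The trace above drives a standalone SPDK target over the dedicated RPC socket /var/tmp/spdk-raid.sock: the passthru bdevs pt1, pt2 and pt3 are claimed into a raid1 volume created with an on-disk superblock (-s), and the resulting state is checked through bdev_raid_get_bdevs and bdev_get_bdevs piped into jq. A minimal sketch of that RPC sequence, runnable by hand under the assumption that the target is already listening on the socket and that pt1-pt3 already exist (the rpc shell variable is shorthand introduced here, not part of the test script):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # create a raid1 volume over the three passthru bdevs, writing a superblock (-s)
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    # dump raid state and keep only the entry for raid_bdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # inspect the raid volume and one base bdev in detail
    $rpc bdev_get_bdevs -b raid_bdev1 | jq '.[]'
    $rpc bdev_get_bdevs -b pt1 | jq '.[]'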
00:17:25.110 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:25.368 [2024-05-15 02:18:13.305639] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.368 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6071d6fa-1261-11ef-99fd-bfc7c66e2865 00:17:25.368 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6071d6fa-1261-11ef-99fd-bfc7c66e2865 ']' 00:17:25.368 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:25.639 [2024-05-15 02:18:13.653604] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.639 [2024-05-15 02:18:13.653638] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.639 [2024-05-15 02:18:13.653664] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.639 [2024-05-15 02:18:13.653683] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.639 [2024-05-15 02:18:13.653688] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa7d400 name raid_bdev1, state offline 00:17:25.898 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:25.898 02:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.158 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:26.158 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:26.158 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.158 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:26.416 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.416 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:26.675 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.675 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:26.933 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:26.933 02:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:27.191 
02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:27.191 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.449 [2024-05-15 02:18:15.329648] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:27.449 [2024-05-15 02:18:15.330134] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:27.449 [2024-05-15 02:18:15.330149] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:27.449 [2024-05-15 02:18:15.330164] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:27.449 [2024-05-15 02:18:15.330211] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:27.449 [2024-05-15 02:18:15.330227] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:27.449 [2024-05-15 02:18:15.330243] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.449 [2024-05-15 02:18:15.330254] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa7d180 name raid_bdev1, state configuring 00:17:27.449 request: 00:17:27.449 { 00:17:27.449 "name": "raid_bdev1", 00:17:27.449 "raid_level": "raid1", 00:17:27.449 "base_bdevs": [ 00:17:27.449 "malloc1", 00:17:27.449 "malloc2", 00:17:27.449 "malloc3" 00:17:27.449 ], 00:17:27.449 "superblock": false, 00:17:27.449 "method": "bdev_raid_create", 00:17:27.449 "req_id": 1 00:17:27.449 } 00:17:27.449 Got JSON-RPC error response 00:17:27.449 response: 00:17:27.449 { 00:17:27.449 "code": -17, 00:17:27.449 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:27.449 } 00:17:27.449 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:27.449 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.449 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
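The failed bdev_raid_create above is the intended negative check: malloc1-malloc3 still carry the superblock of the raid_bdev1 that was just deleted, so re-creating the array directly from them is rejected with JSON-RPC error -17 ("Failed to create RAID bdev raid_bdev1: File exists"), and the surrounding NOT helper from autotest_common.sh only lets the test proceed because the call returned non-zero. A hedged sketch of the same check written without the helper, reusing the rpc shorthand and socket assumptions from the earlier sketch:

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # with the stale superblock still on the malloc bdevs, this create must fail
    if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo "unexpected success: stale superblock should have been rejected" >&2
        exit 1
    fi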
00:17:27.449 02:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.449 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.449 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:27.707 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:27.707 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:27.707 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.965 [2024-05-15 02:18:15.917654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.965 [2024-05-15 02:18:15.917729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.965 [2024-05-15 02:18:15.917764] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7cc80 00:17:27.965 [2024-05-15 02:18:15.917774] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.965 [2024-05-15 02:18:15.918326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.965 [2024-05-15 02:18:15.918364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.965 [2024-05-15 02:18:15.918393] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:27.965 [2024-05-15 02:18:15.918405] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.965 pt1 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.965 02:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.223 02:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.223 "name": "raid_bdev1", 00:17:28.223 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:28.223 "strip_size_kb": 0, 00:17:28.223 "state": "configuring", 00:17:28.223 "raid_level": "raid1", 00:17:28.223 "superblock": true, 
00:17:28.223 "num_base_bdevs": 3, 00:17:28.223 "num_base_bdevs_discovered": 1, 00:17:28.223 "num_base_bdevs_operational": 3, 00:17:28.223 "base_bdevs_list": [ 00:17:28.223 { 00:17:28.223 "name": "pt1", 00:17:28.223 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:28.223 "is_configured": true, 00:17:28.223 "data_offset": 2048, 00:17:28.223 "data_size": 63488 00:17:28.223 }, 00:17:28.223 { 00:17:28.223 "name": null, 00:17:28.223 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:28.223 "is_configured": false, 00:17:28.223 "data_offset": 2048, 00:17:28.223 "data_size": 63488 00:17:28.223 }, 00:17:28.223 { 00:17:28.223 "name": null, 00:17:28.223 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:28.223 "is_configured": false, 00:17:28.223 "data_offset": 2048, 00:17:28.223 "data_size": 63488 00:17:28.223 } 00:17:28.223 ] 00:17:28.223 }' 00:17:28.223 02:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.223 02:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.482 02:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:28.482 02:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.050 [2024-05-15 02:18:16.773665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.050 [2024-05-15 02:18:16.773744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.050 [2024-05-15 02:18:16.773783] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7d680 00:17:29.050 [2024-05-15 02:18:16.773793] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.050 [2024-05-15 02:18:16.773906] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.050 [2024-05-15 02:18:16.773917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.050 [2024-05-15 02:18:16.773942] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:29.050 [2024-05-15 02:18:16.773951] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.050 pt2 00:17:29.050 02:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:29.050 [2024-05-15 02:18:17.001664] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.050 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.309 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.309 "name": "raid_bdev1", 00:17:29.309 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:29.309 "strip_size_kb": 0, 00:17:29.309 "state": "configuring", 00:17:29.309 "raid_level": "raid1", 00:17:29.309 "superblock": true, 00:17:29.309 "num_base_bdevs": 3, 00:17:29.309 "num_base_bdevs_discovered": 1, 00:17:29.310 "num_base_bdevs_operational": 3, 00:17:29.310 "base_bdevs_list": [ 00:17:29.310 { 00:17:29.310 "name": "pt1", 00:17:29.310 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:29.310 "is_configured": true, 00:17:29.310 "data_offset": 2048, 00:17:29.310 "data_size": 63488 00:17:29.310 }, 00:17:29.310 { 00:17:29.310 "name": null, 00:17:29.310 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:29.310 "is_configured": false, 00:17:29.310 "data_offset": 2048, 00:17:29.310 "data_size": 63488 00:17:29.310 }, 00:17:29.310 { 00:17:29.310 "name": null, 00:17:29.310 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:29.310 "is_configured": false, 00:17:29.310 "data_offset": 2048, 00:17:29.310 "data_size": 63488 00:17:29.310 } 00:17:29.310 ] 00:17:29.310 }' 00:17:29.310 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.310 02:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.876 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:29.876 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:29.876 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.135 [2024-05-15 02:18:17.929677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.135 [2024-05-15 02:18:17.929761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.135 [2024-05-15 02:18:17.929812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7d680 00:17:30.135 [2024-05-15 02:18:17.929822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.135 [2024-05-15 02:18:17.929932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.135 [2024-05-15 02:18:17.929942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.135 [2024-05-15 02:18:17.929966] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:30.135 [2024-05-15 02:18:17.929975] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.135 pt2 00:17:30.135 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:30.135 02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.135 
02:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:30.394 [2024-05-15 02:18:18.229705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:30.394 [2024-05-15 02:18:18.229784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.394 [2024-05-15 02:18:18.229817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7d400 00:17:30.394 [2024-05-15 02:18:18.229827] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.394 [2024-05-15 02:18:18.229948] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.394 [2024-05-15 02:18:18.229959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:30.394 [2024-05-15 02:18:18.229983] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:30.394 [2024-05-15 02:18:18.229992] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:30.394 [2024-05-15 02:18:18.230022] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa7c780 00:17:30.394 [2024-05-15 02:18:18.230033] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:30.394 [2024-05-15 02:18:18.230064] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aadfe20 00:17:30.394 [2024-05-15 02:18:18.230115] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa7c780 00:17:30.394 [2024-05-15 02:18:18.230120] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa7c780 00:17:30.394 [2024-05-15 02:18:18.230139] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.394 pt3 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.394 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:30.652 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.652 "name": "raid_bdev1", 00:17:30.652 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:30.652 "strip_size_kb": 0, 00:17:30.652 "state": "online", 00:17:30.652 "raid_level": "raid1", 00:17:30.652 "superblock": true, 00:17:30.652 "num_base_bdevs": 3, 00:17:30.652 "num_base_bdevs_discovered": 3, 00:17:30.652 "num_base_bdevs_operational": 3, 00:17:30.652 "base_bdevs_list": [ 00:17:30.652 { 00:17:30.652 "name": "pt1", 00:17:30.652 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:30.652 "is_configured": true, 00:17:30.652 "data_offset": 2048, 00:17:30.652 "data_size": 63488 00:17:30.652 }, 00:17:30.652 { 00:17:30.652 "name": "pt2", 00:17:30.652 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:30.652 "is_configured": true, 00:17:30.652 "data_offset": 2048, 00:17:30.653 "data_size": 63488 00:17:30.653 }, 00:17:30.653 { 00:17:30.653 "name": "pt3", 00:17:30.653 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:30.653 "is_configured": true, 00:17:30.653 "data_offset": 2048, 00:17:30.653 "data_size": 63488 00:17:30.653 } 00:17:30.653 ] 00:17:30.653 }' 00:17:30.653 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.653 02:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.910 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:30.910 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:30.910 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:30.910 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:30.910 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:30.911 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:17:30.911 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:30.911 02:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:31.478 [2024-05-15 02:18:19.217770] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:31.478 "name": "raid_bdev1", 00:17:31.478 "aliases": [ 00:17:31.478 "6071d6fa-1261-11ef-99fd-bfc7c66e2865" 00:17:31.478 ], 00:17:31.478 "product_name": "Raid Volume", 00:17:31.478 "block_size": 512, 00:17:31.478 "num_blocks": 63488, 00:17:31.478 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:31.478 "assigned_rate_limits": { 00:17:31.478 "rw_ios_per_sec": 0, 00:17:31.478 "rw_mbytes_per_sec": 0, 00:17:31.478 "r_mbytes_per_sec": 0, 00:17:31.478 "w_mbytes_per_sec": 0 00:17:31.478 }, 00:17:31.478 "claimed": false, 00:17:31.478 "zoned": false, 00:17:31.478 "supported_io_types": { 00:17:31.478 "read": true, 00:17:31.478 "write": true, 00:17:31.478 "unmap": false, 00:17:31.478 "write_zeroes": true, 00:17:31.478 "flush": false, 00:17:31.478 "reset": true, 00:17:31.478 "compare": false, 00:17:31.478 "compare_and_write": false, 00:17:31.478 "abort": false, 00:17:31.478 "nvme_admin": false, 00:17:31.478 "nvme_io": false 00:17:31.478 }, 00:17:31.478 "memory_domains": [ 00:17:31.478 
{ 00:17:31.478 "dma_device_id": "system", 00:17:31.478 "dma_device_type": 1 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.478 "dma_device_type": 2 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "dma_device_id": "system", 00:17:31.478 "dma_device_type": 1 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.478 "dma_device_type": 2 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "dma_device_id": "system", 00:17:31.478 "dma_device_type": 1 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.478 "dma_device_type": 2 00:17:31.478 } 00:17:31.478 ], 00:17:31.478 "driver_specific": { 00:17:31.478 "raid": { 00:17:31.478 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:31.478 "strip_size_kb": 0, 00:17:31.478 "state": "online", 00:17:31.478 "raid_level": "raid1", 00:17:31.478 "superblock": true, 00:17:31.478 "num_base_bdevs": 3, 00:17:31.478 "num_base_bdevs_discovered": 3, 00:17:31.478 "num_base_bdevs_operational": 3, 00:17:31.478 "base_bdevs_list": [ 00:17:31.478 { 00:17:31.478 "name": "pt1", 00:17:31.478 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:31.478 "is_configured": true, 00:17:31.478 "data_offset": 2048, 00:17:31.478 "data_size": 63488 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "name": "pt2", 00:17:31.478 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:31.478 "is_configured": true, 00:17:31.478 "data_offset": 2048, 00:17:31.478 "data_size": 63488 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "name": "pt3", 00:17:31.478 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:31.478 "is_configured": true, 00:17:31.478 "data_offset": 2048, 00:17:31.478 "data_size": 63488 00:17:31.478 } 00:17:31.478 ] 00:17:31.478 } 00:17:31.478 } 00:17:31.478 }' 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:31.478 pt2 00:17:31.478 pt3' 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:31.478 "name": "pt1", 00:17:31.478 "aliases": [ 00:17:31.478 "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026" 00:17:31.478 ], 00:17:31.478 "product_name": "passthru", 00:17:31.478 "block_size": 512, 00:17:31.478 "num_blocks": 65536, 00:17:31.478 "uuid": "bdf6cd3c-9ff7-cd52-92ba-a134b3b9a026", 00:17:31.478 "assigned_rate_limits": { 00:17:31.478 "rw_ios_per_sec": 0, 00:17:31.478 "rw_mbytes_per_sec": 0, 00:17:31.478 "r_mbytes_per_sec": 0, 00:17:31.478 "w_mbytes_per_sec": 0 00:17:31.478 }, 00:17:31.478 "claimed": true, 00:17:31.478 "claim_type": "exclusive_write", 00:17:31.478 "zoned": false, 00:17:31.478 "supported_io_types": { 00:17:31.478 "read": true, 00:17:31.478 "write": true, 00:17:31.478 "unmap": true, 00:17:31.478 "write_zeroes": true, 00:17:31.478 "flush": true, 00:17:31.478 "reset": true, 00:17:31.478 "compare": false, 00:17:31.478 "compare_and_write": false, 00:17:31.478 "abort": true, 00:17:31.478 "nvme_admin": false, 
00:17:31.478 "nvme_io": false 00:17:31.478 }, 00:17:31.478 "memory_domains": [ 00:17:31.478 { 00:17:31.478 "dma_device_id": "system", 00:17:31.478 "dma_device_type": 1 00:17:31.478 }, 00:17:31.478 { 00:17:31.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.478 "dma_device_type": 2 00:17:31.478 } 00:17:31.478 ], 00:17:31.478 "driver_specific": { 00:17:31.478 "passthru": { 00:17:31.478 "name": "pt1", 00:17:31.478 "base_bdev_name": "malloc1" 00:17:31.478 } 00:17:31.478 } 00:17:31.478 }' 00:17:31.478 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:31.737 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:31.996 "name": "pt2", 00:17:31.996 "aliases": [ 00:17:31.996 "35ce540c-dfbb-2259-9e5a-9315ec16934b" 00:17:31.996 ], 00:17:31.996 "product_name": "passthru", 00:17:31.996 "block_size": 512, 00:17:31.996 "num_blocks": 65536, 00:17:31.996 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:31.996 "assigned_rate_limits": { 00:17:31.996 "rw_ios_per_sec": 0, 00:17:31.996 "rw_mbytes_per_sec": 0, 00:17:31.996 "r_mbytes_per_sec": 0, 00:17:31.996 "w_mbytes_per_sec": 0 00:17:31.996 }, 00:17:31.996 "claimed": true, 00:17:31.996 "claim_type": "exclusive_write", 00:17:31.996 "zoned": false, 00:17:31.996 "supported_io_types": { 00:17:31.996 "read": true, 00:17:31.996 "write": true, 00:17:31.996 "unmap": true, 00:17:31.996 "write_zeroes": true, 00:17:31.996 "flush": true, 00:17:31.996 "reset": true, 00:17:31.996 "compare": false, 00:17:31.996 "compare_and_write": false, 00:17:31.996 "abort": true, 00:17:31.996 "nvme_admin": false, 00:17:31.996 "nvme_io": false 00:17:31.996 }, 00:17:31.996 "memory_domains": [ 00:17:31.996 { 00:17:31.996 "dma_device_id": "system", 00:17:31.996 "dma_device_type": 1 00:17:31.996 }, 00:17:31.996 { 00:17:31.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.996 "dma_device_type": 2 00:17:31.996 } 00:17:31.996 ], 00:17:31.996 "driver_specific": { 00:17:31.996 "passthru": { 00:17:31.996 "name": "pt2", 00:17:31.996 "base_bdev_name": "malloc2" 
00:17:31.996 } 00:17:31.996 } 00:17:31.996 }' 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:31.996 02:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:32.254 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:32.254 "name": "pt3", 00:17:32.254 "aliases": [ 00:17:32.255 "6735eb69-932b-1852-8e88-e4f7062e7d73" 00:17:32.255 ], 00:17:32.255 "product_name": "passthru", 00:17:32.255 "block_size": 512, 00:17:32.255 "num_blocks": 65536, 00:17:32.255 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:32.255 "assigned_rate_limits": { 00:17:32.255 "rw_ios_per_sec": 0, 00:17:32.255 "rw_mbytes_per_sec": 0, 00:17:32.255 "r_mbytes_per_sec": 0, 00:17:32.255 "w_mbytes_per_sec": 0 00:17:32.255 }, 00:17:32.255 "claimed": true, 00:17:32.255 "claim_type": "exclusive_write", 00:17:32.255 "zoned": false, 00:17:32.255 "supported_io_types": { 00:17:32.255 "read": true, 00:17:32.255 "write": true, 00:17:32.255 "unmap": true, 00:17:32.255 "write_zeroes": true, 00:17:32.255 "flush": true, 00:17:32.255 "reset": true, 00:17:32.255 "compare": false, 00:17:32.255 "compare_and_write": false, 00:17:32.255 "abort": true, 00:17:32.255 "nvme_admin": false, 00:17:32.255 "nvme_io": false 00:17:32.255 }, 00:17:32.255 "memory_domains": [ 00:17:32.255 { 00:17:32.255 "dma_device_id": "system", 00:17:32.255 "dma_device_type": 1 00:17:32.255 }, 00:17:32.255 { 00:17:32.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.255 "dma_device_type": 2 00:17:32.255 } 00:17:32.255 ], 00:17:32.255 "driver_specific": { 00:17:32.255 "passthru": { 00:17:32.255 "name": "pt3", 00:17:32.255 "base_bdev_name": "malloc3" 00:17:32.255 } 00:17:32.255 } 00:17:32.255 }' 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:32.255 
02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:32.255 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:32.514 [2024-05-15 02:18:20.525791] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6071d6fa-1261-11ef-99fd-bfc7c66e2865 '!=' 6071d6fa-1261-11ef-99fd-bfc7c66e2865 ']' 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:32.802 [2024-05-15 02:18:20.793791] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.802 02:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.368 02:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.368 "name": "raid_bdev1", 00:17:33.368 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 
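At this point verify_raid_bdev_properties has re-checked every configured base bdev (bdev_raid.sh@204-@209): each must report a 512-byte block size and null md_size, md_interleave and dif_type, i.e. no metadata or DIF layout on the passthru bdevs, before @487 reads the volume uuid back to confirm the reassembled raid is the same 6071d6fa-... array. A condensed sketch of that per-base-bdev loop, with the same rpc.py path and socket as above and explicit exit-on-mismatch added for the sketch (the real script relies on its own xtrace/assert conventions):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    names=$($rpc bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $names; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # every base bdev must expose 512-byte blocks and no metadata/DIF configuration
        [[ $(jq .block_size <<< "$info") == 512 ]] || exit 1
        [[ $(jq .md_size <<< "$info") == null ]] || exit 1
        [[ $(jq .md_interleave <<< "$info") == null ]] || exit 1
        [[ $(jq .dif_type <<< "$info") == null ]] || exit 1
    done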
00:17:33.368 "strip_size_kb": 0, 00:17:33.368 "state": "online", 00:17:33.368 "raid_level": "raid1", 00:17:33.368 "superblock": true, 00:17:33.369 "num_base_bdevs": 3, 00:17:33.369 "num_base_bdevs_discovered": 2, 00:17:33.369 "num_base_bdevs_operational": 2, 00:17:33.369 "base_bdevs_list": [ 00:17:33.369 { 00:17:33.369 "name": null, 00:17:33.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.369 "is_configured": false, 00:17:33.369 "data_offset": 2048, 00:17:33.369 "data_size": 63488 00:17:33.369 }, 00:17:33.369 { 00:17:33.369 "name": "pt2", 00:17:33.369 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:33.369 "is_configured": true, 00:17:33.369 "data_offset": 2048, 00:17:33.369 "data_size": 63488 00:17:33.369 }, 00:17:33.369 { 00:17:33.369 "name": "pt3", 00:17:33.369 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:33.369 "is_configured": true, 00:17:33.369 "data_offset": 2048, 00:17:33.369 "data_size": 63488 00:17:33.369 } 00:17:33.369 ] 00:17:33.369 }' 00:17:33.369 02:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.369 02:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.627 02:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.886 [2024-05-15 02:18:21.737771] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.886 [2024-05-15 02:18:21.737808] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.886 [2024-05-15 02:18:21.737832] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.886 [2024-05-15 02:18:21.737847] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.886 [2024-05-15 02:18:21.737852] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa7c780 name raid_bdev1, state offline 00:17:33.886 02:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.886 02:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:34.144 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:34.144 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:34.144 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:34.144 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:34.144 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:34.402 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:34.402 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:34.402 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:34.661 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:34.661 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:34.661 02:18:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:34.661 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:34.661 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.920 [2024-05-15 02:18:22.777796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.920 [2024-05-15 02:18:22.777873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.920 [2024-05-15 02:18:22.777905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7d400 00:17:34.920 [2024-05-15 02:18:22.777931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.920 [2024-05-15 02:18:22.778531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.920 [2024-05-15 02:18:22.778575] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.920 [2024-05-15 02:18:22.778611] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.920 [2024-05-15 02:18:22.778628] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.920 pt2 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.920 02:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.179 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.179 "name": "raid_bdev1", 00:17:35.179 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:35.179 "strip_size_kb": 0, 00:17:35.179 "state": "configuring", 00:17:35.179 "raid_level": "raid1", 00:17:35.179 "superblock": true, 00:17:35.179 "num_base_bdevs": 3, 00:17:35.179 "num_base_bdevs_discovered": 1, 00:17:35.179 "num_base_bdevs_operational": 2, 00:17:35.179 "base_bdevs_list": [ 00:17:35.179 { 00:17:35.179 "name": null, 00:17:35.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.179 "is_configured": false, 00:17:35.179 "data_offset": 2048, 00:17:35.179 "data_size": 63488 00:17:35.179 }, 00:17:35.179 { 00:17:35.179 "name": "pt2", 
00:17:35.179 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:35.179 "is_configured": true, 00:17:35.179 "data_offset": 2048, 00:17:35.179 "data_size": 63488 00:17:35.179 }, 00:17:35.179 { 00:17:35.179 "name": null, 00:17:35.179 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:35.179 "is_configured": false, 00:17:35.179 "data_offset": 2048, 00:17:35.179 "data_size": 63488 00:17:35.179 } 00:17:35.179 ] 00:17:35.179 }' 00:17:35.179 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.179 02:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.437 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:35.437 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:35.437 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:17:35.437 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:35.695 [2024-05-15 02:18:23.649815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:35.695 [2024-05-15 02:18:23.649887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.695 [2024-05-15 02:18:23.649918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7c780 00:17:35.695 [2024-05-15 02:18:23.649927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.695 [2024-05-15 02:18:23.650031] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.695 [2024-05-15 02:18:23.650046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:35.695 [2024-05-15 02:18:23.650069] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:35.695 [2024-05-15 02:18:23.650077] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:35.695 [2024-05-15 02:18:23.650103] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa7d180 00:17:35.695 [2024-05-15 02:18:23.650106] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:35.695 [2024-05-15 02:18:23.650125] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aadfe20 00:17:35.695 [2024-05-15 02:18:23.650163] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa7d180 00:17:35.695 [2024-05-15 02:18:23.650167] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa7d180 00:17:35.695 [2024-05-15 02:18:23.650185] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.695 pt3 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.695 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.953 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.953 "name": "raid_bdev1", 00:17:35.953 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:35.953 "strip_size_kb": 0, 00:17:35.953 "state": "online", 00:17:35.953 "raid_level": "raid1", 00:17:35.953 "superblock": true, 00:17:35.953 "num_base_bdevs": 3, 00:17:35.953 "num_base_bdevs_discovered": 2, 00:17:35.953 "num_base_bdevs_operational": 2, 00:17:35.953 "base_bdevs_list": [ 00:17:35.953 { 00:17:35.953 "name": null, 00:17:35.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.953 "is_configured": false, 00:17:35.953 "data_offset": 2048, 00:17:35.953 "data_size": 63488 00:17:35.953 }, 00:17:35.953 { 00:17:35.953 "name": "pt2", 00:17:35.953 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:35.953 "is_configured": true, 00:17:35.953 "data_offset": 2048, 00:17:35.953 "data_size": 63488 00:17:35.953 }, 00:17:35.953 { 00:17:35.953 "name": "pt3", 00:17:35.953 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:35.953 "is_configured": true, 00:17:35.953 "data_offset": 2048, 00:17:35.953 "data_size": 63488 00:17:35.953 } 00:17:35.953 ] 00:17:35.953 }' 00:17:35.953 02:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.953 02:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.212 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:36.471 [2024-05-15 02:18:24.465825] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.471 [2024-05-15 02:18:24.465857] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.471 [2024-05-15 02:18:24.465879] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.471 [2024-05-15 02:18:24.465894] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.471 [2024-05-15 02:18:24.465899] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa7d180 name raid_bdev1, state offline 00:17:36.729 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.729 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:36.988 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:36.988 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:36.988 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 
-gt 2 ']' 00:17:36.988 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:36.988 02:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:37.247 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:37.505 [2024-05-15 02:18:25.317854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:37.505 [2024-05-15 02:18:25.317918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.505 [2024-05-15 02:18:25.317947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7c780 00:17:37.505 [2024-05-15 02:18:25.317956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.506 [2024-05-15 02:18:25.318528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.506 [2024-05-15 02:18:25.318563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:37.506 [2024-05-15 02:18:25.318589] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:37.506 [2024-05-15 02:18:25.318599] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.506 [2024-05-15 02:18:25.318625] bdev_raid.c:3489:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:37.506 [2024-05-15 02:18:25.318629] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.506 [2024-05-15 02:18:25.318633] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa7d180 name raid_bdev1, state configuring 00:17:37.506 [2024-05-15 02:18:25.318641] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.506 pt1 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.506 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:17:37.764 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.764 "name": "raid_bdev1", 00:17:37.764 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:37.764 "strip_size_kb": 0, 00:17:37.764 "state": "configuring", 00:17:37.764 "raid_level": "raid1", 00:17:37.764 "superblock": true, 00:17:37.764 "num_base_bdevs": 3, 00:17:37.764 "num_base_bdevs_discovered": 1, 00:17:37.764 "num_base_bdevs_operational": 2, 00:17:37.764 "base_bdevs_list": [ 00:17:37.764 { 00:17:37.764 "name": null, 00:17:37.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.764 "is_configured": false, 00:17:37.764 "data_offset": 2048, 00:17:37.764 "data_size": 63488 00:17:37.764 }, 00:17:37.764 { 00:17:37.764 "name": "pt2", 00:17:37.764 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:37.764 "is_configured": true, 00:17:37.764 "data_offset": 2048, 00:17:37.764 "data_size": 63488 00:17:37.764 }, 00:17:37.764 { 00:17:37.764 "name": null, 00:17:37.764 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:37.764 "is_configured": false, 00:17:37.764 "data_offset": 2048, 00:17:37.764 "data_size": 63488 00:17:37.764 } 00:17:37.764 ] 00:17:37.764 }' 00:17:37.764 02:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.764 02:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.330 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:17:38.330 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:38.588 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:38.588 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:38.588 [2024-05-15 02:18:26.593894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:38.588 [2024-05-15 02:18:26.593962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.588 [2024-05-15 02:18:26.593991] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa7cc80 00:17:38.588 [2024-05-15 02:18:26.594000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.588 [2024-05-15 02:18:26.594101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.588 [2024-05-15 02:18:26.594110] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:38.588 [2024-05-15 02:18:26.594133] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:38.588 [2024-05-15 02:18:26.594141] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:38.588 [2024-05-15 02:18:26.594165] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa7d180 00:17:38.588 [2024-05-15 02:18:26.594169] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:38.588 [2024-05-15 02:18:26.594188] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aadfe20 00:17:38.588 [2024-05-15 02:18:26.594223] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa7d180 
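At this point in the trace, raid_superblock_test recreates pt3 on top of malloc3; the raid module's examine path finds the on-disk superblock and brings raid_bdev1 back online with two of its three base bdevs. A condensed sketch of the RPC sequence being exercised here, using only invocations that appear verbatim in this trace (socket path, passthru UUID and bdev names are specific to this run; the actual test wraps the state check in its verify_raid_bdev_state helper):

    # recreate the passthru bdev over malloc3; the raid superblock on it is found
    # during examine and the raid bdev is reassembled automatically
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
    # query the assembled raid bdev and confirm it is now online with 2 of 3 base bdevs
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
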
00:17:38.588 [2024-05-15 02:18:26.594227] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa7d180 00:17:38.588 [2024-05-15 02:18:26.594246] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.588 pt3 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.847 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.104 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.104 "name": "raid_bdev1", 00:17:39.104 "uuid": "6071d6fa-1261-11ef-99fd-bfc7c66e2865", 00:17:39.104 "strip_size_kb": 0, 00:17:39.104 "state": "online", 00:17:39.104 "raid_level": "raid1", 00:17:39.104 "superblock": true, 00:17:39.104 "num_base_bdevs": 3, 00:17:39.104 "num_base_bdevs_discovered": 2, 00:17:39.104 "num_base_bdevs_operational": 2, 00:17:39.104 "base_bdevs_list": [ 00:17:39.104 { 00:17:39.104 "name": null, 00:17:39.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.104 "is_configured": false, 00:17:39.104 "data_offset": 2048, 00:17:39.104 "data_size": 63488 00:17:39.104 }, 00:17:39.104 { 00:17:39.104 "name": "pt2", 00:17:39.104 "uuid": "35ce540c-dfbb-2259-9e5a-9315ec16934b", 00:17:39.104 "is_configured": true, 00:17:39.104 "data_offset": 2048, 00:17:39.104 "data_size": 63488 00:17:39.104 }, 00:17:39.104 { 00:17:39.104 "name": "pt3", 00:17:39.104 "uuid": "6735eb69-932b-1852-8e88-e4f7062e7d73", 00:17:39.104 "is_configured": true, 00:17:39.104 "data_offset": 2048, 00:17:39.104 "data_size": 63488 00:17:39.104 } 00:17:39.104 ] 00:17:39.104 }' 00:17:39.104 02:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.104 02:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.671 02:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:39.671 02:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:39.671 02:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:39.930 02:18:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.930 02:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:40.188 [2024-05-15 02:18:28.005967] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6071d6fa-1261-11ef-99fd-bfc7c66e2865 '!=' 6071d6fa-1261-11ef-99fd-bfc7c66e2865 ']' 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 56682 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 56682 ']' 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 56682 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 56682 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:17:40.188 killing process with pid 56682 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56682' 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 56682 00:17:40.188 [2024-05-15 02:18:28.036317] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.188 [2024-05-15 02:18:28.036360] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.188 [2024-05-15 02:18:28.036377] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.188 [2024-05-15 02:18:28.036382] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa7d180 name raid_bdev1, state offline 00:17:40.188 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 56682 00:17:40.188 [2024-05-15 02:18:28.051284] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.446 02:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:40.446 00:17:40.446 real 0m20.354s 00:17:40.446 user 0m37.102s 00:17:40.446 sys 0m2.819s 00:17:40.446 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:40.446 02:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.446 ************************************ 00:17:40.446 END TEST raid_superblock_test 00:17:40.446 ************************************ 00:17:40.446 02:18:28 bdev_raid -- bdev/bdev_raid.sh@801 -- # for n in {2..4} 00:17:40.446 02:18:28 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:17:40.446 02:18:28 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:40.446 02:18:28 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:40.446 02:18:28 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:40.446 02:18:28 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.446 ************************************ 00:17:40.446 START TEST raid_state_function_test 00:17:40.447 ************************************ 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 
-- # raid_pid=57242 00:17:40.447 Process raid pid: 57242 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 57242' 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 57242 /var/tmp/spdk-raid.sock 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 57242 ']' 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.447 02:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.447 [2024-05-15 02:18:28.262879] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:40.447 [2024-05-15 02:18:28.263148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:41.013 EAL: TSC is not safe to use in SMP mode 00:17:41.013 EAL: TSC is not invariant 00:17:41.013 [2024-05-15 02:18:28.746546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.013 [2024-05-15 02:18:28.851041] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
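The entries above show raid_state_function_test launching its own bdev_svc app (pid 57242) with the raid RPC socket; once that app is listening, every subsequent step of the test is an rpc.py call against /var/tmp/spdk-raid.sock. A condensed sketch of the first steps the test performs, using only invocations that appear verbatim later in this trace (socket path, strip size and bdev names are those of this run; the real test wraps the state checks in verify_raid_bdev_state and waits for the app with waitforlisten):

    # assemble a raid0 volume over four base bdevs that do not exist yet;
    # the raid bdev is registered in "configuring" state until all bases appear
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # create one base bdev; the configuring raid claims it and
    # num_base_bdevs_discovered goes from 0 to 1
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b BaseBdev1
    # inspect the raid bdev's state
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
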
00:17:41.013 [2024-05-15 02:18:28.853348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.013 [2024-05-15 02:18:28.854104] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.013 [2024-05-15 02:18:28.854119] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.579 02:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:41.579 02:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:17:41.579 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:41.836 [2024-05-15 02:18:29.722607] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.836 [2024-05-15 02:18:29.722677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.836 [2024-05-15 02:18:29.722683] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.836 [2024-05-15 02:18:29.722693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.836 [2024-05-15 02:18:29.722696] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.836 [2024-05-15 02:18:29.722704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.836 [2024-05-15 02:18:29.722708] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:41.836 [2024-05-15 02:18:29.722716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.836 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.094 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.094 "name": "Existed_Raid", 00:17:42.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.094 
"strip_size_kb": 64, 00:17:42.094 "state": "configuring", 00:17:42.094 "raid_level": "raid0", 00:17:42.094 "superblock": false, 00:17:42.094 "num_base_bdevs": 4, 00:17:42.094 "num_base_bdevs_discovered": 0, 00:17:42.094 "num_base_bdevs_operational": 4, 00:17:42.094 "base_bdevs_list": [ 00:17:42.094 { 00:17:42.094 "name": "BaseBdev1", 00:17:42.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.094 "is_configured": false, 00:17:42.094 "data_offset": 0, 00:17:42.094 "data_size": 0 00:17:42.094 }, 00:17:42.094 { 00:17:42.094 "name": "BaseBdev2", 00:17:42.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.094 "is_configured": false, 00:17:42.094 "data_offset": 0, 00:17:42.094 "data_size": 0 00:17:42.094 }, 00:17:42.094 { 00:17:42.094 "name": "BaseBdev3", 00:17:42.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.094 "is_configured": false, 00:17:42.094 "data_offset": 0, 00:17:42.094 "data_size": 0 00:17:42.094 }, 00:17:42.094 { 00:17:42.094 "name": "BaseBdev4", 00:17:42.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.094 "is_configured": false, 00:17:42.094 "data_offset": 0, 00:17:42.094 "data_size": 0 00:17:42.094 } 00:17:42.094 ] 00:17:42.094 }' 00:17:42.094 02:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.094 02:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.353 02:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.611 [2024-05-15 02:18:30.610614] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.611 [2024-05-15 02:18:30.610653] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7f7500 name Existed_Raid, state configuring 00:17:42.869 02:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.869 [2024-05-15 02:18:30.862639] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.869 [2024-05-15 02:18:30.862721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.869 [2024-05-15 02:18:30.862727] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.869 [2024-05-15 02:18:30.862736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.869 [2024-05-15 02:18:30.862740] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.869 [2024-05-15 02:18:30.862748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.869 [2024-05-15 02:18:30.862764] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.869 [2024-05-15 02:18:30.862773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.869 02:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:43.127 [2024-05-15 02:18:31.107679] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.127 BaseBdev1 00:17:43.127 02:18:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:17:43.127 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:43.127 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:43.127 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:43.127 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:43.127 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:43.127 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.693 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.951 [ 00:17:43.951 { 00:17:43.951 "name": "BaseBdev1", 00:17:43.951 "aliases": [ 00:17:43.951 "6c5efc15-1261-11ef-99fd-bfc7c66e2865" 00:17:43.951 ], 00:17:43.951 "product_name": "Malloc disk", 00:17:43.951 "block_size": 512, 00:17:43.951 "num_blocks": 65536, 00:17:43.951 "uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:43.951 "assigned_rate_limits": { 00:17:43.951 "rw_ios_per_sec": 0, 00:17:43.951 "rw_mbytes_per_sec": 0, 00:17:43.951 "r_mbytes_per_sec": 0, 00:17:43.951 "w_mbytes_per_sec": 0 00:17:43.951 }, 00:17:43.951 "claimed": true, 00:17:43.951 "claim_type": "exclusive_write", 00:17:43.951 "zoned": false, 00:17:43.951 "supported_io_types": { 00:17:43.951 "read": true, 00:17:43.951 "write": true, 00:17:43.951 "unmap": true, 00:17:43.951 "write_zeroes": true, 00:17:43.951 "flush": true, 00:17:43.951 "reset": true, 00:17:43.952 "compare": false, 00:17:43.952 "compare_and_write": false, 00:17:43.952 "abort": true, 00:17:43.952 "nvme_admin": false, 00:17:43.952 "nvme_io": false 00:17:43.952 }, 00:17:43.952 "memory_domains": [ 00:17:43.952 { 00:17:43.952 "dma_device_id": "system", 00:17:43.952 "dma_device_type": 1 00:17:43.952 }, 00:17:43.952 { 00:17:43.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.952 "dma_device_type": 2 00:17:43.952 } 00:17:43.952 ], 00:17:43.952 "driver_specific": {} 00:17:43.952 } 00:17:43.952 ] 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.952 02:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.209 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.210 "name": "Existed_Raid", 00:17:44.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.210 "strip_size_kb": 64, 00:17:44.210 "state": "configuring", 00:17:44.210 "raid_level": "raid0", 00:17:44.210 "superblock": false, 00:17:44.210 "num_base_bdevs": 4, 00:17:44.210 "num_base_bdevs_discovered": 1, 00:17:44.210 "num_base_bdevs_operational": 4, 00:17:44.210 "base_bdevs_list": [ 00:17:44.210 { 00:17:44.210 "name": "BaseBdev1", 00:17:44.210 "uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:44.210 "is_configured": true, 00:17:44.210 "data_offset": 0, 00:17:44.210 "data_size": 65536 00:17:44.210 }, 00:17:44.210 { 00:17:44.210 "name": "BaseBdev2", 00:17:44.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.210 "is_configured": false, 00:17:44.210 "data_offset": 0, 00:17:44.210 "data_size": 0 00:17:44.210 }, 00:17:44.210 { 00:17:44.210 "name": "BaseBdev3", 00:17:44.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.210 "is_configured": false, 00:17:44.210 "data_offset": 0, 00:17:44.210 "data_size": 0 00:17:44.210 }, 00:17:44.210 { 00:17:44.210 "name": "BaseBdev4", 00:17:44.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.210 "is_configured": false, 00:17:44.210 "data_offset": 0, 00:17:44.210 "data_size": 0 00:17:44.210 } 00:17:44.210 ] 00:17:44.210 }' 00:17:44.210 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.210 02:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.467 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:44.727 [2024-05-15 02:18:32.690696] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.727 [2024-05-15 02:18:32.690740] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7f7500 name Existed_Raid, state configuring 00:17:44.727 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:44.987 [2024-05-15 02:18:32.930755] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.987 [2024-05-15 02:18:32.931565] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.987 [2024-05-15 02:18:32.931633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.987 [2024-05-15 02:18:32.931643] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.987 [2024-05-15 02:18:32.931660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.987 [2024-05-15 02:18:32.931669] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev4 00:17:44.987 [2024-05-15 02:18:32.931684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.987 02:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.246 02:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.246 "name": "Existed_Raid", 00:17:45.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.246 "strip_size_kb": 64, 00:17:45.246 "state": "configuring", 00:17:45.246 "raid_level": "raid0", 00:17:45.246 "superblock": false, 00:17:45.246 "num_base_bdevs": 4, 00:17:45.246 "num_base_bdevs_discovered": 1, 00:17:45.246 "num_base_bdevs_operational": 4, 00:17:45.246 "base_bdevs_list": [ 00:17:45.246 { 00:17:45.246 "name": "BaseBdev1", 00:17:45.246 "uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:45.246 "is_configured": true, 00:17:45.246 "data_offset": 0, 00:17:45.246 "data_size": 65536 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "name": "BaseBdev2", 00:17:45.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.246 "is_configured": false, 00:17:45.246 "data_offset": 0, 00:17:45.246 "data_size": 0 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "name": "BaseBdev3", 00:17:45.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.246 "is_configured": false, 00:17:45.246 "data_offset": 0, 00:17:45.246 "data_size": 0 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "name": "BaseBdev4", 00:17:45.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.246 "is_configured": false, 00:17:45.246 "data_offset": 0, 00:17:45.246 "data_size": 0 00:17:45.246 } 00:17:45.246 ] 00:17:45.246 }' 00:17:45.246 02:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.246 02:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.812 02:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:46.069 [2024-05-15 02:18:33.870883] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.069 BaseBdev2 00:17:46.069 02:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:17:46.069 02:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:46.069 02:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:46.069 02:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:46.069 02:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:46.069 02:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:46.069 02:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.326 02:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.584 [ 00:17:46.584 { 00:17:46.584 "name": "BaseBdev2", 00:17:46.584 "aliases": [ 00:17:46.584 "6e04c01d-1261-11ef-99fd-bfc7c66e2865" 00:17:46.584 ], 00:17:46.584 "product_name": "Malloc disk", 00:17:46.584 "block_size": 512, 00:17:46.584 "num_blocks": 65536, 00:17:46.584 "uuid": "6e04c01d-1261-11ef-99fd-bfc7c66e2865", 00:17:46.584 "assigned_rate_limits": { 00:17:46.584 "rw_ios_per_sec": 0, 00:17:46.584 "rw_mbytes_per_sec": 0, 00:17:46.584 "r_mbytes_per_sec": 0, 00:17:46.584 "w_mbytes_per_sec": 0 00:17:46.584 }, 00:17:46.584 "claimed": true, 00:17:46.584 "claim_type": "exclusive_write", 00:17:46.584 "zoned": false, 00:17:46.584 "supported_io_types": { 00:17:46.584 "read": true, 00:17:46.584 "write": true, 00:17:46.584 "unmap": true, 00:17:46.584 "write_zeroes": true, 00:17:46.584 "flush": true, 00:17:46.584 "reset": true, 00:17:46.584 "compare": false, 00:17:46.584 "compare_and_write": false, 00:17:46.584 "abort": true, 00:17:46.584 "nvme_admin": false, 00:17:46.584 "nvme_io": false 00:17:46.584 }, 00:17:46.584 "memory_domains": [ 00:17:46.584 { 00:17:46.584 "dma_device_id": "system", 00:17:46.584 "dma_device_type": 1 00:17:46.584 }, 00:17:46.584 { 00:17:46.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.584 "dma_device_type": 2 00:17:46.584 } 00:17:46.584 ], 00:17:46.584 "driver_specific": {} 00:17:46.584 } 00:17:46.584 ] 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.584 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.843 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.843 "name": "Existed_Raid", 00:17:46.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.843 "strip_size_kb": 64, 00:17:46.843 "state": "configuring", 00:17:46.843 "raid_level": "raid0", 00:17:46.843 "superblock": false, 00:17:46.843 "num_base_bdevs": 4, 00:17:46.843 "num_base_bdevs_discovered": 2, 00:17:46.843 "num_base_bdevs_operational": 4, 00:17:46.843 "base_bdevs_list": [ 00:17:46.843 { 00:17:46.843 "name": "BaseBdev1", 00:17:46.843 "uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:46.843 "is_configured": true, 00:17:46.843 "data_offset": 0, 00:17:46.843 "data_size": 65536 00:17:46.843 }, 00:17:46.843 { 00:17:46.843 "name": "BaseBdev2", 00:17:46.843 "uuid": "6e04c01d-1261-11ef-99fd-bfc7c66e2865", 00:17:46.843 "is_configured": true, 00:17:46.843 "data_offset": 0, 00:17:46.843 "data_size": 65536 00:17:46.843 }, 00:17:46.843 { 00:17:46.843 "name": "BaseBdev3", 00:17:46.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.843 "is_configured": false, 00:17:46.843 "data_offset": 0, 00:17:46.843 "data_size": 0 00:17:46.843 }, 00:17:46.843 { 00:17:46.843 "name": "BaseBdev4", 00:17:46.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.843 "is_configured": false, 00:17:46.843 "data_offset": 0, 00:17:46.843 "data_size": 0 00:17:46.843 } 00:17:46.843 ] 00:17:46.843 }' 00:17:46.843 02:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.843 02:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.201 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:47.460 [2024-05-15 02:18:35.326944] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.460 BaseBdev3 00:17:47.460 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:17:47.460 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:47.460 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:47.460 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:47.460 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:47.460 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # 
bdev_timeout=2000 00:17:47.460 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.718 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.978 [ 00:17:47.978 { 00:17:47.978 "name": "BaseBdev3", 00:17:47.978 "aliases": [ 00:17:47.978 "6ee2ee63-1261-11ef-99fd-bfc7c66e2865" 00:17:47.978 ], 00:17:47.978 "product_name": "Malloc disk", 00:17:47.978 "block_size": 512, 00:17:47.978 "num_blocks": 65536, 00:17:47.978 "uuid": "6ee2ee63-1261-11ef-99fd-bfc7c66e2865", 00:17:47.978 "assigned_rate_limits": { 00:17:47.978 "rw_ios_per_sec": 0, 00:17:47.978 "rw_mbytes_per_sec": 0, 00:17:47.978 "r_mbytes_per_sec": 0, 00:17:47.978 "w_mbytes_per_sec": 0 00:17:47.978 }, 00:17:47.978 "claimed": true, 00:17:47.978 "claim_type": "exclusive_write", 00:17:47.978 "zoned": false, 00:17:47.978 "supported_io_types": { 00:17:47.978 "read": true, 00:17:47.978 "write": true, 00:17:47.978 "unmap": true, 00:17:47.978 "write_zeroes": true, 00:17:47.978 "flush": true, 00:17:47.978 "reset": true, 00:17:47.978 "compare": false, 00:17:47.978 "compare_and_write": false, 00:17:47.978 "abort": true, 00:17:47.978 "nvme_admin": false, 00:17:47.978 "nvme_io": false 00:17:47.978 }, 00:17:47.978 "memory_domains": [ 00:17:47.978 { 00:17:47.978 "dma_device_id": "system", 00:17:47.978 "dma_device_type": 1 00:17:47.978 }, 00:17:47.978 { 00:17:47.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.978 "dma_device_type": 2 00:17:47.978 } 00:17:47.978 ], 00:17:47.978 "driver_specific": {} 00:17:47.978 } 00:17:47.978 ] 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.978 02:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:48.237 02:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.237 "name": "Existed_Raid", 00:17:48.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.237 "strip_size_kb": 64, 00:17:48.237 "state": "configuring", 00:17:48.237 "raid_level": "raid0", 00:17:48.237 "superblock": false, 00:17:48.237 "num_base_bdevs": 4, 00:17:48.237 "num_base_bdevs_discovered": 3, 00:17:48.237 "num_base_bdevs_operational": 4, 00:17:48.237 "base_bdevs_list": [ 00:17:48.237 { 00:17:48.237 "name": "BaseBdev1", 00:17:48.237 "uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:48.237 "is_configured": true, 00:17:48.237 "data_offset": 0, 00:17:48.237 "data_size": 65536 00:17:48.237 }, 00:17:48.237 { 00:17:48.237 "name": "BaseBdev2", 00:17:48.237 "uuid": "6e04c01d-1261-11ef-99fd-bfc7c66e2865", 00:17:48.237 "is_configured": true, 00:17:48.237 "data_offset": 0, 00:17:48.237 "data_size": 65536 00:17:48.237 }, 00:17:48.237 { 00:17:48.237 "name": "BaseBdev3", 00:17:48.237 "uuid": "6ee2ee63-1261-11ef-99fd-bfc7c66e2865", 00:17:48.237 "is_configured": true, 00:17:48.237 "data_offset": 0, 00:17:48.237 "data_size": 65536 00:17:48.237 }, 00:17:48.237 { 00:17:48.237 "name": "BaseBdev4", 00:17:48.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.237 "is_configured": false, 00:17:48.237 "data_offset": 0, 00:17:48.237 "data_size": 0 00:17:48.237 } 00:17:48.237 ] 00:17:48.237 }' 00:17:48.237 02:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.237 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.496 02:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:48.755 [2024-05-15 02:18:36.654966] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:48.755 [2024-05-15 02:18:36.654998] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7f7a00 00:17:48.755 [2024-05-15 02:18:36.655003] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:48.755 [2024-05-15 02:18:36.655035] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c85aec0 00:17:48.755 [2024-05-15 02:18:36.655123] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7f7a00 00:17:48.755 [2024-05-15 02:18:36.655128] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c7f7a00 00:17:48.755 [2024-05-15 02:18:36.655161] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.755 BaseBdev4 00:17:48.755 02:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:17:48.755 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:17:48.755 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:48.755 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:48.755 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:48.755 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:48.755 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.014 02:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:49.580 [ 00:17:49.580 { 00:17:49.580 "name": "BaseBdev4", 00:17:49.580 "aliases": [ 00:17:49.580 "6fad9288-1261-11ef-99fd-bfc7c66e2865" 00:17:49.580 ], 00:17:49.580 "product_name": "Malloc disk", 00:17:49.580 "block_size": 512, 00:17:49.580 "num_blocks": 65536, 00:17:49.580 "uuid": "6fad9288-1261-11ef-99fd-bfc7c66e2865", 00:17:49.580 "assigned_rate_limits": { 00:17:49.580 "rw_ios_per_sec": 0, 00:17:49.580 "rw_mbytes_per_sec": 0, 00:17:49.580 "r_mbytes_per_sec": 0, 00:17:49.580 "w_mbytes_per_sec": 0 00:17:49.580 }, 00:17:49.580 "claimed": true, 00:17:49.580 "claim_type": "exclusive_write", 00:17:49.580 "zoned": false, 00:17:49.580 "supported_io_types": { 00:17:49.580 "read": true, 00:17:49.580 "write": true, 00:17:49.580 "unmap": true, 00:17:49.580 "write_zeroes": true, 00:17:49.580 "flush": true, 00:17:49.580 "reset": true, 00:17:49.580 "compare": false, 00:17:49.580 "compare_and_write": false, 00:17:49.580 "abort": true, 00:17:49.580 "nvme_admin": false, 00:17:49.580 "nvme_io": false 00:17:49.580 }, 00:17:49.580 "memory_domains": [ 00:17:49.580 { 00:17:49.580 "dma_device_id": "system", 00:17:49.580 "dma_device_type": 1 00:17:49.580 }, 00:17:49.580 { 00:17:49.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.580 "dma_device_type": 2 00:17:49.580 } 00:17:49.580 ], 00:17:49.580 "driver_specific": {} 00:17:49.580 } 00:17:49.580 ] 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.580 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.838 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:17:49.838 "name": "Existed_Raid", 00:17:49.838 "uuid": "6fad98ac-1261-11ef-99fd-bfc7c66e2865", 00:17:49.838 "strip_size_kb": 64, 00:17:49.838 "state": "online", 00:17:49.838 "raid_level": "raid0", 00:17:49.838 "superblock": false, 00:17:49.838 "num_base_bdevs": 4, 00:17:49.838 "num_base_bdevs_discovered": 4, 00:17:49.838 "num_base_bdevs_operational": 4, 00:17:49.838 "base_bdevs_list": [ 00:17:49.838 { 00:17:49.838 "name": "BaseBdev1", 00:17:49.838 "uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:49.838 "is_configured": true, 00:17:49.838 "data_offset": 0, 00:17:49.838 "data_size": 65536 00:17:49.838 }, 00:17:49.838 { 00:17:49.838 "name": "BaseBdev2", 00:17:49.838 "uuid": "6e04c01d-1261-11ef-99fd-bfc7c66e2865", 00:17:49.838 "is_configured": true, 00:17:49.838 "data_offset": 0, 00:17:49.838 "data_size": 65536 00:17:49.838 }, 00:17:49.838 { 00:17:49.838 "name": "BaseBdev3", 00:17:49.838 "uuid": "6ee2ee63-1261-11ef-99fd-bfc7c66e2865", 00:17:49.838 "is_configured": true, 00:17:49.838 "data_offset": 0, 00:17:49.838 "data_size": 65536 00:17:49.838 }, 00:17:49.838 { 00:17:49.838 "name": "BaseBdev4", 00:17:49.838 "uuid": "6fad9288-1261-11ef-99fd-bfc7c66e2865", 00:17:49.838 "is_configured": true, 00:17:49.839 "data_offset": 0, 00:17:49.839 "data_size": 65536 00:17:49.839 } 00:17:49.839 ] 00:17:49.839 }' 00:17:49.839 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.839 02:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:50.097 02:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:50.356 [2024-05-15 02:18:38.210991] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.356 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:50.356 "name": "Existed_Raid", 00:17:50.356 "aliases": [ 00:17:50.356 "6fad98ac-1261-11ef-99fd-bfc7c66e2865" 00:17:50.356 ], 00:17:50.356 "product_name": "Raid Volume", 00:17:50.356 "block_size": 512, 00:17:50.356 "num_blocks": 262144, 00:17:50.356 "uuid": "6fad98ac-1261-11ef-99fd-bfc7c66e2865", 00:17:50.356 "assigned_rate_limits": { 00:17:50.356 "rw_ios_per_sec": 0, 00:17:50.356 "rw_mbytes_per_sec": 0, 00:17:50.356 "r_mbytes_per_sec": 0, 00:17:50.356 "w_mbytes_per_sec": 0 00:17:50.356 }, 00:17:50.356 "claimed": false, 00:17:50.356 "zoned": false, 00:17:50.356 "supported_io_types": { 00:17:50.356 "read": true, 00:17:50.356 "write": true, 00:17:50.356 "unmap": true, 00:17:50.356 "write_zeroes": true, 00:17:50.356 "flush": true, 00:17:50.356 "reset": true, 00:17:50.356 "compare": false, 00:17:50.356 "compare_and_write": 
false, 00:17:50.356 "abort": false, 00:17:50.356 "nvme_admin": false, 00:17:50.356 "nvme_io": false 00:17:50.356 }, 00:17:50.356 "memory_domains": [ 00:17:50.356 { 00:17:50.356 "dma_device_id": "system", 00:17:50.356 "dma_device_type": 1 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.356 "dma_device_type": 2 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "dma_device_id": "system", 00:17:50.356 "dma_device_type": 1 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.356 "dma_device_type": 2 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "dma_device_id": "system", 00:17:50.356 "dma_device_type": 1 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.356 "dma_device_type": 2 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "dma_device_id": "system", 00:17:50.356 "dma_device_type": 1 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.356 "dma_device_type": 2 00:17:50.356 } 00:17:50.356 ], 00:17:50.356 "driver_specific": { 00:17:50.356 "raid": { 00:17:50.356 "uuid": "6fad98ac-1261-11ef-99fd-bfc7c66e2865", 00:17:50.356 "strip_size_kb": 64, 00:17:50.356 "state": "online", 00:17:50.356 "raid_level": "raid0", 00:17:50.356 "superblock": false, 00:17:50.356 "num_base_bdevs": 4, 00:17:50.356 "num_base_bdevs_discovered": 4, 00:17:50.356 "num_base_bdevs_operational": 4, 00:17:50.356 "base_bdevs_list": [ 00:17:50.356 { 00:17:50.356 "name": "BaseBdev1", 00:17:50.356 "uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:50.356 "is_configured": true, 00:17:50.356 "data_offset": 0, 00:17:50.356 "data_size": 65536 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "name": "BaseBdev2", 00:17:50.356 "uuid": "6e04c01d-1261-11ef-99fd-bfc7c66e2865", 00:17:50.356 "is_configured": true, 00:17:50.356 "data_offset": 0, 00:17:50.356 "data_size": 65536 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "name": "BaseBdev3", 00:17:50.356 "uuid": "6ee2ee63-1261-11ef-99fd-bfc7c66e2865", 00:17:50.356 "is_configured": true, 00:17:50.356 "data_offset": 0, 00:17:50.356 "data_size": 65536 00:17:50.356 }, 00:17:50.356 { 00:17:50.356 "name": "BaseBdev4", 00:17:50.356 "uuid": "6fad9288-1261-11ef-99fd-bfc7c66e2865", 00:17:50.356 "is_configured": true, 00:17:50.356 "data_offset": 0, 00:17:50.356 "data_size": 65536 00:17:50.356 } 00:17:50.356 ] 00:17:50.356 } 00:17:50.356 } 00:17:50.356 }' 00:17:50.356 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.356 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:17:50.356 BaseBdev2 00:17:50.356 BaseBdev3 00:17:50.356 BaseBdev4' 00:17:50.356 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:50.356 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:50.356 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:50.614 "name": "BaseBdev1", 00:17:50.614 "aliases": [ 00:17:50.614 "6c5efc15-1261-11ef-99fd-bfc7c66e2865" 00:17:50.614 ], 00:17:50.614 "product_name": "Malloc disk", 00:17:50.614 "block_size": 512, 00:17:50.614 "num_blocks": 65536, 00:17:50.614 
"uuid": "6c5efc15-1261-11ef-99fd-bfc7c66e2865", 00:17:50.614 "assigned_rate_limits": { 00:17:50.614 "rw_ios_per_sec": 0, 00:17:50.614 "rw_mbytes_per_sec": 0, 00:17:50.614 "r_mbytes_per_sec": 0, 00:17:50.614 "w_mbytes_per_sec": 0 00:17:50.614 }, 00:17:50.614 "claimed": true, 00:17:50.614 "claim_type": "exclusive_write", 00:17:50.614 "zoned": false, 00:17:50.614 "supported_io_types": { 00:17:50.614 "read": true, 00:17:50.614 "write": true, 00:17:50.614 "unmap": true, 00:17:50.614 "write_zeroes": true, 00:17:50.614 "flush": true, 00:17:50.614 "reset": true, 00:17:50.614 "compare": false, 00:17:50.614 "compare_and_write": false, 00:17:50.614 "abort": true, 00:17:50.614 "nvme_admin": false, 00:17:50.614 "nvme_io": false 00:17:50.614 }, 00:17:50.614 "memory_domains": [ 00:17:50.614 { 00:17:50.614 "dma_device_id": "system", 00:17:50.614 "dma_device_type": 1 00:17:50.614 }, 00:17:50.614 { 00:17:50.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.614 "dma_device_type": 2 00:17:50.614 } 00:17:50.614 ], 00:17:50.614 "driver_specific": {} 00:17:50.614 }' 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:50.614 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:51.180 "name": "BaseBdev2", 00:17:51.180 "aliases": [ 00:17:51.180 "6e04c01d-1261-11ef-99fd-bfc7c66e2865" 00:17:51.180 ], 00:17:51.180 "product_name": "Malloc disk", 00:17:51.180 "block_size": 512, 00:17:51.180 "num_blocks": 65536, 00:17:51.180 "uuid": "6e04c01d-1261-11ef-99fd-bfc7c66e2865", 00:17:51.180 "assigned_rate_limits": { 00:17:51.180 "rw_ios_per_sec": 0, 00:17:51.180 "rw_mbytes_per_sec": 0, 00:17:51.180 "r_mbytes_per_sec": 0, 00:17:51.180 "w_mbytes_per_sec": 0 00:17:51.180 }, 00:17:51.180 "claimed": true, 00:17:51.180 "claim_type": "exclusive_write", 00:17:51.180 "zoned": false, 00:17:51.180 "supported_io_types": { 00:17:51.180 "read": true, 00:17:51.180 "write": true, 00:17:51.180 "unmap": true, 00:17:51.180 
"write_zeroes": true, 00:17:51.180 "flush": true, 00:17:51.180 "reset": true, 00:17:51.180 "compare": false, 00:17:51.180 "compare_and_write": false, 00:17:51.180 "abort": true, 00:17:51.180 "nvme_admin": false, 00:17:51.180 "nvme_io": false 00:17:51.180 }, 00:17:51.180 "memory_domains": [ 00:17:51.180 { 00:17:51.180 "dma_device_id": "system", 00:17:51.180 "dma_device_type": 1 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.180 "dma_device_type": 2 00:17:51.180 } 00:17:51.180 ], 00:17:51.180 "driver_specific": {} 00:17:51.180 }' 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:51.180 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:51.181 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:51.181 02:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:51.439 "name": "BaseBdev3", 00:17:51.439 "aliases": [ 00:17:51.439 "6ee2ee63-1261-11ef-99fd-bfc7c66e2865" 00:17:51.439 ], 00:17:51.439 "product_name": "Malloc disk", 00:17:51.439 "block_size": 512, 00:17:51.439 "num_blocks": 65536, 00:17:51.439 "uuid": "6ee2ee63-1261-11ef-99fd-bfc7c66e2865", 00:17:51.439 "assigned_rate_limits": { 00:17:51.439 "rw_ios_per_sec": 0, 00:17:51.439 "rw_mbytes_per_sec": 0, 00:17:51.439 "r_mbytes_per_sec": 0, 00:17:51.439 "w_mbytes_per_sec": 0 00:17:51.439 }, 00:17:51.439 "claimed": true, 00:17:51.439 "claim_type": "exclusive_write", 00:17:51.439 "zoned": false, 00:17:51.439 "supported_io_types": { 00:17:51.439 "read": true, 00:17:51.439 "write": true, 00:17:51.439 "unmap": true, 00:17:51.439 "write_zeroes": true, 00:17:51.439 "flush": true, 00:17:51.439 "reset": true, 00:17:51.439 "compare": false, 00:17:51.439 "compare_and_write": false, 00:17:51.439 "abort": true, 00:17:51.439 "nvme_admin": false, 00:17:51.439 "nvme_io": false 00:17:51.439 }, 00:17:51.439 "memory_domains": [ 00:17:51.439 { 00:17:51.439 "dma_device_id": "system", 00:17:51.439 "dma_device_type": 1 00:17:51.439 }, 00:17:51.439 { 00:17:51.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.439 "dma_device_type": 
2 00:17:51.439 } 00:17:51.439 ], 00:17:51.439 "driver_specific": {} 00:17:51.439 }' 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:51.439 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:51.440 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:51.698 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:51.698 "name": "BaseBdev4", 00:17:51.698 "aliases": [ 00:17:51.698 "6fad9288-1261-11ef-99fd-bfc7c66e2865" 00:17:51.698 ], 00:17:51.698 "product_name": "Malloc disk", 00:17:51.698 "block_size": 512, 00:17:51.698 "num_blocks": 65536, 00:17:51.699 "uuid": "6fad9288-1261-11ef-99fd-bfc7c66e2865", 00:17:51.699 "assigned_rate_limits": { 00:17:51.699 "rw_ios_per_sec": 0, 00:17:51.699 "rw_mbytes_per_sec": 0, 00:17:51.699 "r_mbytes_per_sec": 0, 00:17:51.699 "w_mbytes_per_sec": 0 00:17:51.699 }, 00:17:51.699 "claimed": true, 00:17:51.699 "claim_type": "exclusive_write", 00:17:51.699 "zoned": false, 00:17:51.699 "supported_io_types": { 00:17:51.699 "read": true, 00:17:51.699 "write": true, 00:17:51.699 "unmap": true, 00:17:51.699 "write_zeroes": true, 00:17:51.699 "flush": true, 00:17:51.699 "reset": true, 00:17:51.699 "compare": false, 00:17:51.699 "compare_and_write": false, 00:17:51.699 "abort": true, 00:17:51.699 "nvme_admin": false, 00:17:51.699 "nvme_io": false 00:17:51.699 }, 00:17:51.699 "memory_domains": [ 00:17:51.699 { 00:17:51.699 "dma_device_id": "system", 00:17:51.699 "dma_device_type": 1 00:17:51.699 }, 00:17:51.699 { 00:17:51.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.699 "dma_device_type": 2 00:17:51.699 } 00:17:51.699 ], 00:17:51.699 "driver_specific": {} 00:17:51.699 }' 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 
00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:51.699 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:51.957 [2024-05-15 02:18:39.887010] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.957 [2024-05-15 02:18:39.887047] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.957 [2024-05-15 02:18:39.887075] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.957 02:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.215 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.215 "name": "Existed_Raid", 00:17:52.215 "uuid": 
"6fad98ac-1261-11ef-99fd-bfc7c66e2865", 00:17:52.215 "strip_size_kb": 64, 00:17:52.215 "state": "offline", 00:17:52.215 "raid_level": "raid0", 00:17:52.215 "superblock": false, 00:17:52.215 "num_base_bdevs": 4, 00:17:52.215 "num_base_bdevs_discovered": 3, 00:17:52.215 "num_base_bdevs_operational": 3, 00:17:52.215 "base_bdevs_list": [ 00:17:52.215 { 00:17:52.215 "name": null, 00:17:52.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.215 "is_configured": false, 00:17:52.215 "data_offset": 0, 00:17:52.215 "data_size": 65536 00:17:52.215 }, 00:17:52.215 { 00:17:52.215 "name": "BaseBdev2", 00:17:52.215 "uuid": "6e04c01d-1261-11ef-99fd-bfc7c66e2865", 00:17:52.215 "is_configured": true, 00:17:52.215 "data_offset": 0, 00:17:52.215 "data_size": 65536 00:17:52.215 }, 00:17:52.215 { 00:17:52.215 "name": "BaseBdev3", 00:17:52.215 "uuid": "6ee2ee63-1261-11ef-99fd-bfc7c66e2865", 00:17:52.215 "is_configured": true, 00:17:52.215 "data_offset": 0, 00:17:52.215 "data_size": 65536 00:17:52.215 }, 00:17:52.215 { 00:17:52.215 "name": "BaseBdev4", 00:17:52.215 "uuid": "6fad9288-1261-11ef-99fd-bfc7c66e2865", 00:17:52.215 "is_configured": true, 00:17:52.215 "data_offset": 0, 00:17:52.215 "data_size": 65536 00:17:52.215 } 00:17:52.215 ] 00:17:52.215 }' 00:17:52.215 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.215 02:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.781 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:52.781 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:52.781 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.781 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:53.038 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:53.038 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:53.038 02:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:53.294 [2024-05-15 02:18:41.139944] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:53.294 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:53.294 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.294 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:53.294 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.555 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:53.555 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:53.555 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:53.830 [2024-05-15 02:18:41.716745] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:53.830 
02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:53.830 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.830 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.830 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:54.099 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:54.099 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.099 02:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:54.357 [2024-05-15 02:18:42.237540] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:54.357 [2024-05-15 02:18:42.237603] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7f7a00 name Existed_Raid, state offline 00:17:54.357 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:54.357 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:54.357 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:54.357 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.616 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:54.616 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:54.616 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:17:54.616 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:17:54.616 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:54.616 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:54.873 BaseBdev2 00:17:55.131 02:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:17:55.131 02:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:55.131 02:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:55.131 02:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:55.131 02:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:55.131 02:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:55.131 02:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.388 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:55.657 [ 00:17:55.657 { 00:17:55.657 "name": 
"BaseBdev2", 00:17:55.657 "aliases": [ 00:17:55.657 "7362b805-1261-11ef-99fd-bfc7c66e2865" 00:17:55.657 ], 00:17:55.657 "product_name": "Malloc disk", 00:17:55.657 "block_size": 512, 00:17:55.657 "num_blocks": 65536, 00:17:55.657 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:17:55.657 "assigned_rate_limits": { 00:17:55.657 "rw_ios_per_sec": 0, 00:17:55.657 "rw_mbytes_per_sec": 0, 00:17:55.657 "r_mbytes_per_sec": 0, 00:17:55.657 "w_mbytes_per_sec": 0 00:17:55.657 }, 00:17:55.657 "claimed": false, 00:17:55.657 "zoned": false, 00:17:55.657 "supported_io_types": { 00:17:55.657 "read": true, 00:17:55.657 "write": true, 00:17:55.657 "unmap": true, 00:17:55.657 "write_zeroes": true, 00:17:55.657 "flush": true, 00:17:55.657 "reset": true, 00:17:55.657 "compare": false, 00:17:55.657 "compare_and_write": false, 00:17:55.657 "abort": true, 00:17:55.657 "nvme_admin": false, 00:17:55.657 "nvme_io": false 00:17:55.657 }, 00:17:55.657 "memory_domains": [ 00:17:55.657 { 00:17:55.657 "dma_device_id": "system", 00:17:55.658 "dma_device_type": 1 00:17:55.658 }, 00:17:55.658 { 00:17:55.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.658 "dma_device_type": 2 00:17:55.658 } 00:17:55.658 ], 00:17:55.658 "driver_specific": {} 00:17:55.658 } 00:17:55.658 ] 00:17:55.658 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:55.658 02:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:55.658 02:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:55.658 02:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:55.923 BaseBdev3 00:17:55.923 02:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:17:55.923 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:55.923 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:55.923 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:55.923 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:55.923 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:55.923 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.181 02:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:56.439 [ 00:17:56.439 { 00:17:56.439 "name": "BaseBdev3", 00:17:56.439 "aliases": [ 00:17:56.439 "73e553fb-1261-11ef-99fd-bfc7c66e2865" 00:17:56.439 ], 00:17:56.439 "product_name": "Malloc disk", 00:17:56.439 "block_size": 512, 00:17:56.439 "num_blocks": 65536, 00:17:56.439 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:17:56.439 "assigned_rate_limits": { 00:17:56.439 "rw_ios_per_sec": 0, 00:17:56.439 "rw_mbytes_per_sec": 0, 00:17:56.439 "r_mbytes_per_sec": 0, 00:17:56.439 "w_mbytes_per_sec": 0 00:17:56.439 }, 00:17:56.439 "claimed": false, 00:17:56.439 "zoned": false, 00:17:56.439 "supported_io_types": { 00:17:56.439 "read": true, 00:17:56.439 "write": true, 
00:17:56.439 "unmap": true, 00:17:56.439 "write_zeroes": true, 00:17:56.439 "flush": true, 00:17:56.439 "reset": true, 00:17:56.439 "compare": false, 00:17:56.439 "compare_and_write": false, 00:17:56.439 "abort": true, 00:17:56.439 "nvme_admin": false, 00:17:56.439 "nvme_io": false 00:17:56.439 }, 00:17:56.439 "memory_domains": [ 00:17:56.439 { 00:17:56.439 "dma_device_id": "system", 00:17:56.439 "dma_device_type": 1 00:17:56.439 }, 00:17:56.439 { 00:17:56.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.439 "dma_device_type": 2 00:17:56.439 } 00:17:56.439 ], 00:17:56.439 "driver_specific": {} 00:17:56.439 } 00:17:56.439 ] 00:17:56.439 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:56.439 02:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:56.439 02:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:56.439 02:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:56.696 BaseBdev4 00:17:56.696 02:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:17:56.696 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:17:56.696 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:56.696 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:56.696 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:56.696 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:56.696 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.969 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:56.969 [ 00:17:56.969 { 00:17:56.969 "name": "BaseBdev4", 00:17:56.969 "aliases": [ 00:17:56.969 "745d8e89-1261-11ef-99fd-bfc7c66e2865" 00:17:56.969 ], 00:17:56.969 "product_name": "Malloc disk", 00:17:56.969 "block_size": 512, 00:17:56.969 "num_blocks": 65536, 00:17:56.969 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:17:56.969 "assigned_rate_limits": { 00:17:56.969 "rw_ios_per_sec": 0, 00:17:56.969 "rw_mbytes_per_sec": 0, 00:17:56.969 "r_mbytes_per_sec": 0, 00:17:56.969 "w_mbytes_per_sec": 0 00:17:56.969 }, 00:17:56.969 "claimed": false, 00:17:56.969 "zoned": false, 00:17:56.969 "supported_io_types": { 00:17:56.969 "read": true, 00:17:56.969 "write": true, 00:17:56.969 "unmap": true, 00:17:56.969 "write_zeroes": true, 00:17:56.969 "flush": true, 00:17:56.969 "reset": true, 00:17:56.969 "compare": false, 00:17:56.969 "compare_and_write": false, 00:17:56.969 "abort": true, 00:17:56.969 "nvme_admin": false, 00:17:56.969 "nvme_io": false 00:17:56.969 }, 00:17:56.969 "memory_domains": [ 00:17:56.969 { 00:17:56.969 "dma_device_id": "system", 00:17:56.969 "dma_device_type": 1 00:17:56.969 }, 00:17:56.969 { 00:17:56.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.969 "dma_device_type": 2 00:17:56.969 } 00:17:56.970 ], 00:17:56.970 "driver_specific": {} 00:17:56.970 } 00:17:56.970 ] 
00:17:56.970 02:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:56.970 02:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:56.970 02:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:56.970 02:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:57.534 [2024-05-15 02:18:45.283041] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.534 [2024-05-15 02:18:45.283103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.534 [2024-05-15 02:18:45.283113] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.534 [2024-05-15 02:18:45.283567] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.534 [2024-05-15 02:18:45.283582] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.534 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.792 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.792 "name": "Existed_Raid", 00:17:57.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.792 "strip_size_kb": 64, 00:17:57.792 "state": "configuring", 00:17:57.792 "raid_level": "raid0", 00:17:57.792 "superblock": false, 00:17:57.792 "num_base_bdevs": 4, 00:17:57.792 "num_base_bdevs_discovered": 3, 00:17:57.792 "num_base_bdevs_operational": 4, 00:17:57.792 "base_bdevs_list": [ 00:17:57.792 { 00:17:57.792 "name": "BaseBdev1", 00:17:57.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.792 "is_configured": false, 00:17:57.792 "data_offset": 0, 00:17:57.792 "data_size": 0 00:17:57.792 }, 00:17:57.792 { 00:17:57.792 "name": "BaseBdev2", 00:17:57.792 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:17:57.792 "is_configured": true, 
00:17:57.792 "data_offset": 0, 00:17:57.792 "data_size": 65536 00:17:57.792 }, 00:17:57.792 { 00:17:57.792 "name": "BaseBdev3", 00:17:57.792 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:17:57.792 "is_configured": true, 00:17:57.792 "data_offset": 0, 00:17:57.792 "data_size": 65536 00:17:57.792 }, 00:17:57.792 { 00:17:57.792 "name": "BaseBdev4", 00:17:57.792 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:17:57.792 "is_configured": true, 00:17:57.792 "data_offset": 0, 00:17:57.792 "data_size": 65536 00:17:57.792 } 00:17:57.792 ] 00:17:57.792 }' 00:17:57.792 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.792 02:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.050 02:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:58.307 [2024-05-15 02:18:46.259004] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.307 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.565 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.565 "name": "Existed_Raid", 00:17:58.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.565 "strip_size_kb": 64, 00:17:58.565 "state": "configuring", 00:17:58.565 "raid_level": "raid0", 00:17:58.565 "superblock": false, 00:17:58.565 "num_base_bdevs": 4, 00:17:58.565 "num_base_bdevs_discovered": 2, 00:17:58.565 "num_base_bdevs_operational": 4, 00:17:58.565 "base_bdevs_list": [ 00:17:58.565 { 00:17:58.565 "name": "BaseBdev1", 00:17:58.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.565 "is_configured": false, 00:17:58.565 "data_offset": 0, 00:17:58.565 "data_size": 0 00:17:58.565 }, 00:17:58.565 { 00:17:58.565 "name": null, 00:17:58.565 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:17:58.565 "is_configured": false, 00:17:58.565 "data_offset": 0, 00:17:58.565 "data_size": 65536 00:17:58.565 }, 00:17:58.565 { 00:17:58.565 "name": "BaseBdev3", 00:17:58.565 
"uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:17:58.565 "is_configured": true, 00:17:58.565 "data_offset": 0, 00:17:58.565 "data_size": 65536 00:17:58.565 }, 00:17:58.565 { 00:17:58.565 "name": "BaseBdev4", 00:17:58.566 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:17:58.566 "is_configured": true, 00:17:58.566 "data_offset": 0, 00:17:58.566 "data_size": 65536 00:17:58.566 } 00:17:58.566 ] 00:17:58.566 }' 00:17:58.566 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.566 02:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.132 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.132 02:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:59.132 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:17:59.132 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:59.389 [2024-05-15 02:18:47.359059] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.389 BaseBdev1 00:17:59.389 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:17:59.389 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:59.389 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:59.389 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:59.389 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:59.389 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:59.389 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.646 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:59.903 [ 00:17:59.903 { 00:17:59.903 "name": "BaseBdev1", 00:17:59.903 "aliases": [ 00:17:59.903 "760ee2e4-1261-11ef-99fd-bfc7c66e2865" 00:17:59.903 ], 00:17:59.903 "product_name": "Malloc disk", 00:17:59.903 "block_size": 512, 00:17:59.903 "num_blocks": 65536, 00:17:59.903 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:17:59.903 "assigned_rate_limits": { 00:17:59.903 "rw_ios_per_sec": 0, 00:17:59.903 "rw_mbytes_per_sec": 0, 00:17:59.903 "r_mbytes_per_sec": 0, 00:17:59.903 "w_mbytes_per_sec": 0 00:17:59.903 }, 00:17:59.903 "claimed": true, 00:17:59.903 "claim_type": "exclusive_write", 00:17:59.903 "zoned": false, 00:17:59.903 "supported_io_types": { 00:17:59.903 "read": true, 00:17:59.903 "write": true, 00:17:59.903 "unmap": true, 00:17:59.903 "write_zeroes": true, 00:17:59.903 "flush": true, 00:17:59.903 "reset": true, 00:17:59.903 "compare": false, 00:17:59.903 "compare_and_write": false, 00:17:59.903 "abort": true, 00:17:59.903 "nvme_admin": false, 00:17:59.903 "nvme_io": false 00:17:59.903 }, 00:17:59.903 "memory_domains": [ 00:17:59.903 { 00:17:59.903 
"dma_device_id": "system", 00:17:59.903 "dma_device_type": 1 00:17:59.903 }, 00:17:59.903 { 00:17:59.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.903 "dma_device_type": 2 00:17:59.903 } 00:17:59.903 ], 00:17:59.903 "driver_specific": {} 00:17:59.903 } 00:17:59.903 ] 00:17:59.903 02:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:59.903 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:59.903 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.904 02:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.161 02:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.161 "name": "Existed_Raid", 00:18:00.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.161 "strip_size_kb": 64, 00:18:00.161 "state": "configuring", 00:18:00.161 "raid_level": "raid0", 00:18:00.161 "superblock": false, 00:18:00.161 "num_base_bdevs": 4, 00:18:00.161 "num_base_bdevs_discovered": 3, 00:18:00.161 "num_base_bdevs_operational": 4, 00:18:00.161 "base_bdevs_list": [ 00:18:00.161 { 00:18:00.161 "name": "BaseBdev1", 00:18:00.161 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:00.161 "is_configured": true, 00:18:00.161 "data_offset": 0, 00:18:00.161 "data_size": 65536 00:18:00.161 }, 00:18:00.161 { 00:18:00.161 "name": null, 00:18:00.161 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:00.161 "is_configured": false, 00:18:00.161 "data_offset": 0, 00:18:00.161 "data_size": 65536 00:18:00.161 }, 00:18:00.161 { 00:18:00.161 "name": "BaseBdev3", 00:18:00.161 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:00.161 "is_configured": true, 00:18:00.161 "data_offset": 0, 00:18:00.161 "data_size": 65536 00:18:00.161 }, 00:18:00.161 { 00:18:00.161 "name": "BaseBdev4", 00:18:00.161 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:00.161 "is_configured": true, 00:18:00.161 "data_offset": 0, 00:18:00.161 "data_size": 65536 00:18:00.161 } 00:18:00.161 ] 00:18:00.161 }' 00:18:00.161 02:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.161 02:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.729 02:18:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.729 02:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:01.005 02:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:01.005 02:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:01.263 [2024-05-15 02:18:49.058860] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.263 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.521 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.521 "name": "Existed_Raid", 00:18:01.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.521 "strip_size_kb": 64, 00:18:01.521 "state": "configuring", 00:18:01.521 "raid_level": "raid0", 00:18:01.521 "superblock": false, 00:18:01.521 "num_base_bdevs": 4, 00:18:01.521 "num_base_bdevs_discovered": 2, 00:18:01.521 "num_base_bdevs_operational": 4, 00:18:01.521 "base_bdevs_list": [ 00:18:01.521 { 00:18:01.521 "name": "BaseBdev1", 00:18:01.521 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:01.521 "is_configured": true, 00:18:01.521 "data_offset": 0, 00:18:01.521 "data_size": 65536 00:18:01.521 }, 00:18:01.521 { 00:18:01.521 "name": null, 00:18:01.521 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:01.521 "is_configured": false, 00:18:01.521 "data_offset": 0, 00:18:01.521 "data_size": 65536 00:18:01.521 }, 00:18:01.521 { 00:18:01.521 "name": null, 00:18:01.521 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:01.521 "is_configured": false, 00:18:01.521 "data_offset": 0, 00:18:01.521 "data_size": 65536 00:18:01.521 }, 00:18:01.521 { 00:18:01.521 "name": "BaseBdev4", 00:18:01.521 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:01.521 "is_configured": true, 00:18:01.521 "data_offset": 0, 00:18:01.521 "data_size": 65536 00:18:01.521 } 00:18:01.521 ] 
00:18:01.521 }' 00:18:01.521 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.521 02:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.780 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.780 02:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.038 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:18:02.038 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:02.296 [2024-05-15 02:18:50.246836] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.296 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.554 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.554 "name": "Existed_Raid", 00:18:02.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.554 "strip_size_kb": 64, 00:18:02.554 "state": "configuring", 00:18:02.554 "raid_level": "raid0", 00:18:02.554 "superblock": false, 00:18:02.554 "num_base_bdevs": 4, 00:18:02.554 "num_base_bdevs_discovered": 3, 00:18:02.554 "num_base_bdevs_operational": 4, 00:18:02.554 "base_bdevs_list": [ 00:18:02.554 { 00:18:02.554 "name": "BaseBdev1", 00:18:02.554 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:02.554 "is_configured": true, 00:18:02.554 "data_offset": 0, 00:18:02.554 "data_size": 65536 00:18:02.554 }, 00:18:02.554 { 00:18:02.554 "name": null, 00:18:02.554 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:02.554 "is_configured": false, 00:18:02.554 "data_offset": 0, 00:18:02.554 "data_size": 65536 00:18:02.554 }, 00:18:02.554 { 00:18:02.554 "name": "BaseBdev3", 00:18:02.554 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:02.554 "is_configured": true, 
00:18:02.554 "data_offset": 0, 00:18:02.554 "data_size": 65536 00:18:02.554 }, 00:18:02.554 { 00:18:02.554 "name": "BaseBdev4", 00:18:02.554 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:02.554 "is_configured": true, 00:18:02.554 "data_offset": 0, 00:18:02.554 "data_size": 65536 00:18:02.554 } 00:18:02.554 ] 00:18:02.554 }' 00:18:02.554 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.554 02:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.812 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.812 02:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:03.379 [2024-05-15 02:18:51.310825] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.379 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.655 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.655 "name": "Existed_Raid", 00:18:03.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.655 "strip_size_kb": 64, 00:18:03.655 "state": "configuring", 00:18:03.655 "raid_level": "raid0", 00:18:03.655 "superblock": false, 00:18:03.655 "num_base_bdevs": 4, 00:18:03.655 "num_base_bdevs_discovered": 2, 00:18:03.655 "num_base_bdevs_operational": 4, 00:18:03.655 "base_bdevs_list": [ 00:18:03.655 { 00:18:03.655 "name": null, 00:18:03.655 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:03.655 "is_configured": false, 00:18:03.655 "data_offset": 0, 00:18:03.655 "data_size": 65536 00:18:03.655 }, 00:18:03.655 { 00:18:03.655 "name": null, 00:18:03.655 "uuid": 
"7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:03.655 "is_configured": false, 00:18:03.655 "data_offset": 0, 00:18:03.655 "data_size": 65536 00:18:03.655 }, 00:18:03.655 { 00:18:03.655 "name": "BaseBdev3", 00:18:03.655 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:03.655 "is_configured": true, 00:18:03.655 "data_offset": 0, 00:18:03.655 "data_size": 65536 00:18:03.655 }, 00:18:03.655 { 00:18:03.655 "name": "BaseBdev4", 00:18:03.655 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:03.655 "is_configured": true, 00:18:03.655 "data_offset": 0, 00:18:03.655 "data_size": 65536 00:18:03.655 } 00:18:03.655 ] 00:18:03.655 }' 00:18:03.655 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.655 02:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.913 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:03.913 02:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.171 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:18:04.171 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:04.428 [2024-05-15 02:18:52.363852] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.428 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.687 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.687 "name": "Existed_Raid", 00:18:04.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.687 "strip_size_kb": 64, 00:18:04.687 "state": "configuring", 00:18:04.687 "raid_level": "raid0", 00:18:04.687 "superblock": false, 00:18:04.687 "num_base_bdevs": 4, 00:18:04.687 "num_base_bdevs_discovered": 3, 00:18:04.687 "num_base_bdevs_operational": 4, 
00:18:04.687 "base_bdevs_list": [ 00:18:04.687 { 00:18:04.687 "name": null, 00:18:04.687 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:04.687 "is_configured": false, 00:18:04.687 "data_offset": 0, 00:18:04.687 "data_size": 65536 00:18:04.687 }, 00:18:04.687 { 00:18:04.687 "name": "BaseBdev2", 00:18:04.687 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:04.687 "is_configured": true, 00:18:04.687 "data_offset": 0, 00:18:04.687 "data_size": 65536 00:18:04.687 }, 00:18:04.687 { 00:18:04.687 "name": "BaseBdev3", 00:18:04.687 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:04.687 "is_configured": true, 00:18:04.687 "data_offset": 0, 00:18:04.687 "data_size": 65536 00:18:04.687 }, 00:18:04.687 { 00:18:04.687 "name": "BaseBdev4", 00:18:04.687 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:04.687 "is_configured": true, 00:18:04.687 "data_offset": 0, 00:18:04.687 "data_size": 65536 00:18:04.687 } 00:18:04.687 ] 00:18:04.687 }' 00:18:04.687 02:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.687 02:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.253 02:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.253 02:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:05.511 02:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:18:05.511 02:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.511 02:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:05.511 02:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 760ee2e4-1261-11ef-99fd-bfc7c66e2865 00:18:06.077 [2024-05-15 02:18:53.803933] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:06.077 [2024-05-15 02:18:53.803966] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7f7f00 00:18:06.077 [2024-05-15 02:18:53.803971] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:06.077 [2024-05-15 02:18:53.804005] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c85ae20 00:18:06.077 [2024-05-15 02:18:53.804067] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7f7f00 00:18:06.077 [2024-05-15 02:18:53.804071] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c7f7f00 00:18:06.077 [2024-05-15 02:18:53.804105] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.077 NewBaseBdev 00:18:06.077 02:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:18:06.077 02:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:06.077 02:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:06.077 02:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:06.077 02:18:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:06.077 02:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:06.077 02:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.077 02:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:06.335 [ 00:18:06.335 { 00:18:06.335 "name": "NewBaseBdev", 00:18:06.335 "aliases": [ 00:18:06.335 "760ee2e4-1261-11ef-99fd-bfc7c66e2865" 00:18:06.335 ], 00:18:06.335 "product_name": "Malloc disk", 00:18:06.335 "block_size": 512, 00:18:06.335 "num_blocks": 65536, 00:18:06.335 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:06.335 "assigned_rate_limits": { 00:18:06.335 "rw_ios_per_sec": 0, 00:18:06.335 "rw_mbytes_per_sec": 0, 00:18:06.335 "r_mbytes_per_sec": 0, 00:18:06.335 "w_mbytes_per_sec": 0 00:18:06.335 }, 00:18:06.335 "claimed": true, 00:18:06.335 "claim_type": "exclusive_write", 00:18:06.335 "zoned": false, 00:18:06.335 "supported_io_types": { 00:18:06.335 "read": true, 00:18:06.335 "write": true, 00:18:06.335 "unmap": true, 00:18:06.335 "write_zeroes": true, 00:18:06.335 "flush": true, 00:18:06.335 "reset": true, 00:18:06.335 "compare": false, 00:18:06.335 "compare_and_write": false, 00:18:06.335 "abort": true, 00:18:06.335 "nvme_admin": false, 00:18:06.335 "nvme_io": false 00:18:06.335 }, 00:18:06.335 "memory_domains": [ 00:18:06.335 { 00:18:06.335 "dma_device_id": "system", 00:18:06.335 "dma_device_type": 1 00:18:06.335 }, 00:18:06.335 { 00:18:06.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.335 "dma_device_type": 2 00:18:06.335 } 00:18:06.335 ], 00:18:06.335 "driver_specific": {} 00:18:06.335 } 00:18:06.335 ] 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.335 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.593 02:18:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.593 "name": "Existed_Raid", 00:18:06.593 "uuid": "79e651de-1261-11ef-99fd-bfc7c66e2865", 00:18:06.593 "strip_size_kb": 64, 00:18:06.593 "state": "online", 00:18:06.593 "raid_level": "raid0", 00:18:06.593 "superblock": false, 00:18:06.593 "num_base_bdevs": 4, 00:18:06.593 "num_base_bdevs_discovered": 4, 00:18:06.593 "num_base_bdevs_operational": 4, 00:18:06.593 "base_bdevs_list": [ 00:18:06.593 { 00:18:06.593 "name": "NewBaseBdev", 00:18:06.593 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:06.593 "is_configured": true, 00:18:06.594 "data_offset": 0, 00:18:06.594 "data_size": 65536 00:18:06.594 }, 00:18:06.594 { 00:18:06.594 "name": "BaseBdev2", 00:18:06.594 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:06.594 "is_configured": true, 00:18:06.594 "data_offset": 0, 00:18:06.594 "data_size": 65536 00:18:06.594 }, 00:18:06.594 { 00:18:06.594 "name": "BaseBdev3", 00:18:06.594 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:06.594 "is_configured": true, 00:18:06.594 "data_offset": 0, 00:18:06.594 "data_size": 65536 00:18:06.594 }, 00:18:06.594 { 00:18:06.594 "name": "BaseBdev4", 00:18:06.594 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:06.594 "is_configured": true, 00:18:06.594 "data_offset": 0, 00:18:06.594 "data_size": 65536 00:18:06.594 } 00:18:06.594 ] 00:18:06.594 }' 00:18:06.594 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.594 02:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.159 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:18:07.159 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:07.159 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:07.159 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:07.160 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:07.160 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:18:07.160 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:07.160 02:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:07.418 [2024-05-15 02:18:55.211829] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.418 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:07.418 "name": "Existed_Raid", 00:18:07.418 "aliases": [ 00:18:07.418 "79e651de-1261-11ef-99fd-bfc7c66e2865" 00:18:07.418 ], 00:18:07.418 "product_name": "Raid Volume", 00:18:07.418 "block_size": 512, 00:18:07.418 "num_blocks": 262144, 00:18:07.418 "uuid": "79e651de-1261-11ef-99fd-bfc7c66e2865", 00:18:07.418 "assigned_rate_limits": { 00:18:07.418 "rw_ios_per_sec": 0, 00:18:07.418 "rw_mbytes_per_sec": 0, 00:18:07.418 "r_mbytes_per_sec": 0, 00:18:07.418 "w_mbytes_per_sec": 0 00:18:07.418 }, 00:18:07.418 "claimed": false, 00:18:07.418 "zoned": false, 00:18:07.418 "supported_io_types": { 00:18:07.418 "read": true, 00:18:07.418 "write": true, 00:18:07.418 "unmap": true, 00:18:07.418 "write_zeroes": true, 00:18:07.418 "flush": true, 00:18:07.418 
"reset": true, 00:18:07.418 "compare": false, 00:18:07.418 "compare_and_write": false, 00:18:07.418 "abort": false, 00:18:07.418 "nvme_admin": false, 00:18:07.418 "nvme_io": false 00:18:07.418 }, 00:18:07.418 "memory_domains": [ 00:18:07.418 { 00:18:07.418 "dma_device_id": "system", 00:18:07.418 "dma_device_type": 1 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.418 "dma_device_type": 2 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "dma_device_id": "system", 00:18:07.418 "dma_device_type": 1 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.418 "dma_device_type": 2 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "dma_device_id": "system", 00:18:07.418 "dma_device_type": 1 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.418 "dma_device_type": 2 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "dma_device_id": "system", 00:18:07.418 "dma_device_type": 1 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.418 "dma_device_type": 2 00:18:07.418 } 00:18:07.418 ], 00:18:07.418 "driver_specific": { 00:18:07.418 "raid": { 00:18:07.418 "uuid": "79e651de-1261-11ef-99fd-bfc7c66e2865", 00:18:07.418 "strip_size_kb": 64, 00:18:07.418 "state": "online", 00:18:07.418 "raid_level": "raid0", 00:18:07.418 "superblock": false, 00:18:07.418 "num_base_bdevs": 4, 00:18:07.418 "num_base_bdevs_discovered": 4, 00:18:07.418 "num_base_bdevs_operational": 4, 00:18:07.418 "base_bdevs_list": [ 00:18:07.418 { 00:18:07.418 "name": "NewBaseBdev", 00:18:07.418 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:07.418 "is_configured": true, 00:18:07.418 "data_offset": 0, 00:18:07.418 "data_size": 65536 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "name": "BaseBdev2", 00:18:07.418 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:07.418 "is_configured": true, 00:18:07.418 "data_offset": 0, 00:18:07.418 "data_size": 65536 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "name": "BaseBdev3", 00:18:07.418 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:07.418 "is_configured": true, 00:18:07.418 "data_offset": 0, 00:18:07.418 "data_size": 65536 00:18:07.418 }, 00:18:07.418 { 00:18:07.418 "name": "BaseBdev4", 00:18:07.418 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:07.418 "is_configured": true, 00:18:07.418 "data_offset": 0, 00:18:07.418 "data_size": 65536 00:18:07.418 } 00:18:07.418 ] 00:18:07.418 } 00:18:07.418 } 00:18:07.418 }' 00:18:07.418 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.418 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:18:07.418 BaseBdev2 00:18:07.418 BaseBdev3 00:18:07.418 BaseBdev4' 00:18:07.418 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:07.418 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:07.418 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:07.677 "name": "NewBaseBdev", 00:18:07.677 "aliases": [ 00:18:07.677 "760ee2e4-1261-11ef-99fd-bfc7c66e2865" 00:18:07.677 ], 00:18:07.677 "product_name": "Malloc 
disk", 00:18:07.677 "block_size": 512, 00:18:07.677 "num_blocks": 65536, 00:18:07.677 "uuid": "760ee2e4-1261-11ef-99fd-bfc7c66e2865", 00:18:07.677 "assigned_rate_limits": { 00:18:07.677 "rw_ios_per_sec": 0, 00:18:07.677 "rw_mbytes_per_sec": 0, 00:18:07.677 "r_mbytes_per_sec": 0, 00:18:07.677 "w_mbytes_per_sec": 0 00:18:07.677 }, 00:18:07.677 "claimed": true, 00:18:07.677 "claim_type": "exclusive_write", 00:18:07.677 "zoned": false, 00:18:07.677 "supported_io_types": { 00:18:07.677 "read": true, 00:18:07.677 "write": true, 00:18:07.677 "unmap": true, 00:18:07.677 "write_zeroes": true, 00:18:07.677 "flush": true, 00:18:07.677 "reset": true, 00:18:07.677 "compare": false, 00:18:07.677 "compare_and_write": false, 00:18:07.677 "abort": true, 00:18:07.677 "nvme_admin": false, 00:18:07.677 "nvme_io": false 00:18:07.677 }, 00:18:07.677 "memory_domains": [ 00:18:07.677 { 00:18:07.677 "dma_device_id": "system", 00:18:07.677 "dma_device_type": 1 00:18:07.677 }, 00:18:07.677 { 00:18:07.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.677 "dma_device_type": 2 00:18:07.677 } 00:18:07.677 ], 00:18:07.677 "driver_specific": {} 00:18:07.677 }' 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:07.677 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:07.935 "name": "BaseBdev2", 00:18:07.935 "aliases": [ 00:18:07.935 "7362b805-1261-11ef-99fd-bfc7c66e2865" 00:18:07.935 ], 00:18:07.935 "product_name": "Malloc disk", 00:18:07.935 "block_size": 512, 00:18:07.935 "num_blocks": 65536, 00:18:07.935 "uuid": "7362b805-1261-11ef-99fd-bfc7c66e2865", 00:18:07.935 "assigned_rate_limits": { 00:18:07.935 "rw_ios_per_sec": 0, 00:18:07.935 "rw_mbytes_per_sec": 0, 00:18:07.935 "r_mbytes_per_sec": 0, 00:18:07.935 "w_mbytes_per_sec": 0 00:18:07.935 }, 00:18:07.935 "claimed": true, 00:18:07.935 "claim_type": "exclusive_write", 00:18:07.935 "zoned": false, 00:18:07.935 "supported_io_types": { 00:18:07.935 "read": 
true, 00:18:07.935 "write": true, 00:18:07.935 "unmap": true, 00:18:07.935 "write_zeroes": true, 00:18:07.935 "flush": true, 00:18:07.935 "reset": true, 00:18:07.935 "compare": false, 00:18:07.935 "compare_and_write": false, 00:18:07.935 "abort": true, 00:18:07.935 "nvme_admin": false, 00:18:07.935 "nvme_io": false 00:18:07.935 }, 00:18:07.935 "memory_domains": [ 00:18:07.935 { 00:18:07.935 "dma_device_id": "system", 00:18:07.935 "dma_device_type": 1 00:18:07.935 }, 00:18:07.935 { 00:18:07.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.935 "dma_device_type": 2 00:18:07.935 } 00:18:07.935 ], 00:18:07.935 "driver_specific": {} 00:18:07.935 }' 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:07.935 02:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:08.194 "name": "BaseBdev3", 00:18:08.194 "aliases": [ 00:18:08.194 "73e553fb-1261-11ef-99fd-bfc7c66e2865" 00:18:08.194 ], 00:18:08.194 "product_name": "Malloc disk", 00:18:08.194 "block_size": 512, 00:18:08.194 "num_blocks": 65536, 00:18:08.194 "uuid": "73e553fb-1261-11ef-99fd-bfc7c66e2865", 00:18:08.194 "assigned_rate_limits": { 00:18:08.194 "rw_ios_per_sec": 0, 00:18:08.194 "rw_mbytes_per_sec": 0, 00:18:08.194 "r_mbytes_per_sec": 0, 00:18:08.194 "w_mbytes_per_sec": 0 00:18:08.194 }, 00:18:08.194 "claimed": true, 00:18:08.194 "claim_type": "exclusive_write", 00:18:08.194 "zoned": false, 00:18:08.194 "supported_io_types": { 00:18:08.194 "read": true, 00:18:08.194 "write": true, 00:18:08.194 "unmap": true, 00:18:08.194 "write_zeroes": true, 00:18:08.194 "flush": true, 00:18:08.194 "reset": true, 00:18:08.194 "compare": false, 00:18:08.194 "compare_and_write": false, 00:18:08.194 "abort": true, 00:18:08.194 "nvme_admin": false, 00:18:08.194 "nvme_io": false 00:18:08.194 }, 00:18:08.194 "memory_domains": [ 00:18:08.194 { 00:18:08.194 "dma_device_id": "system", 00:18:08.194 "dma_device_type": 1 00:18:08.194 }, 00:18:08.194 { 
00:18:08.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.194 "dma_device_type": 2 00:18:08.194 } 00:18:08.194 ], 00:18:08.194 "driver_specific": {} 00:18:08.194 }' 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:08.194 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:08.452 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:08.710 "name": "BaseBdev4", 00:18:08.710 "aliases": [ 00:18:08.710 "745d8e89-1261-11ef-99fd-bfc7c66e2865" 00:18:08.710 ], 00:18:08.710 "product_name": "Malloc disk", 00:18:08.710 "block_size": 512, 00:18:08.710 "num_blocks": 65536, 00:18:08.710 "uuid": "745d8e89-1261-11ef-99fd-bfc7c66e2865", 00:18:08.710 "assigned_rate_limits": { 00:18:08.710 "rw_ios_per_sec": 0, 00:18:08.710 "rw_mbytes_per_sec": 0, 00:18:08.710 "r_mbytes_per_sec": 0, 00:18:08.710 "w_mbytes_per_sec": 0 00:18:08.710 }, 00:18:08.710 "claimed": true, 00:18:08.710 "claim_type": "exclusive_write", 00:18:08.710 "zoned": false, 00:18:08.710 "supported_io_types": { 00:18:08.710 "read": true, 00:18:08.710 "write": true, 00:18:08.710 "unmap": true, 00:18:08.710 "write_zeroes": true, 00:18:08.710 "flush": true, 00:18:08.710 "reset": true, 00:18:08.710 "compare": false, 00:18:08.710 "compare_and_write": false, 00:18:08.710 "abort": true, 00:18:08.710 "nvme_admin": false, 00:18:08.710 "nvme_io": false 00:18:08.710 }, 00:18:08.710 "memory_domains": [ 00:18:08.710 { 00:18:08.710 "dma_device_id": "system", 00:18:08.710 "dma_device_type": 1 00:18:08.710 }, 00:18:08.710 { 00:18:08.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.710 "dma_device_type": 2 00:18:08.710 } 00:18:08.710 ], 00:18:08.710 "driver_specific": {} 00:18:08.710 }' 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:08.710 
02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:08.710 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:08.968 [2024-05-15 02:18:56.767749] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.968 [2024-05-15 02:18:56.767780] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.968 [2024-05-15 02:18:56.767804] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.968 [2024-05-15 02:18:56.767820] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.968 [2024-05-15 02:18:56.767826] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7f7f00 name Existed_Raid, state offline 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 57242 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 57242 ']' 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 57242 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 57242 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:18:08.968 killing process with pid 57242 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57242' 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 57242 00:18:08.968 [2024-05-15 02:18:56.801426] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 57242 00:18:08.968 [2024-05-15 02:18:56.821154] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:18:08.968 00:18:08.968 real 0m28.725s 00:18:08.968 user 0m52.992s 
00:18:08.968 sys 0m3.645s 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:08.968 02:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.968 ************************************ 00:18:08.968 END TEST raid_state_function_test 00:18:08.968 ************************************ 00:18:09.228 02:18:57 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:09.228 02:18:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:09.228 02:18:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:09.228 02:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.228 ************************************ 00:18:09.228 START TEST raid_state_function_test_sb 00:18:09.228 ************************************ 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=58065 00:18:09.228 Process raid pid: 58065 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 58065' 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 58065 /var/tmp/spdk-raid.sock 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 58065 ']' 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:09.228 02:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.228 [2024-05-15 02:18:57.028331] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:09.228 [2024-05-15 02:18:57.028615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:09.794 EAL: TSC is not safe to use in SMP mode 00:18:09.794 EAL: TSC is not invariant 00:18:09.794 [2024-05-15 02:18:57.524104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.794 [2024-05-15 02:18:57.609794] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
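The _sb variant that starts here re-runs the same state-function test with superblock=true: the trace above shows superblock_create_arg being set to -s, so every array this test builds carries an on-disk superblock (which is why the later Existed_Raid dumps report "superblock": true and a non-zero data_offset). The create call the script issues, as seen a little further down in this trace, has the shape:

    # Create a 4-disk raid0 with a 64 KiB strip size; -s asks for an on-disk superblock.
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid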
00:18:09.794 [2024-05-15 02:18:57.611981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.794 [2024-05-15 02:18:57.612698] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.794 [2024-05-15 02:18:57.612712] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.359 02:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:10.359 02:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:18:10.359 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:10.618 [2024-05-15 02:18:58.571878] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:10.618 [2024-05-15 02:18:58.571956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:10.618 [2024-05-15 02:18:58.571961] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.618 [2024-05-15 02:18:58.571970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.618 [2024-05-15 02:18:58.571974] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:10.618 [2024-05-15 02:18:58.571982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:10.618 [2024-05-15 02:18:58.571985] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:10.618 [2024-05-15 02:18:58.571993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.618 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.876 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.876 "name": "Existed_Raid", 00:18:10.876 "uuid": 
"7cbdd7ac-1261-11ef-99fd-bfc7c66e2865", 00:18:10.876 "strip_size_kb": 64, 00:18:10.876 "state": "configuring", 00:18:10.876 "raid_level": "raid0", 00:18:10.876 "superblock": true, 00:18:10.876 "num_base_bdevs": 4, 00:18:10.876 "num_base_bdevs_discovered": 0, 00:18:10.876 "num_base_bdevs_operational": 4, 00:18:10.876 "base_bdevs_list": [ 00:18:10.876 { 00:18:10.877 "name": "BaseBdev1", 00:18:10.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.877 "is_configured": false, 00:18:10.877 "data_offset": 0, 00:18:10.877 "data_size": 0 00:18:10.877 }, 00:18:10.877 { 00:18:10.877 "name": "BaseBdev2", 00:18:10.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.877 "is_configured": false, 00:18:10.877 "data_offset": 0, 00:18:10.877 "data_size": 0 00:18:10.877 }, 00:18:10.877 { 00:18:10.877 "name": "BaseBdev3", 00:18:10.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.877 "is_configured": false, 00:18:10.877 "data_offset": 0, 00:18:10.877 "data_size": 0 00:18:10.877 }, 00:18:10.877 { 00:18:10.877 "name": "BaseBdev4", 00:18:10.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.877 "is_configured": false, 00:18:10.877 "data_offset": 0, 00:18:10.877 "data_size": 0 00:18:10.877 } 00:18:10.877 ] 00:18:10.877 }' 00:18:10.877 02:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.877 02:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.442 02:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:11.701 [2024-05-15 02:18:59.479836] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:11.701 [2024-05-15 02:18:59.479870] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82833e500 name Existed_Raid, state configuring 00:18:11.701 02:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:11.701 [2024-05-15 02:18:59.703846] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.701 [2024-05-15 02:18:59.703912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.701 [2024-05-15 02:18:59.703918] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.701 [2024-05-15 02:18:59.703926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.701 [2024-05-15 02:18:59.703930] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:11.701 [2024-05-15 02:18:59.703937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:11.701 [2024-05-15 02:18:59.703940] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:11.701 [2024-05-15 02:18:59.703947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:11.962 02:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:12.220 [2024-05-15 02:18:59.980815] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:18:12.220 BaseBdev1 00:18:12.220 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:18:12.220 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:12.220 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:12.220 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:12.220 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:12.220 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:12.220 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.478 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:12.737 [ 00:18:12.737 { 00:18:12.737 "name": "BaseBdev1", 00:18:12.737 "aliases": [ 00:18:12.737 "7d94ae70-1261-11ef-99fd-bfc7c66e2865" 00:18:12.737 ], 00:18:12.737 "product_name": "Malloc disk", 00:18:12.737 "block_size": 512, 00:18:12.737 "num_blocks": 65536, 00:18:12.737 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:12.737 "assigned_rate_limits": { 00:18:12.737 "rw_ios_per_sec": 0, 00:18:12.737 "rw_mbytes_per_sec": 0, 00:18:12.737 "r_mbytes_per_sec": 0, 00:18:12.737 "w_mbytes_per_sec": 0 00:18:12.737 }, 00:18:12.737 "claimed": true, 00:18:12.737 "claim_type": "exclusive_write", 00:18:12.737 "zoned": false, 00:18:12.737 "supported_io_types": { 00:18:12.737 "read": true, 00:18:12.737 "write": true, 00:18:12.737 "unmap": true, 00:18:12.737 "write_zeroes": true, 00:18:12.737 "flush": true, 00:18:12.737 "reset": true, 00:18:12.737 "compare": false, 00:18:12.737 "compare_and_write": false, 00:18:12.737 "abort": true, 00:18:12.737 "nvme_admin": false, 00:18:12.737 "nvme_io": false 00:18:12.737 }, 00:18:12.737 "memory_domains": [ 00:18:12.737 { 00:18:12.737 "dma_device_id": "system", 00:18:12.737 "dma_device_type": 1 00:18:12.737 }, 00:18:12.737 { 00:18:12.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.737 "dma_device_type": 2 00:18:12.737 } 00:18:12.737 ], 00:18:12.737 "driver_specific": {} 00:18:12.737 } 00:18:12.737 ] 00:18:12.737 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:12.737 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:12.737 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.737 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.737 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.738 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.996 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.996 "name": "Existed_Raid", 00:18:12.996 "uuid": "7d6a912a-1261-11ef-99fd-bfc7c66e2865", 00:18:12.996 "strip_size_kb": 64, 00:18:12.996 "state": "configuring", 00:18:12.996 "raid_level": "raid0", 00:18:12.996 "superblock": true, 00:18:12.996 "num_base_bdevs": 4, 00:18:12.996 "num_base_bdevs_discovered": 1, 00:18:12.996 "num_base_bdevs_operational": 4, 00:18:12.996 "base_bdevs_list": [ 00:18:12.996 { 00:18:12.996 "name": "BaseBdev1", 00:18:12.996 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:12.996 "is_configured": true, 00:18:12.996 "data_offset": 2048, 00:18:12.996 "data_size": 63488 00:18:12.996 }, 00:18:12.996 { 00:18:12.996 "name": "BaseBdev2", 00:18:12.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.996 "is_configured": false, 00:18:12.996 "data_offset": 0, 00:18:12.996 "data_size": 0 00:18:12.996 }, 00:18:12.996 { 00:18:12.996 "name": "BaseBdev3", 00:18:12.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.996 "is_configured": false, 00:18:12.996 "data_offset": 0, 00:18:12.996 "data_size": 0 00:18:12.996 }, 00:18:12.996 { 00:18:12.996 "name": "BaseBdev4", 00:18:12.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.996 "is_configured": false, 00:18:12.996 "data_offset": 0, 00:18:12.996 "data_size": 0 00:18:12.996 } 00:18:12.996 ] 00:18:12.996 }' 00:18:12.996 02:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.996 02:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.254 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:13.512 [2024-05-15 02:19:01.315843] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.512 [2024-05-15 02:19:01.315897] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82833e500 name Existed_Raid, state configuring 00:18:13.512 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:13.770 [2024-05-15 02:19:01.579856] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.770 [2024-05-15 02:19:01.580622] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.770 [2024-05-15 02:19:01.580667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.770 [2024-05-15 02:19:01.580672] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.770 [2024-05-15 02:19:01.580681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:18:13.770 [2024-05-15 02:19:01.580684] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:13.770 [2024-05-15 02:19:01.580692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.770 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.028 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.028 "name": "Existed_Raid", 00:18:14.028 "uuid": "7e88d291-1261-11ef-99fd-bfc7c66e2865", 00:18:14.028 "strip_size_kb": 64, 00:18:14.028 "state": "configuring", 00:18:14.028 "raid_level": "raid0", 00:18:14.028 "superblock": true, 00:18:14.028 "num_base_bdevs": 4, 00:18:14.028 "num_base_bdevs_discovered": 1, 00:18:14.028 "num_base_bdevs_operational": 4, 00:18:14.028 "base_bdevs_list": [ 00:18:14.028 { 00:18:14.028 "name": "BaseBdev1", 00:18:14.028 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:14.028 "is_configured": true, 00:18:14.028 "data_offset": 2048, 00:18:14.028 "data_size": 63488 00:18:14.028 }, 00:18:14.028 { 00:18:14.028 "name": "BaseBdev2", 00:18:14.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.028 "is_configured": false, 00:18:14.028 "data_offset": 0, 00:18:14.028 "data_size": 0 00:18:14.028 }, 00:18:14.028 { 00:18:14.028 "name": "BaseBdev3", 00:18:14.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.028 "is_configured": false, 00:18:14.028 "data_offset": 0, 00:18:14.028 "data_size": 0 00:18:14.028 }, 00:18:14.028 { 00:18:14.028 "name": "BaseBdev4", 00:18:14.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.028 "is_configured": false, 00:18:14.028 "data_offset": 0, 00:18:14.028 "data_size": 0 00:18:14.028 } 00:18:14.028 ] 00:18:14.028 }' 00:18:14.028 02:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
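Each verify_raid_bdev_state call in this trace reduces to the same two steps: dump all raid bdevs over RPC, select the one under test by name, then compare its fields against the expected state/level/strip-size/bdev-count arguments. A rough sketch of that check, built only from the commands and fields visible above (the bare comparisons stand in for the helper's real assertions and error reporting):

    # Fetch the raid bdev under test and assert it is still assembling (state "configuring").
    tmp=$(/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state       <<< "$tmp") == configuring ]]
    [[ $(jq -r .raid_level  <<< "$tmp") == raid0 ]]
    [[ $(jq .strip_size_kb  <<< "$tmp") == 64 ]]
    [[ $(jq .num_base_bdevs <<< "$tmp") == 4 ]]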
00:18:14.028 02:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.286 02:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.543 [2024-05-15 02:19:02.507981] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.543 BaseBdev2 00:18:14.543 02:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:18:14.543 02:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:14.543 02:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:14.543 02:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:14.543 02:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:14.543 02:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:14.543 02:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.801 02:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.059 [ 00:18:15.059 { 00:18:15.059 "name": "BaseBdev2", 00:18:15.059 "aliases": [ 00:18:15.059 "7f166cbe-1261-11ef-99fd-bfc7c66e2865" 00:18:15.059 ], 00:18:15.059 "product_name": "Malloc disk", 00:18:15.059 "block_size": 512, 00:18:15.059 "num_blocks": 65536, 00:18:15.059 "uuid": "7f166cbe-1261-11ef-99fd-bfc7c66e2865", 00:18:15.059 "assigned_rate_limits": { 00:18:15.059 "rw_ios_per_sec": 0, 00:18:15.059 "rw_mbytes_per_sec": 0, 00:18:15.059 "r_mbytes_per_sec": 0, 00:18:15.059 "w_mbytes_per_sec": 0 00:18:15.059 }, 00:18:15.059 "claimed": true, 00:18:15.059 "claim_type": "exclusive_write", 00:18:15.059 "zoned": false, 00:18:15.059 "supported_io_types": { 00:18:15.059 "read": true, 00:18:15.059 "write": true, 00:18:15.059 "unmap": true, 00:18:15.059 "write_zeroes": true, 00:18:15.059 "flush": true, 00:18:15.059 "reset": true, 00:18:15.059 "compare": false, 00:18:15.059 "compare_and_write": false, 00:18:15.059 "abort": true, 00:18:15.059 "nvme_admin": false, 00:18:15.059 "nvme_io": false 00:18:15.059 }, 00:18:15.059 "memory_domains": [ 00:18:15.059 { 00:18:15.059 "dma_device_id": "system", 00:18:15.059 "dma_device_type": 1 00:18:15.059 }, 00:18:15.059 { 00:18:15.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.059 "dma_device_type": 2 00:18:15.059 } 00:18:15.059 ], 00:18:15.059 "driver_specific": {} 00:18:15.059 } 00:18:15.059 ] 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.059 02:19:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.059 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.318 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.318 "name": "Existed_Raid", 00:18:15.318 "uuid": "7e88d291-1261-11ef-99fd-bfc7c66e2865", 00:18:15.318 "strip_size_kb": 64, 00:18:15.318 "state": "configuring", 00:18:15.318 "raid_level": "raid0", 00:18:15.318 "superblock": true, 00:18:15.318 "num_base_bdevs": 4, 00:18:15.318 "num_base_bdevs_discovered": 2, 00:18:15.318 "num_base_bdevs_operational": 4, 00:18:15.318 "base_bdevs_list": [ 00:18:15.318 { 00:18:15.318 "name": "BaseBdev1", 00:18:15.318 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:15.318 "is_configured": true, 00:18:15.318 "data_offset": 2048, 00:18:15.318 "data_size": 63488 00:18:15.318 }, 00:18:15.318 { 00:18:15.318 "name": "BaseBdev2", 00:18:15.318 "uuid": "7f166cbe-1261-11ef-99fd-bfc7c66e2865", 00:18:15.318 "is_configured": true, 00:18:15.318 "data_offset": 2048, 00:18:15.318 "data_size": 63488 00:18:15.318 }, 00:18:15.318 { 00:18:15.318 "name": "BaseBdev3", 00:18:15.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.318 "is_configured": false, 00:18:15.318 "data_offset": 0, 00:18:15.318 "data_size": 0 00:18:15.318 }, 00:18:15.318 { 00:18:15.318 "name": "BaseBdev4", 00:18:15.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.318 "is_configured": false, 00:18:15.318 "data_offset": 0, 00:18:15.318 "data_size": 0 00:18:15.318 } 00:18:15.318 ] 00:18:15.318 }' 00:18:15.318 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.318 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.883 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:16.142 [2024-05-15 02:19:03.971955] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.142 BaseBdev3 00:18:16.142 02:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:18:16.142 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:16.142 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:18:16.142 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:16.142 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:16.142 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:16.142 02:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.400 02:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:16.658 [ 00:18:16.658 { 00:18:16.658 "name": "BaseBdev3", 00:18:16.658 "aliases": [ 00:18:16.658 "7ff5d05b-1261-11ef-99fd-bfc7c66e2865" 00:18:16.658 ], 00:18:16.658 "product_name": "Malloc disk", 00:18:16.658 "block_size": 512, 00:18:16.658 "num_blocks": 65536, 00:18:16.658 "uuid": "7ff5d05b-1261-11ef-99fd-bfc7c66e2865", 00:18:16.658 "assigned_rate_limits": { 00:18:16.658 "rw_ios_per_sec": 0, 00:18:16.658 "rw_mbytes_per_sec": 0, 00:18:16.658 "r_mbytes_per_sec": 0, 00:18:16.658 "w_mbytes_per_sec": 0 00:18:16.658 }, 00:18:16.658 "claimed": true, 00:18:16.658 "claim_type": "exclusive_write", 00:18:16.658 "zoned": false, 00:18:16.658 "supported_io_types": { 00:18:16.658 "read": true, 00:18:16.658 "write": true, 00:18:16.658 "unmap": true, 00:18:16.658 "write_zeroes": true, 00:18:16.658 "flush": true, 00:18:16.658 "reset": true, 00:18:16.658 "compare": false, 00:18:16.658 "compare_and_write": false, 00:18:16.658 "abort": true, 00:18:16.658 "nvme_admin": false, 00:18:16.658 "nvme_io": false 00:18:16.658 }, 00:18:16.658 "memory_domains": [ 00:18:16.658 { 00:18:16.658 "dma_device_id": "system", 00:18:16.658 "dma_device_type": 1 00:18:16.658 }, 00:18:16.658 { 00:18:16.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.658 "dma_device_type": 2 00:18:16.658 } 00:18:16.658 ], 00:18:16.658 "driver_specific": {} 00:18:16.658 } 00:18:16.658 ] 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.658 02:19:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.658 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.916 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.916 "name": "Existed_Raid", 00:18:16.916 "uuid": "7e88d291-1261-11ef-99fd-bfc7c66e2865", 00:18:16.916 "strip_size_kb": 64, 00:18:16.916 "state": "configuring", 00:18:16.916 "raid_level": "raid0", 00:18:16.916 "superblock": true, 00:18:16.916 "num_base_bdevs": 4, 00:18:16.916 "num_base_bdevs_discovered": 3, 00:18:16.916 "num_base_bdevs_operational": 4, 00:18:16.916 "base_bdevs_list": [ 00:18:16.916 { 00:18:16.916 "name": "BaseBdev1", 00:18:16.916 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:16.916 "is_configured": true, 00:18:16.916 "data_offset": 2048, 00:18:16.916 "data_size": 63488 00:18:16.916 }, 00:18:16.916 { 00:18:16.916 "name": "BaseBdev2", 00:18:16.916 "uuid": "7f166cbe-1261-11ef-99fd-bfc7c66e2865", 00:18:16.916 "is_configured": true, 00:18:16.916 "data_offset": 2048, 00:18:16.916 "data_size": 63488 00:18:16.916 }, 00:18:16.916 { 00:18:16.916 "name": "BaseBdev3", 00:18:16.916 "uuid": "7ff5d05b-1261-11ef-99fd-bfc7c66e2865", 00:18:16.916 "is_configured": true, 00:18:16.916 "data_offset": 2048, 00:18:16.916 "data_size": 63488 00:18:16.916 }, 00:18:16.916 { 00:18:16.916 "name": "BaseBdev4", 00:18:16.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.916 "is_configured": false, 00:18:16.916 "data_offset": 0, 00:18:16.916 "data_size": 0 00:18:16.916 } 00:18:16.916 ] 00:18:16.916 }' 00:18:16.916 02:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.916 02:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.480 02:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:17.737 [2024-05-15 02:19:05.503950] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:17.737 [2024-05-15 02:19:05.504021] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82833ea00 00:18:17.737 [2024-05-15 02:19:05.504027] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:17.737 [2024-05-15 02:19:05.504047] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8283a1ec0 00:18:17.737 [2024-05-15 02:19:05.504089] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82833ea00 00:18:17.738 [2024-05-15 02:19:05.504101] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82833ea00 00:18:17.738 [2024-05-15 02:19:05.504119] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.738 BaseBdev4 00:18:17.738 02:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:18:17.738 02:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:17.738 02:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:17.738 02:19:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:17.738 02:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:17.738 02:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:17.738 02:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.048 02:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:18.048 [ 00:18:18.048 { 00:18:18.048 "name": "BaseBdev4", 00:18:18.048 "aliases": [ 00:18:18.048 "80df93f6-1261-11ef-99fd-bfc7c66e2865" 00:18:18.048 ], 00:18:18.048 "product_name": "Malloc disk", 00:18:18.048 "block_size": 512, 00:18:18.048 "num_blocks": 65536, 00:18:18.048 "uuid": "80df93f6-1261-11ef-99fd-bfc7c66e2865", 00:18:18.048 "assigned_rate_limits": { 00:18:18.048 "rw_ios_per_sec": 0, 00:18:18.048 "rw_mbytes_per_sec": 0, 00:18:18.048 "r_mbytes_per_sec": 0, 00:18:18.048 "w_mbytes_per_sec": 0 00:18:18.048 }, 00:18:18.048 "claimed": true, 00:18:18.048 "claim_type": "exclusive_write", 00:18:18.048 "zoned": false, 00:18:18.048 "supported_io_types": { 00:18:18.048 "read": true, 00:18:18.048 "write": true, 00:18:18.048 "unmap": true, 00:18:18.048 "write_zeroes": true, 00:18:18.048 "flush": true, 00:18:18.048 "reset": true, 00:18:18.048 "compare": false, 00:18:18.048 "compare_and_write": false, 00:18:18.048 "abort": true, 00:18:18.048 "nvme_admin": false, 00:18:18.048 "nvme_io": false 00:18:18.048 }, 00:18:18.048 "memory_domains": [ 00:18:18.048 { 00:18:18.048 "dma_device_id": "system", 00:18:18.048 "dma_device_type": 1 00:18:18.048 }, 00:18:18.048 { 00:18:18.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.048 "dma_device_type": 2 00:18:18.048 } 00:18:18.048 ], 00:18:18.048 "driver_specific": {} 00:18:18.048 } 00:18:18.048 ] 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:18.306 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.307 "name": "Existed_Raid", 00:18:18.307 "uuid": "7e88d291-1261-11ef-99fd-bfc7c66e2865", 00:18:18.307 "strip_size_kb": 64, 00:18:18.307 "state": "online", 00:18:18.307 "raid_level": "raid0", 00:18:18.307 "superblock": true, 00:18:18.307 "num_base_bdevs": 4, 00:18:18.307 "num_base_bdevs_discovered": 4, 00:18:18.307 "num_base_bdevs_operational": 4, 00:18:18.307 "base_bdevs_list": [ 00:18:18.307 { 00:18:18.307 "name": "BaseBdev1", 00:18:18.307 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:18.307 "is_configured": true, 00:18:18.307 "data_offset": 2048, 00:18:18.307 "data_size": 63488 00:18:18.307 }, 00:18:18.307 { 00:18:18.307 "name": "BaseBdev2", 00:18:18.307 "uuid": "7f166cbe-1261-11ef-99fd-bfc7c66e2865", 00:18:18.307 "is_configured": true, 00:18:18.307 "data_offset": 2048, 00:18:18.307 "data_size": 63488 00:18:18.307 }, 00:18:18.307 { 00:18:18.307 "name": "BaseBdev3", 00:18:18.307 "uuid": "7ff5d05b-1261-11ef-99fd-bfc7c66e2865", 00:18:18.307 "is_configured": true, 00:18:18.307 "data_offset": 2048, 00:18:18.307 "data_size": 63488 00:18:18.307 }, 00:18:18.307 { 00:18:18.307 "name": "BaseBdev4", 00:18:18.307 "uuid": "80df93f6-1261-11ef-99fd-bfc7c66e2865", 00:18:18.307 "is_configured": true, 00:18:18.307 "data_offset": 2048, 00:18:18.307 "data_size": 63488 00:18:18.307 } 00:18:18.307 ] 00:18:18.307 }' 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.307 02:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:18.873 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:19.131 [2024-05-15 02:19:06.959885] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.131 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:19.131 "name": "Existed_Raid", 00:18:19.131 "aliases": [ 00:18:19.131 "7e88d291-1261-11ef-99fd-bfc7c66e2865" 00:18:19.131 ], 00:18:19.131 "product_name": "Raid Volume", 00:18:19.131 "block_size": 512, 00:18:19.131 "num_blocks": 253952, 00:18:19.131 "uuid": "7e88d291-1261-11ef-99fd-bfc7c66e2865", 00:18:19.131 
"assigned_rate_limits": { 00:18:19.131 "rw_ios_per_sec": 0, 00:18:19.131 "rw_mbytes_per_sec": 0, 00:18:19.131 "r_mbytes_per_sec": 0, 00:18:19.131 "w_mbytes_per_sec": 0 00:18:19.131 }, 00:18:19.131 "claimed": false, 00:18:19.132 "zoned": false, 00:18:19.132 "supported_io_types": { 00:18:19.132 "read": true, 00:18:19.132 "write": true, 00:18:19.132 "unmap": true, 00:18:19.132 "write_zeroes": true, 00:18:19.132 "flush": true, 00:18:19.132 "reset": true, 00:18:19.132 "compare": false, 00:18:19.132 "compare_and_write": false, 00:18:19.132 "abort": false, 00:18:19.132 "nvme_admin": false, 00:18:19.132 "nvme_io": false 00:18:19.132 }, 00:18:19.132 "memory_domains": [ 00:18:19.132 { 00:18:19.132 "dma_device_id": "system", 00:18:19.132 "dma_device_type": 1 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.132 "dma_device_type": 2 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "dma_device_id": "system", 00:18:19.132 "dma_device_type": 1 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.132 "dma_device_type": 2 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "dma_device_id": "system", 00:18:19.132 "dma_device_type": 1 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.132 "dma_device_type": 2 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "dma_device_id": "system", 00:18:19.132 "dma_device_type": 1 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.132 "dma_device_type": 2 00:18:19.132 } 00:18:19.132 ], 00:18:19.132 "driver_specific": { 00:18:19.132 "raid": { 00:18:19.132 "uuid": "7e88d291-1261-11ef-99fd-bfc7c66e2865", 00:18:19.132 "strip_size_kb": 64, 00:18:19.132 "state": "online", 00:18:19.132 "raid_level": "raid0", 00:18:19.132 "superblock": true, 00:18:19.132 "num_base_bdevs": 4, 00:18:19.132 "num_base_bdevs_discovered": 4, 00:18:19.132 "num_base_bdevs_operational": 4, 00:18:19.132 "base_bdevs_list": [ 00:18:19.132 { 00:18:19.132 "name": "BaseBdev1", 00:18:19.132 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:19.132 "is_configured": true, 00:18:19.132 "data_offset": 2048, 00:18:19.132 "data_size": 63488 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "name": "BaseBdev2", 00:18:19.132 "uuid": "7f166cbe-1261-11ef-99fd-bfc7c66e2865", 00:18:19.132 "is_configured": true, 00:18:19.132 "data_offset": 2048, 00:18:19.132 "data_size": 63488 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "name": "BaseBdev3", 00:18:19.132 "uuid": "7ff5d05b-1261-11ef-99fd-bfc7c66e2865", 00:18:19.132 "is_configured": true, 00:18:19.132 "data_offset": 2048, 00:18:19.132 "data_size": 63488 00:18:19.132 }, 00:18:19.132 { 00:18:19.132 "name": "BaseBdev4", 00:18:19.132 "uuid": "80df93f6-1261-11ef-99fd-bfc7c66e2865", 00:18:19.132 "is_configured": true, 00:18:19.132 "data_offset": 2048, 00:18:19.132 "data_size": 63488 00:18:19.132 } 00:18:19.132 ] 00:18:19.132 } 00:18:19.132 } 00:18:19.132 }' 00:18:19.132 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.132 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:18:19.132 BaseBdev2 00:18:19.132 BaseBdev3 00:18:19.132 BaseBdev4' 00:18:19.132 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:19.132 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:19.132 02:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:19.393 "name": "BaseBdev1", 00:18:19.393 "aliases": [ 00:18:19.393 "7d94ae70-1261-11ef-99fd-bfc7c66e2865" 00:18:19.393 ], 00:18:19.393 "product_name": "Malloc disk", 00:18:19.393 "block_size": 512, 00:18:19.393 "num_blocks": 65536, 00:18:19.393 "uuid": "7d94ae70-1261-11ef-99fd-bfc7c66e2865", 00:18:19.393 "assigned_rate_limits": { 00:18:19.393 "rw_ios_per_sec": 0, 00:18:19.393 "rw_mbytes_per_sec": 0, 00:18:19.393 "r_mbytes_per_sec": 0, 00:18:19.393 "w_mbytes_per_sec": 0 00:18:19.393 }, 00:18:19.393 "claimed": true, 00:18:19.393 "claim_type": "exclusive_write", 00:18:19.393 "zoned": false, 00:18:19.393 "supported_io_types": { 00:18:19.393 "read": true, 00:18:19.393 "write": true, 00:18:19.393 "unmap": true, 00:18:19.393 "write_zeroes": true, 00:18:19.393 "flush": true, 00:18:19.393 "reset": true, 00:18:19.393 "compare": false, 00:18:19.393 "compare_and_write": false, 00:18:19.393 "abort": true, 00:18:19.393 "nvme_admin": false, 00:18:19.393 "nvme_io": false 00:18:19.393 }, 00:18:19.393 "memory_domains": [ 00:18:19.393 { 00:18:19.393 "dma_device_id": "system", 00:18:19.393 "dma_device_type": 1 00:18:19.393 }, 00:18:19.393 { 00:18:19.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.393 "dma_device_type": 2 00:18:19.393 } 00:18:19.393 ], 00:18:19.393 "driver_specific": {} 00:18:19.393 }' 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:19.393 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:19.652 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:19.652 "name": "BaseBdev2", 00:18:19.652 "aliases": [ 00:18:19.652 "7f166cbe-1261-11ef-99fd-bfc7c66e2865" 00:18:19.652 ], 
00:18:19.652 "product_name": "Malloc disk", 00:18:19.652 "block_size": 512, 00:18:19.652 "num_blocks": 65536, 00:18:19.652 "uuid": "7f166cbe-1261-11ef-99fd-bfc7c66e2865", 00:18:19.652 "assigned_rate_limits": { 00:18:19.652 "rw_ios_per_sec": 0, 00:18:19.652 "rw_mbytes_per_sec": 0, 00:18:19.652 "r_mbytes_per_sec": 0, 00:18:19.652 "w_mbytes_per_sec": 0 00:18:19.652 }, 00:18:19.652 "claimed": true, 00:18:19.652 "claim_type": "exclusive_write", 00:18:19.652 "zoned": false, 00:18:19.652 "supported_io_types": { 00:18:19.652 "read": true, 00:18:19.652 "write": true, 00:18:19.652 "unmap": true, 00:18:19.652 "write_zeroes": true, 00:18:19.652 "flush": true, 00:18:19.652 "reset": true, 00:18:19.652 "compare": false, 00:18:19.652 "compare_and_write": false, 00:18:19.652 "abort": true, 00:18:19.652 "nvme_admin": false, 00:18:19.652 "nvme_io": false 00:18:19.652 }, 00:18:19.652 "memory_domains": [ 00:18:19.652 { 00:18:19.652 "dma_device_id": "system", 00:18:19.652 "dma_device_type": 1 00:18:19.652 }, 00:18:19.652 { 00:18:19.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.652 "dma_device_type": 2 00:18:19.652 } 00:18:19.652 ], 00:18:19.652 "driver_specific": {} 00:18:19.652 }' 00:18:19.652 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.652 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.652 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:19.653 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:19.910 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:19.910 "name": "BaseBdev3", 00:18:19.910 "aliases": [ 00:18:19.910 "7ff5d05b-1261-11ef-99fd-bfc7c66e2865" 00:18:19.910 ], 00:18:19.910 "product_name": "Malloc disk", 00:18:19.910 "block_size": 512, 00:18:19.910 "num_blocks": 65536, 00:18:19.910 "uuid": "7ff5d05b-1261-11ef-99fd-bfc7c66e2865", 00:18:19.910 "assigned_rate_limits": { 00:18:19.910 "rw_ios_per_sec": 0, 00:18:19.910 "rw_mbytes_per_sec": 0, 00:18:19.910 "r_mbytes_per_sec": 0, 00:18:19.910 "w_mbytes_per_sec": 0 00:18:19.910 }, 00:18:19.910 "claimed": true, 00:18:19.910 "claim_type": "exclusive_write", 
00:18:19.910 "zoned": false, 00:18:19.910 "supported_io_types": { 00:18:19.910 "read": true, 00:18:19.910 "write": true, 00:18:19.910 "unmap": true, 00:18:19.910 "write_zeroes": true, 00:18:19.910 "flush": true, 00:18:19.911 "reset": true, 00:18:19.911 "compare": false, 00:18:19.911 "compare_and_write": false, 00:18:19.911 "abort": true, 00:18:19.911 "nvme_admin": false, 00:18:19.911 "nvme_io": false 00:18:19.911 }, 00:18:19.911 "memory_domains": [ 00:18:19.911 { 00:18:19.911 "dma_device_id": "system", 00:18:19.911 "dma_device_type": 1 00:18:19.911 }, 00:18:19.911 { 00:18:19.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.911 "dma_device_type": 2 00:18:19.911 } 00:18:19.911 ], 00:18:19.911 "driver_specific": {} 00:18:19.911 }' 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.911 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.168 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.168 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:20.168 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:20.168 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:20.168 02:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:20.427 "name": "BaseBdev4", 00:18:20.427 "aliases": [ 00:18:20.427 "80df93f6-1261-11ef-99fd-bfc7c66e2865" 00:18:20.427 ], 00:18:20.427 "product_name": "Malloc disk", 00:18:20.427 "block_size": 512, 00:18:20.427 "num_blocks": 65536, 00:18:20.427 "uuid": "80df93f6-1261-11ef-99fd-bfc7c66e2865", 00:18:20.427 "assigned_rate_limits": { 00:18:20.427 "rw_ios_per_sec": 0, 00:18:20.427 "rw_mbytes_per_sec": 0, 00:18:20.427 "r_mbytes_per_sec": 0, 00:18:20.427 "w_mbytes_per_sec": 0 00:18:20.427 }, 00:18:20.427 "claimed": true, 00:18:20.427 "claim_type": "exclusive_write", 00:18:20.427 "zoned": false, 00:18:20.427 "supported_io_types": { 00:18:20.427 "read": true, 00:18:20.427 "write": true, 00:18:20.427 "unmap": true, 00:18:20.427 "write_zeroes": true, 00:18:20.427 "flush": true, 00:18:20.427 "reset": true, 00:18:20.427 "compare": false, 00:18:20.427 "compare_and_write": false, 00:18:20.427 "abort": true, 00:18:20.427 "nvme_admin": false, 00:18:20.427 "nvme_io": false 00:18:20.427 }, 00:18:20.427 
"memory_domains": [ 00:18:20.427 { 00:18:20.427 "dma_device_id": "system", 00:18:20.427 "dma_device_type": 1 00:18:20.427 }, 00:18:20.427 { 00:18:20.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.427 "dma_device_type": 2 00:18:20.427 } 00:18:20.427 ], 00:18:20.427 "driver_specific": {} 00:18:20.427 }' 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:20.427 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:20.697 [2024-05-15 02:19:08.523875] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:20.697 [2024-05-15 02:19:08.523910] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.697 [2024-05-15 02:19:08.523934] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.697 02:19:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.697 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.980 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.980 "name": "Existed_Raid", 00:18:20.980 "uuid": "7e88d291-1261-11ef-99fd-bfc7c66e2865", 00:18:20.980 "strip_size_kb": 64, 00:18:20.980 "state": "offline", 00:18:20.980 "raid_level": "raid0", 00:18:20.980 "superblock": true, 00:18:20.980 "num_base_bdevs": 4, 00:18:20.980 "num_base_bdevs_discovered": 3, 00:18:20.980 "num_base_bdevs_operational": 3, 00:18:20.980 "base_bdevs_list": [ 00:18:20.980 { 00:18:20.980 "name": null, 00:18:20.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.980 "is_configured": false, 00:18:20.980 "data_offset": 2048, 00:18:20.980 "data_size": 63488 00:18:20.980 }, 00:18:20.980 { 00:18:20.980 "name": "BaseBdev2", 00:18:20.980 "uuid": "7f166cbe-1261-11ef-99fd-bfc7c66e2865", 00:18:20.980 "is_configured": true, 00:18:20.980 "data_offset": 2048, 00:18:20.980 "data_size": 63488 00:18:20.980 }, 00:18:20.980 { 00:18:20.980 "name": "BaseBdev3", 00:18:20.980 "uuid": "7ff5d05b-1261-11ef-99fd-bfc7c66e2865", 00:18:20.980 "is_configured": true, 00:18:20.980 "data_offset": 2048, 00:18:20.980 "data_size": 63488 00:18:20.980 }, 00:18:20.980 { 00:18:20.980 "name": "BaseBdev4", 00:18:20.980 "uuid": "80df93f6-1261-11ef-99fd-bfc7c66e2865", 00:18:20.980 "is_configured": true, 00:18:20.980 "data_offset": 2048, 00:18:20.981 "data_size": 63488 00:18:20.981 } 00:18:20.981 ] 00:18:20.981 }' 00:18:20.981 02:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.981 02:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.239 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:21.239 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:21.239 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.239 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:21.497 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:21.497 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.497 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:22.063 [2024-05-15 02:19:09.776678] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:22.063 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:22.063 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs 
)) 00:18:22.063 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.063 02:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:22.322 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:22.322 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.322 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:22.581 [2024-05-15 02:19:10.417489] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:22.581 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:22.581 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:22.581 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.581 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:22.838 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:22.838 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.838 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:23.097 [2024-05-15 02:19:10.886279] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:23.097 [2024-05-15 02:19:10.886324] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82833ea00 name Existed_Raid, state offline 00:18:23.097 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:23.097 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:23.097 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.097 02:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:18:23.397 02:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:18:23.397 02:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:18:23.397 02:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:18:23.397 02:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:18:23.397 02:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:23.397 02:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:23.397 BaseBdev2 00:18:23.671 02:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:18:23.671 02:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local 
bdev_name=BaseBdev2 00:18:23.671 02:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:23.671 02:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:23.671 02:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:23.671 02:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:23.671 02:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:23.929 02:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:24.187 [ 00:18:24.187 { 00:18:24.187 "name": "BaseBdev2", 00:18:24.187 "aliases": [ 00:18:24.187 "8462877b-1261-11ef-99fd-bfc7c66e2865" 00:18:24.187 ], 00:18:24.187 "product_name": "Malloc disk", 00:18:24.187 "block_size": 512, 00:18:24.187 "num_blocks": 65536, 00:18:24.187 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:24.187 "assigned_rate_limits": { 00:18:24.187 "rw_ios_per_sec": 0, 00:18:24.187 "rw_mbytes_per_sec": 0, 00:18:24.187 "r_mbytes_per_sec": 0, 00:18:24.187 "w_mbytes_per_sec": 0 00:18:24.187 }, 00:18:24.187 "claimed": false, 00:18:24.187 "zoned": false, 00:18:24.187 "supported_io_types": { 00:18:24.187 "read": true, 00:18:24.187 "write": true, 00:18:24.187 "unmap": true, 00:18:24.187 "write_zeroes": true, 00:18:24.187 "flush": true, 00:18:24.187 "reset": true, 00:18:24.187 "compare": false, 00:18:24.187 "compare_and_write": false, 00:18:24.187 "abort": true, 00:18:24.187 "nvme_admin": false, 00:18:24.187 "nvme_io": false 00:18:24.187 }, 00:18:24.187 "memory_domains": [ 00:18:24.187 { 00:18:24.187 "dma_device_id": "system", 00:18:24.187 "dma_device_type": 1 00:18:24.187 }, 00:18:24.187 { 00:18:24.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.187 "dma_device_type": 2 00:18:24.187 } 00:18:24.187 ], 00:18:24.187 "driver_specific": {} 00:18:24.187 } 00:18:24.187 ] 00:18:24.187 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:24.187 02:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:24.187 02:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:24.187 02:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:24.446 BaseBdev3 00:18:24.446 02:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:18:24.446 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:24.446 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:24.446 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:24.446 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:24.446 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:24.446 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.704 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:24.962 [ 00:18:24.962 { 00:18:24.962 "name": "BaseBdev3", 00:18:24.962 "aliases": [ 00:18:24.962 "84dfa895-1261-11ef-99fd-bfc7c66e2865" 00:18:24.962 ], 00:18:24.962 "product_name": "Malloc disk", 00:18:24.962 "block_size": 512, 00:18:24.962 "num_blocks": 65536, 00:18:24.962 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:24.962 "assigned_rate_limits": { 00:18:24.962 "rw_ios_per_sec": 0, 00:18:24.962 "rw_mbytes_per_sec": 0, 00:18:24.962 "r_mbytes_per_sec": 0, 00:18:24.962 "w_mbytes_per_sec": 0 00:18:24.962 }, 00:18:24.962 "claimed": false, 00:18:24.962 "zoned": false, 00:18:24.962 "supported_io_types": { 00:18:24.962 "read": true, 00:18:24.962 "write": true, 00:18:24.962 "unmap": true, 00:18:24.962 "write_zeroes": true, 00:18:24.962 "flush": true, 00:18:24.962 "reset": true, 00:18:24.962 "compare": false, 00:18:24.962 "compare_and_write": false, 00:18:24.962 "abort": true, 00:18:24.962 "nvme_admin": false, 00:18:24.962 "nvme_io": false 00:18:24.962 }, 00:18:24.962 "memory_domains": [ 00:18:24.962 { 00:18:24.962 "dma_device_id": "system", 00:18:24.962 "dma_device_type": 1 00:18:24.962 }, 00:18:24.962 { 00:18:24.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.962 "dma_device_type": 2 00:18:24.962 } 00:18:24.962 ], 00:18:24.962 "driver_specific": {} 00:18:24.962 } 00:18:24.962 ] 00:18:24.962 02:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:24.962 02:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:24.962 02:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:24.962 02:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:25.219 BaseBdev4 00:18:25.219 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:18:25.219 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:25.219 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:25.219 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:25.219 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:25.219 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:25.219 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.477 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:25.734 [ 00:18:25.734 { 00:18:25.734 "name": "BaseBdev4", 00:18:25.734 "aliases": [ 00:18:25.734 "855cc99a-1261-11ef-99fd-bfc7c66e2865" 00:18:25.734 ], 00:18:25.734 "product_name": "Malloc disk", 00:18:25.734 "block_size": 512, 00:18:25.734 "num_blocks": 65536, 00:18:25.734 "uuid": 
"855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:25.734 "assigned_rate_limits": { 00:18:25.734 "rw_ios_per_sec": 0, 00:18:25.734 "rw_mbytes_per_sec": 0, 00:18:25.734 "r_mbytes_per_sec": 0, 00:18:25.734 "w_mbytes_per_sec": 0 00:18:25.734 }, 00:18:25.734 "claimed": false, 00:18:25.734 "zoned": false, 00:18:25.734 "supported_io_types": { 00:18:25.734 "read": true, 00:18:25.734 "write": true, 00:18:25.734 "unmap": true, 00:18:25.734 "write_zeroes": true, 00:18:25.734 "flush": true, 00:18:25.734 "reset": true, 00:18:25.734 "compare": false, 00:18:25.734 "compare_and_write": false, 00:18:25.734 "abort": true, 00:18:25.734 "nvme_admin": false, 00:18:25.734 "nvme_io": false 00:18:25.734 }, 00:18:25.735 "memory_domains": [ 00:18:25.735 { 00:18:25.735 "dma_device_id": "system", 00:18:25.735 "dma_device_type": 1 00:18:25.735 }, 00:18:25.735 { 00:18:25.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.735 "dma_device_type": 2 00:18:25.735 } 00:18:25.735 ], 00:18:25.735 "driver_specific": {} 00:18:25.735 } 00:18:25.735 ] 00:18:25.735 02:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:25.735 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:25.735 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:25.735 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:26.018 [2024-05-15 02:19:13.875374] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.018 [2024-05-15 02:19:13.875436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.018 [2024-05-15 02:19:13.875447] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.018 [2024-05-15 02:19:13.875894] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.018 [2024-05-15 02:19:13.875907] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.018 02:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.294 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.294 "name": "Existed_Raid", 00:18:26.294 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:26.294 "strip_size_kb": 64, 00:18:26.294 "state": "configuring", 00:18:26.294 "raid_level": "raid0", 00:18:26.294 "superblock": true, 00:18:26.294 "num_base_bdevs": 4, 00:18:26.295 "num_base_bdevs_discovered": 3, 00:18:26.295 "num_base_bdevs_operational": 4, 00:18:26.295 "base_bdevs_list": [ 00:18:26.295 { 00:18:26.295 "name": "BaseBdev1", 00:18:26.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.295 "is_configured": false, 00:18:26.295 "data_offset": 0, 00:18:26.295 "data_size": 0 00:18:26.295 }, 00:18:26.295 { 00:18:26.295 "name": "BaseBdev2", 00:18:26.295 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:26.295 "is_configured": true, 00:18:26.295 "data_offset": 2048, 00:18:26.295 "data_size": 63488 00:18:26.295 }, 00:18:26.295 { 00:18:26.295 "name": "BaseBdev3", 00:18:26.295 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:26.295 "is_configured": true, 00:18:26.295 "data_offset": 2048, 00:18:26.295 "data_size": 63488 00:18:26.295 }, 00:18:26.295 { 00:18:26.295 "name": "BaseBdev4", 00:18:26.295 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:26.295 "is_configured": true, 00:18:26.295 "data_offset": 2048, 00:18:26.295 "data_size": 63488 00:18:26.295 } 00:18:26.295 ] 00:18:26.295 }' 00:18:26.295 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.295 02:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.553 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:26.810 [2024-05-15 02:19:14.815442] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:27.068 02:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.068 02:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.068 "name": "Existed_Raid", 00:18:27.068 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:27.068 "strip_size_kb": 64, 00:18:27.068 "state": "configuring", 00:18:27.068 "raid_level": "raid0", 00:18:27.068 "superblock": true, 00:18:27.068 "num_base_bdevs": 4, 00:18:27.068 "num_base_bdevs_discovered": 2, 00:18:27.068 "num_base_bdevs_operational": 4, 00:18:27.068 "base_bdevs_list": [ 00:18:27.068 { 00:18:27.068 "name": "BaseBdev1", 00:18:27.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.068 "is_configured": false, 00:18:27.068 "data_offset": 0, 00:18:27.068 "data_size": 0 00:18:27.068 }, 00:18:27.068 { 00:18:27.068 "name": null, 00:18:27.068 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:27.068 "is_configured": false, 00:18:27.068 "data_offset": 2048, 00:18:27.068 "data_size": 63488 00:18:27.068 }, 00:18:27.068 { 00:18:27.068 "name": "BaseBdev3", 00:18:27.068 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:27.068 "is_configured": true, 00:18:27.068 "data_offset": 2048, 00:18:27.068 "data_size": 63488 00:18:27.068 }, 00:18:27.068 { 00:18:27.068 "name": "BaseBdev4", 00:18:27.068 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:27.068 "is_configured": true, 00:18:27.068 "data_offset": 2048, 00:18:27.068 "data_size": 63488 00:18:27.068 } 00:18:27.069 ] 00:18:27.069 }' 00:18:27.069 02:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.069 02:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.634 02:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.634 02:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:27.634 02:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:18:27.634 02:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.198 [2024-05-15 02:19:15.919612] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.198 BaseBdev1 00:18:28.198 02:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:18:28.198 02:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:28.198 02:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:28.198 02:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:28.198 02:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:28.198 02:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:28.198 02:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.497 02:19:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.789 [ 00:18:28.789 { 00:18:28.789 "name": "BaseBdev1", 00:18:28.789 "aliases": [ 00:18:28.789 "8714e1bd-1261-11ef-99fd-bfc7c66e2865" 00:18:28.789 ], 00:18:28.789 "product_name": "Malloc disk", 00:18:28.789 "block_size": 512, 00:18:28.789 "num_blocks": 65536, 00:18:28.789 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:28.789 "assigned_rate_limits": { 00:18:28.789 "rw_ios_per_sec": 0, 00:18:28.789 "rw_mbytes_per_sec": 0, 00:18:28.789 "r_mbytes_per_sec": 0, 00:18:28.789 "w_mbytes_per_sec": 0 00:18:28.789 }, 00:18:28.789 "claimed": true, 00:18:28.789 "claim_type": "exclusive_write", 00:18:28.789 "zoned": false, 00:18:28.789 "supported_io_types": { 00:18:28.789 "read": true, 00:18:28.789 "write": true, 00:18:28.789 "unmap": true, 00:18:28.789 "write_zeroes": true, 00:18:28.789 "flush": true, 00:18:28.789 "reset": true, 00:18:28.789 "compare": false, 00:18:28.789 "compare_and_write": false, 00:18:28.789 "abort": true, 00:18:28.789 "nvme_admin": false, 00:18:28.789 "nvme_io": false 00:18:28.789 }, 00:18:28.789 "memory_domains": [ 00:18:28.789 { 00:18:28.789 "dma_device_id": "system", 00:18:28.789 "dma_device_type": 1 00:18:28.789 }, 00:18:28.789 { 00:18:28.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.789 "dma_device_type": 2 00:18:28.789 } 00:18:28.789 ], 00:18:28.789 "driver_specific": {} 00:18:28.789 } 00:18:28.789 ] 00:18:28.789 02:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:28.789 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:28.789 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.789 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.789 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:28.789 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.790 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.790 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.790 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.790 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.790 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.790 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.790 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.047 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.047 "name": "Existed_Raid", 00:18:29.047 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:29.047 "strip_size_kb": 64, 00:18:29.047 "state": "configuring", 00:18:29.047 "raid_level": "raid0", 00:18:29.047 "superblock": true, 00:18:29.047 "num_base_bdevs": 4, 00:18:29.047 "num_base_bdevs_discovered": 3, 
00:18:29.047 "num_base_bdevs_operational": 4, 00:18:29.047 "base_bdevs_list": [ 00:18:29.047 { 00:18:29.047 "name": "BaseBdev1", 00:18:29.047 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:29.047 "is_configured": true, 00:18:29.047 "data_offset": 2048, 00:18:29.047 "data_size": 63488 00:18:29.047 }, 00:18:29.047 { 00:18:29.047 "name": null, 00:18:29.047 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:29.047 "is_configured": false, 00:18:29.047 "data_offset": 2048, 00:18:29.047 "data_size": 63488 00:18:29.047 }, 00:18:29.047 { 00:18:29.047 "name": "BaseBdev3", 00:18:29.047 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:29.047 "is_configured": true, 00:18:29.047 "data_offset": 2048, 00:18:29.047 "data_size": 63488 00:18:29.047 }, 00:18:29.047 { 00:18:29.047 "name": "BaseBdev4", 00:18:29.047 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:29.047 "is_configured": true, 00:18:29.047 "data_offset": 2048, 00:18:29.047 "data_size": 63488 00:18:29.047 } 00:18:29.047 ] 00:18:29.047 }' 00:18:29.047 02:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.047 02:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.304 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.304 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:29.868 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:29.868 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:29.868 [2024-05-15 02:19:17.867628] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.126 02:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.126 02:19:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.126 "name": "Existed_Raid", 00:18:30.126 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:30.126 "strip_size_kb": 64, 00:18:30.126 "state": "configuring", 00:18:30.126 "raid_level": "raid0", 00:18:30.126 "superblock": true, 00:18:30.126 "num_base_bdevs": 4, 00:18:30.126 "num_base_bdevs_discovered": 2, 00:18:30.126 "num_base_bdevs_operational": 4, 00:18:30.126 "base_bdevs_list": [ 00:18:30.126 { 00:18:30.126 "name": "BaseBdev1", 00:18:30.126 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:30.126 "is_configured": true, 00:18:30.126 "data_offset": 2048, 00:18:30.126 "data_size": 63488 00:18:30.126 }, 00:18:30.126 { 00:18:30.126 "name": null, 00:18:30.126 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:30.126 "is_configured": false, 00:18:30.126 "data_offset": 2048, 00:18:30.126 "data_size": 63488 00:18:30.126 }, 00:18:30.126 { 00:18:30.126 "name": null, 00:18:30.126 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:30.126 "is_configured": false, 00:18:30.126 "data_offset": 2048, 00:18:30.126 "data_size": 63488 00:18:30.126 }, 00:18:30.126 { 00:18:30.126 "name": "BaseBdev4", 00:18:30.126 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:30.126 "is_configured": true, 00:18:30.126 "data_offset": 2048, 00:18:30.126 "data_size": 63488 00:18:30.126 } 00:18:30.126 ] 00:18:30.126 }' 00:18:30.126 02:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.126 02:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.692 02:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.692 02:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:30.950 02:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:18:30.950 02:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:31.208 [2024-05-15 02:19:18.991707] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 
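The verify_raid_bdev_state helper exercised throughout this trace reduces to the RPC-plus-jq pattern visible above: dump all raid bdevs, select the one under test, and compare a few fields against expected values. A minimal standalone sketch of that check, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and a raid bdev named Existed_Raid exists; the rpc.py path, RPC name, and jq filters are taken from the trace, while the variable handling is illustrative:

    #!/usr/bin/env bash
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump every raid bdev and keep only the one under test.
    raid_bdev_info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')

    # Pull out the fields the test asserts on.
    state=$(jq -r '.state' <<< "$raid_bdev_info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")

    # The expected state here is "configuring", matching the
    # verify_raid_bdev_state calls in this test.
    [[ "$state" == configuring ]] || echo "unexpected state: $state"
    echo "Existed_Raid: state=$state, base bdevs discovered=$discovered"
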
00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.208 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.467 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.467 "name": "Existed_Raid", 00:18:31.467 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:31.467 "strip_size_kb": 64, 00:18:31.467 "state": "configuring", 00:18:31.467 "raid_level": "raid0", 00:18:31.467 "superblock": true, 00:18:31.467 "num_base_bdevs": 4, 00:18:31.467 "num_base_bdevs_discovered": 3, 00:18:31.467 "num_base_bdevs_operational": 4, 00:18:31.467 "base_bdevs_list": [ 00:18:31.467 { 00:18:31.467 "name": "BaseBdev1", 00:18:31.467 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:31.467 "is_configured": true, 00:18:31.467 "data_offset": 2048, 00:18:31.467 "data_size": 63488 00:18:31.467 }, 00:18:31.467 { 00:18:31.467 "name": null, 00:18:31.467 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:31.467 "is_configured": false, 00:18:31.467 "data_offset": 2048, 00:18:31.467 "data_size": 63488 00:18:31.467 }, 00:18:31.467 { 00:18:31.467 "name": "BaseBdev3", 00:18:31.467 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:31.467 "is_configured": true, 00:18:31.467 "data_offset": 2048, 00:18:31.467 "data_size": 63488 00:18:31.467 }, 00:18:31.467 { 00:18:31.467 "name": "BaseBdev4", 00:18:31.467 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:31.467 "is_configured": true, 00:18:31.467 "data_offset": 2048, 00:18:31.467 "data_size": 63488 00:18:31.467 } 00:18:31.467 ] 00:18:31.467 }' 00:18:31.467 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.467 02:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.725 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.725 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:31.984 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:18:31.984 02:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:32.244 [2024-05-15 02:19:20.179794] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.244 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.834 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.834 "name": "Existed_Raid", 00:18:32.834 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:32.834 "strip_size_kb": 64, 00:18:32.834 "state": "configuring", 00:18:32.834 "raid_level": "raid0", 00:18:32.834 "superblock": true, 00:18:32.834 "num_base_bdevs": 4, 00:18:32.834 "num_base_bdevs_discovered": 2, 00:18:32.834 "num_base_bdevs_operational": 4, 00:18:32.834 "base_bdevs_list": [ 00:18:32.834 { 00:18:32.834 "name": null, 00:18:32.834 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:32.834 "is_configured": false, 00:18:32.834 "data_offset": 2048, 00:18:32.834 "data_size": 63488 00:18:32.834 }, 00:18:32.834 { 00:18:32.834 "name": null, 00:18:32.834 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:32.834 "is_configured": false, 00:18:32.834 "data_offset": 2048, 00:18:32.834 "data_size": 63488 00:18:32.834 }, 00:18:32.834 { 00:18:32.834 "name": "BaseBdev3", 00:18:32.834 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:32.834 "is_configured": true, 00:18:32.834 "data_offset": 2048, 00:18:32.834 "data_size": 63488 00:18:32.834 }, 00:18:32.834 { 00:18:32.834 "name": "BaseBdev4", 00:18:32.834 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:32.834 "is_configured": true, 00:18:32.834 "data_offset": 2048, 00:18:32.834 "data_size": 63488 00:18:32.834 } 00:18:32.834 ] 00:18:32.834 }' 00:18:32.834 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.834 02:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.095 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.095 02:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:33.354 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:18:33.354 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:33.633 [2024-05-15 02:19:21.420690] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.633 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.891 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.891 "name": "Existed_Raid", 00:18:33.891 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:33.891 "strip_size_kb": 64, 00:18:33.891 "state": "configuring", 00:18:33.891 "raid_level": "raid0", 00:18:33.891 "superblock": true, 00:18:33.891 "num_base_bdevs": 4, 00:18:33.891 "num_base_bdevs_discovered": 3, 00:18:33.891 "num_base_bdevs_operational": 4, 00:18:33.891 "base_bdevs_list": [ 00:18:33.891 { 00:18:33.891 "name": null, 00:18:33.891 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:33.891 "is_configured": false, 00:18:33.891 "data_offset": 2048, 00:18:33.891 "data_size": 63488 00:18:33.891 }, 00:18:33.891 { 00:18:33.891 "name": "BaseBdev2", 00:18:33.891 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:33.891 "is_configured": true, 00:18:33.891 "data_offset": 2048, 00:18:33.891 "data_size": 63488 00:18:33.891 }, 00:18:33.891 { 00:18:33.891 "name": "BaseBdev3", 00:18:33.891 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:33.891 "is_configured": true, 00:18:33.891 "data_offset": 2048, 00:18:33.891 "data_size": 63488 00:18:33.891 }, 00:18:33.891 { 00:18:33.891 "name": "BaseBdev4", 00:18:33.891 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:33.891 "is_configured": true, 00:18:33.891 "data_offset": 2048, 00:18:33.892 "data_size": 63488 00:18:33.892 } 00:18:33.892 ] 00:18:33.892 }' 00:18:33.892 02:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.892 02:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.149 02:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.149 02:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:34.407 02:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:18:34.407 02:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.407 02:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:34.665 02:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8714e1bd-1261-11ef-99fd-bfc7c66e2865 00:18:34.923 [2024-05-15 02:19:22.868883] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:34.923 [2024-05-15 02:19:22.868937] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82833ef00 00:18:34.923 [2024-05-15 02:19:22.868942] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:34.923 [2024-05-15 02:19:22.868962] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8283a1e20 00:18:34.923 [2024-05-15 02:19:22.868998] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82833ef00 00:18:34.923 [2024-05-15 02:19:22.869002] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82833ef00 00:18:34.923 [2024-05-15 02:19:22.869019] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.923 NewBaseBdev 00:18:34.923 02:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:18:34.923 02:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:34.923 02:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:34.923 02:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:34.923 02:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:34.923 02:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:34.923 02:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.181 02:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:35.439 [ 00:18:35.439 { 00:18:35.439 "name": "NewBaseBdev", 00:18:35.439 "aliases": [ 00:18:35.439 "8714e1bd-1261-11ef-99fd-bfc7c66e2865" 00:18:35.439 ], 00:18:35.439 "product_name": "Malloc disk", 00:18:35.439 "block_size": 512, 00:18:35.439 "num_blocks": 65536, 00:18:35.439 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:35.439 "assigned_rate_limits": { 00:18:35.439 "rw_ios_per_sec": 0, 00:18:35.439 "rw_mbytes_per_sec": 0, 00:18:35.439 "r_mbytes_per_sec": 0, 00:18:35.439 "w_mbytes_per_sec": 0 00:18:35.439 }, 00:18:35.439 "claimed": true, 00:18:35.439 "claim_type": "exclusive_write", 00:18:35.439 "zoned": false, 00:18:35.439 "supported_io_types": { 00:18:35.439 "read": true, 00:18:35.439 "write": true, 00:18:35.439 "unmap": true, 00:18:35.439 "write_zeroes": true, 00:18:35.439 "flush": true, 00:18:35.439 "reset": true, 00:18:35.439 "compare": false, 00:18:35.439 "compare_and_write": false, 00:18:35.439 "abort": true, 00:18:35.439 "nvme_admin": false, 00:18:35.439 "nvme_io": false 00:18:35.439 }, 00:18:35.439 "memory_domains": [ 00:18:35.439 { 00:18:35.439 "dma_device_id": "system", 00:18:35.439 "dma_device_type": 1 00:18:35.439 }, 00:18:35.439 { 00:18:35.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.439 "dma_device_type": 2 00:18:35.439 } 00:18:35.439 ], 00:18:35.439 "driver_specific": {} 00:18:35.439 } 00:18:35.439 ] 00:18:35.695 02:19:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.695 "name": "Existed_Raid", 00:18:35.695 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:35.695 "strip_size_kb": 64, 00:18:35.695 "state": "online", 00:18:35.695 "raid_level": "raid0", 00:18:35.695 "superblock": true, 00:18:35.695 "num_base_bdevs": 4, 00:18:35.695 "num_base_bdevs_discovered": 4, 00:18:35.695 "num_base_bdevs_operational": 4, 00:18:35.695 "base_bdevs_list": [ 00:18:35.695 { 00:18:35.695 "name": "NewBaseBdev", 00:18:35.695 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:35.695 "is_configured": true, 00:18:35.695 "data_offset": 2048, 00:18:35.695 "data_size": 63488 00:18:35.695 }, 00:18:35.695 { 00:18:35.695 "name": "BaseBdev2", 00:18:35.695 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:35.695 "is_configured": true, 00:18:35.695 "data_offset": 2048, 00:18:35.695 "data_size": 63488 00:18:35.695 }, 00:18:35.695 { 00:18:35.695 "name": "BaseBdev3", 00:18:35.695 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:35.695 "is_configured": true, 00:18:35.695 "data_offset": 2048, 00:18:35.695 "data_size": 63488 00:18:35.695 }, 00:18:35.695 { 00:18:35.695 "name": "BaseBdev4", 00:18:35.695 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:35.695 "is_configured": true, 00:18:35.695 "data_offset": 2048, 00:18:35.695 "data_size": 63488 00:18:35.695 } 00:18:35.695 ] 00:18:35.695 }' 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.695 02:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:36.262 02:19:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:36.262 [2024-05-15 02:19:24.244891] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.262 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:36.262 "name": "Existed_Raid", 00:18:36.262 "aliases": [ 00:18:36.262 "85dcf875-1261-11ef-99fd-bfc7c66e2865" 00:18:36.262 ], 00:18:36.262 "product_name": "Raid Volume", 00:18:36.262 "block_size": 512, 00:18:36.262 "num_blocks": 253952, 00:18:36.262 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:36.262 "assigned_rate_limits": { 00:18:36.262 "rw_ios_per_sec": 0, 00:18:36.262 "rw_mbytes_per_sec": 0, 00:18:36.262 "r_mbytes_per_sec": 0, 00:18:36.262 "w_mbytes_per_sec": 0 00:18:36.262 }, 00:18:36.262 "claimed": false, 00:18:36.262 "zoned": false, 00:18:36.262 "supported_io_types": { 00:18:36.262 "read": true, 00:18:36.262 "write": true, 00:18:36.262 "unmap": true, 00:18:36.262 "write_zeroes": true, 00:18:36.262 "flush": true, 00:18:36.262 "reset": true, 00:18:36.262 "compare": false, 00:18:36.262 "compare_and_write": false, 00:18:36.262 "abort": false, 00:18:36.262 "nvme_admin": false, 00:18:36.262 "nvme_io": false 00:18:36.262 }, 00:18:36.262 "memory_domains": [ 00:18:36.262 { 00:18:36.262 "dma_device_id": "system", 00:18:36.263 "dma_device_type": 1 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.263 "dma_device_type": 2 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "dma_device_id": "system", 00:18:36.263 "dma_device_type": 1 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.263 "dma_device_type": 2 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "dma_device_id": "system", 00:18:36.263 "dma_device_type": 1 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.263 "dma_device_type": 2 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "dma_device_id": "system", 00:18:36.263 "dma_device_type": 1 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.263 "dma_device_type": 2 00:18:36.263 } 00:18:36.263 ], 00:18:36.263 "driver_specific": { 00:18:36.263 "raid": { 00:18:36.263 "uuid": "85dcf875-1261-11ef-99fd-bfc7c66e2865", 00:18:36.263 "strip_size_kb": 64, 00:18:36.263 "state": "online", 00:18:36.263 "raid_level": "raid0", 00:18:36.263 "superblock": true, 00:18:36.263 "num_base_bdevs": 4, 00:18:36.263 "num_base_bdevs_discovered": 4, 00:18:36.263 "num_base_bdevs_operational": 4, 00:18:36.263 "base_bdevs_list": [ 00:18:36.263 { 00:18:36.263 "name": "NewBaseBdev", 00:18:36.263 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:36.263 "is_configured": true, 00:18:36.263 "data_offset": 2048, 00:18:36.263 "data_size": 63488 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "name": "BaseBdev2", 00:18:36.263 "uuid": 
"8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:36.263 "is_configured": true, 00:18:36.263 "data_offset": 2048, 00:18:36.263 "data_size": 63488 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "name": "BaseBdev3", 00:18:36.263 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:36.263 "is_configured": true, 00:18:36.263 "data_offset": 2048, 00:18:36.263 "data_size": 63488 00:18:36.263 }, 00:18:36.263 { 00:18:36.263 "name": "BaseBdev4", 00:18:36.263 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:36.263 "is_configured": true, 00:18:36.263 "data_offset": 2048, 00:18:36.263 "data_size": 63488 00:18:36.263 } 00:18:36.263 ] 00:18:36.263 } 00:18:36.263 } 00:18:36.263 }' 00:18:36.263 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.263 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:18:36.263 BaseBdev2 00:18:36.263 BaseBdev3 00:18:36.263 BaseBdev4' 00:18:36.263 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:36.263 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:36.263 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:36.829 "name": "NewBaseBdev", 00:18:36.829 "aliases": [ 00:18:36.829 "8714e1bd-1261-11ef-99fd-bfc7c66e2865" 00:18:36.829 ], 00:18:36.829 "product_name": "Malloc disk", 00:18:36.829 "block_size": 512, 00:18:36.829 "num_blocks": 65536, 00:18:36.829 "uuid": "8714e1bd-1261-11ef-99fd-bfc7c66e2865", 00:18:36.829 "assigned_rate_limits": { 00:18:36.829 "rw_ios_per_sec": 0, 00:18:36.829 "rw_mbytes_per_sec": 0, 00:18:36.829 "r_mbytes_per_sec": 0, 00:18:36.829 "w_mbytes_per_sec": 0 00:18:36.829 }, 00:18:36.829 "claimed": true, 00:18:36.829 "claim_type": "exclusive_write", 00:18:36.829 "zoned": false, 00:18:36.829 "supported_io_types": { 00:18:36.829 "read": true, 00:18:36.829 "write": true, 00:18:36.829 "unmap": true, 00:18:36.829 "write_zeroes": true, 00:18:36.829 "flush": true, 00:18:36.829 "reset": true, 00:18:36.829 "compare": false, 00:18:36.829 "compare_and_write": false, 00:18:36.829 "abort": true, 00:18:36.829 "nvme_admin": false, 00:18:36.829 "nvme_io": false 00:18:36.829 }, 00:18:36.829 "memory_domains": [ 00:18:36.829 { 00:18:36.829 "dma_device_id": "system", 00:18:36.829 "dma_device_type": 1 00:18:36.829 }, 00:18:36.829 { 00:18:36.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.829 "dma_device_type": 2 00:18:36.829 } 00:18:36.829 ], 00:18:36.829 "driver_specific": {} 00:18:36.829 }' 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.829 02:19:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:36.829 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:36.830 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:36.830 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:37.089 "name": "BaseBdev2", 00:18:37.089 "aliases": [ 00:18:37.089 "8462877b-1261-11ef-99fd-bfc7c66e2865" 00:18:37.089 ], 00:18:37.089 "product_name": "Malloc disk", 00:18:37.089 "block_size": 512, 00:18:37.089 "num_blocks": 65536, 00:18:37.089 "uuid": "8462877b-1261-11ef-99fd-bfc7c66e2865", 00:18:37.089 "assigned_rate_limits": { 00:18:37.089 "rw_ios_per_sec": 0, 00:18:37.089 "rw_mbytes_per_sec": 0, 00:18:37.089 "r_mbytes_per_sec": 0, 00:18:37.089 "w_mbytes_per_sec": 0 00:18:37.089 }, 00:18:37.089 "claimed": true, 00:18:37.089 "claim_type": "exclusive_write", 00:18:37.089 "zoned": false, 00:18:37.089 "supported_io_types": { 00:18:37.089 "read": true, 00:18:37.089 "write": true, 00:18:37.089 "unmap": true, 00:18:37.089 "write_zeroes": true, 00:18:37.089 "flush": true, 00:18:37.089 "reset": true, 00:18:37.089 "compare": false, 00:18:37.089 "compare_and_write": false, 00:18:37.089 "abort": true, 00:18:37.089 "nvme_admin": false, 00:18:37.089 "nvme_io": false 00:18:37.089 }, 00:18:37.089 "memory_domains": [ 00:18:37.089 { 00:18:37.089 "dma_device_id": "system", 00:18:37.089 "dma_device_type": 1 00:18:37.089 }, 00:18:37.089 { 00:18:37.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.089 "dma_device_type": 2 00:18:37.089 } 00:18:37.089 ], 00:18:37.089 "driver_specific": {} 00:18:37.089 }' 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.089 02:19:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:37.089 02:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:37.348 "name": "BaseBdev3", 00:18:37.348 "aliases": [ 00:18:37.348 "84dfa895-1261-11ef-99fd-bfc7c66e2865" 00:18:37.348 ], 00:18:37.348 "product_name": "Malloc disk", 00:18:37.348 "block_size": 512, 00:18:37.348 "num_blocks": 65536, 00:18:37.348 "uuid": "84dfa895-1261-11ef-99fd-bfc7c66e2865", 00:18:37.348 "assigned_rate_limits": { 00:18:37.348 "rw_ios_per_sec": 0, 00:18:37.348 "rw_mbytes_per_sec": 0, 00:18:37.348 "r_mbytes_per_sec": 0, 00:18:37.348 "w_mbytes_per_sec": 0 00:18:37.348 }, 00:18:37.348 "claimed": true, 00:18:37.348 "claim_type": "exclusive_write", 00:18:37.348 "zoned": false, 00:18:37.348 "supported_io_types": { 00:18:37.348 "read": true, 00:18:37.348 "write": true, 00:18:37.348 "unmap": true, 00:18:37.348 "write_zeroes": true, 00:18:37.348 "flush": true, 00:18:37.348 "reset": true, 00:18:37.348 "compare": false, 00:18:37.348 "compare_and_write": false, 00:18:37.348 "abort": true, 00:18:37.348 "nvme_admin": false, 00:18:37.348 "nvme_io": false 00:18:37.348 }, 00:18:37.348 "memory_domains": [ 00:18:37.348 { 00:18:37.348 "dma_device_id": "system", 00:18:37.348 "dma_device_type": 1 00:18:37.348 }, 00:18:37.348 { 00:18:37.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.348 "dma_device_type": 2 00:18:37.348 } 00:18:37.348 ], 00:18:37.348 "driver_specific": {} 00:18:37.348 }' 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:37.348 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:37.606 "name": "BaseBdev4", 00:18:37.606 "aliases": [ 00:18:37.606 "855cc99a-1261-11ef-99fd-bfc7c66e2865" 00:18:37.606 ], 00:18:37.606 "product_name": "Malloc disk", 00:18:37.606 "block_size": 512, 00:18:37.606 "num_blocks": 65536, 00:18:37.606 "uuid": "855cc99a-1261-11ef-99fd-bfc7c66e2865", 00:18:37.606 "assigned_rate_limits": { 00:18:37.606 "rw_ios_per_sec": 0, 00:18:37.606 "rw_mbytes_per_sec": 0, 00:18:37.606 "r_mbytes_per_sec": 0, 00:18:37.606 "w_mbytes_per_sec": 0 00:18:37.606 }, 00:18:37.606 "claimed": true, 00:18:37.606 "claim_type": "exclusive_write", 00:18:37.606 "zoned": false, 00:18:37.606 "supported_io_types": { 00:18:37.606 "read": true, 00:18:37.606 "write": true, 00:18:37.606 "unmap": true, 00:18:37.606 "write_zeroes": true, 00:18:37.606 "flush": true, 00:18:37.606 "reset": true, 00:18:37.606 "compare": false, 00:18:37.606 "compare_and_write": false, 00:18:37.606 "abort": true, 00:18:37.606 "nvme_admin": false, 00:18:37.606 "nvme_io": false 00:18:37.606 }, 00:18:37.606 "memory_domains": [ 00:18:37.606 { 00:18:37.606 "dma_device_id": "system", 00:18:37.606 "dma_device_type": 1 00:18:37.606 }, 00:18:37.606 { 00:18:37.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.606 "dma_device_type": 2 00:18:37.606 } 00:18:37.606 ], 00:18:37.606 "driver_specific": {} 00:18:37.606 }' 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:37.606 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:37.991 [2024-05-15 02:19:25.816971] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:37.991 [2024-05-15 02:19:25.817003] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.991 [2024-05-15 02:19:25.817026] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.991 [2024-05-15 02:19:25.817040] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:18:37.991 [2024-05-15 02:19:25.817044] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82833ef00 name Existed_Raid, state offline 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 58065 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 58065 ']' 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 58065 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 58065 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:18:37.991 killing process with pid 58065 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58065' 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 58065 00:18:37.991 [2024-05-15 02:19:25.845558] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.991 02:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 58065 00:18:37.991 [2024-05-15 02:19:25.864674] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.259 02:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:18:38.259 00:18:38.259 real 0m28.997s 00:18:38.259 user 0m53.347s 00:18:38.259 sys 0m3.794s 00:18:38.259 ************************************ 00:18:38.259 END TEST raid_state_function_test_sb 00:18:38.259 ************************************ 00:18:38.259 02:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:38.259 02:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.259 02:19:26 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:38.259 02:19:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:18:38.259 02:19:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:38.259 02:19:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.259 ************************************ 00:18:38.259 START TEST raid_superblock_test 00:18:38.259 ************************************ 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:38.259 02:19:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=58887 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 58887 /var/tmp/spdk-raid.sock 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 58887 ']' 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:38.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:38.259 02:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.259 [2024-05-15 02:19:26.061667] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:38.259 [2024-05-15 02:19:26.061886] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:38.824 EAL: TSC is not safe to use in SMP mode 00:18:38.824 EAL: TSC is not invariant 00:18:38.824 [2024-05-15 02:19:26.559361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.824 [2024-05-15 02:19:26.644429] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
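The fixture that raid_superblock_test builds in the following trace is compact enough to restate as a sketch: four malloc bdevs wrapped in passthru bdevs with fixed UUIDs, assembled into a raid0 volume with an on-disk superblock. This assumes the bdev_svc app started above is listening on /var/tmp/spdk-raid.sock; the sizes, names, and flags mirror the RPCs traced below (32 MB malloc bdevs with 512-byte blocks, i.e. the 65536-block devices reported by bdev_get_bdevs), while the loop itself is illustrative:

    #!/usr/bin/env bash
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    for i in 1 2 3 4; do
        # 32 MB malloc bdev with 512-byte blocks (65536 blocks), wrapped in a
        # passthru bdev with a fixed UUID, matching the pt1..pt4 devices in the
        # trace below.
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # raid0 across the four passthru bdevs, 64 KiB strip size, superblock (-s) enabled.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' \
        -n raid_bdev1 -s
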
00:18:38.824 [2024-05-15 02:19:26.646624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.824 [2024-05-15 02:19:26.647419] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.824 [2024-05-15 02:19:26.647451] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:39.389 malloc1 00:18:39.389 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.647 [2024-05-15 02:19:27.602731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.647 [2024-05-15 02:19:27.602797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.647 [2024-05-15 02:19:27.603374] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb6780 00:18:39.647 [2024-05-15 02:19:27.603401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.647 [2024-05-15 02:19:27.604164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.647 [2024-05-15 02:19:27.604195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.647 pt1 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.647 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.647 02:19:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:39.904 malloc2 00:18:39.905 02:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.162 [2024-05-15 02:19:28.102758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.162 [2024-05-15 02:19:28.102817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.162 [2024-05-15 02:19:28.102845] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb6c80 00:18:40.162 [2024-05-15 02:19:28.102854] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.162 [2024-05-15 02:19:28.103384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.163 [2024-05-15 02:19:28.103418] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.163 pt2 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:40.163 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:40.421 malloc3 00:18:40.421 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:40.680 [2024-05-15 02:19:28.550787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:40.680 [2024-05-15 02:19:28.550851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.680 [2024-05-15 02:19:28.550879] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb7180 00:18:40.680 [2024-05-15 02:19:28.550888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.680 [2024-05-15 02:19:28.551443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.680 [2024-05-15 02:19:28.551472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:40.680 pt3 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_malloc=malloc4 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:40.680 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:40.939 malloc4 00:18:40.939 02:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:41.197 [2024-05-15 02:19:29.094806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:41.197 [2024-05-15 02:19:29.094865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.197 [2024-05-15 02:19:29.094892] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb7680 00:18:41.197 [2024-05-15 02:19:29.094901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.197 [2024-05-15 02:19:29.095424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.197 [2024-05-15 02:19:29.095452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:41.197 pt4 00:18:41.197 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:41.197 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:41.197 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:41.456 [2024-05-15 02:19:29.442837] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:41.456 [2024-05-15 02:19:29.443299] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.456 [2024-05-15 02:19:29.443320] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:41.456 [2024-05-15 02:19:29.443330] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:41.456 [2024-05-15 02:19:29.443381] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bdb7900 00:18:41.456 [2024-05-15 02:19:29.443386] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:41.456 [2024-05-15 02:19:29.443417] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be19e20 00:18:41.456 [2024-05-15 02:19:29.443475] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bdb7900 00:18:41.456 [2024-05-15 02:19:29.443479] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bdb7900 00:18:41.456 [2024-05-15 02:19:29.443502] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.456 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.714 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.714 "name": "raid_bdev1", 00:18:41.714 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:41.714 "strip_size_kb": 64, 00:18:41.714 "state": "online", 00:18:41.714 "raid_level": "raid0", 00:18:41.714 "superblock": true, 00:18:41.714 "num_base_bdevs": 4, 00:18:41.714 "num_base_bdevs_discovered": 4, 00:18:41.714 "num_base_bdevs_operational": 4, 00:18:41.714 "base_bdevs_list": [ 00:18:41.714 { 00:18:41.714 "name": "pt1", 00:18:41.714 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:41.714 "is_configured": true, 00:18:41.714 "data_offset": 2048, 00:18:41.714 "data_size": 63488 00:18:41.714 }, 00:18:41.714 { 00:18:41.714 "name": "pt2", 00:18:41.714 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:41.714 "is_configured": true, 00:18:41.714 "data_offset": 2048, 00:18:41.714 "data_size": 63488 00:18:41.714 }, 00:18:41.714 { 00:18:41.714 "name": "pt3", 00:18:41.714 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:41.714 "is_configured": true, 00:18:41.714 "data_offset": 2048, 00:18:41.714 "data_size": 63488 00:18:41.714 }, 00:18:41.714 { 00:18:41.714 "name": "pt4", 00:18:41.714 "uuid": "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:41.714 "is_configured": true, 00:18:41.714 "data_offset": 2048, 00:18:41.714 "data_size": 63488 00:18:41.714 } 00:18:41.714 ] 00:18:41.714 }' 00:18:41.714 02:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.714 02:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.280 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@199 -- # local name 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:42.281 [2024-05-15 02:19:30.246912] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:42.281 "name": "raid_bdev1", 00:18:42.281 "aliases": [ 00:18:42.281 "8f246071-1261-11ef-99fd-bfc7c66e2865" 00:18:42.281 ], 00:18:42.281 "product_name": "Raid Volume", 00:18:42.281 "block_size": 512, 00:18:42.281 "num_blocks": 253952, 00:18:42.281 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:42.281 "assigned_rate_limits": { 00:18:42.281 "rw_ios_per_sec": 0, 00:18:42.281 "rw_mbytes_per_sec": 0, 00:18:42.281 "r_mbytes_per_sec": 0, 00:18:42.281 "w_mbytes_per_sec": 0 00:18:42.281 }, 00:18:42.281 "claimed": false, 00:18:42.281 "zoned": false, 00:18:42.281 "supported_io_types": { 00:18:42.281 "read": true, 00:18:42.281 "write": true, 00:18:42.281 "unmap": true, 00:18:42.281 "write_zeroes": true, 00:18:42.281 "flush": true, 00:18:42.281 "reset": true, 00:18:42.281 "compare": false, 00:18:42.281 "compare_and_write": false, 00:18:42.281 "abort": false, 00:18:42.281 "nvme_admin": false, 00:18:42.281 "nvme_io": false 00:18:42.281 }, 00:18:42.281 "memory_domains": [ 00:18:42.281 { 00:18:42.281 "dma_device_id": "system", 00:18:42.281 "dma_device_type": 1 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.281 "dma_device_type": 2 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "dma_device_id": "system", 00:18:42.281 "dma_device_type": 1 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.281 "dma_device_type": 2 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "dma_device_id": "system", 00:18:42.281 "dma_device_type": 1 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.281 "dma_device_type": 2 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "dma_device_id": "system", 00:18:42.281 "dma_device_type": 1 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.281 "dma_device_type": 2 00:18:42.281 } 00:18:42.281 ], 00:18:42.281 "driver_specific": { 00:18:42.281 "raid": { 00:18:42.281 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:42.281 "strip_size_kb": 64, 00:18:42.281 "state": "online", 00:18:42.281 "raid_level": "raid0", 00:18:42.281 "superblock": true, 00:18:42.281 "num_base_bdevs": 4, 00:18:42.281 "num_base_bdevs_discovered": 4, 00:18:42.281 "num_base_bdevs_operational": 4, 00:18:42.281 "base_bdevs_list": [ 00:18:42.281 { 00:18:42.281 "name": "pt1", 00:18:42.281 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:42.281 "is_configured": true, 00:18:42.281 "data_offset": 2048, 00:18:42.281 "data_size": 63488 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "name": "pt2", 00:18:42.281 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:42.281 "is_configured": true, 00:18:42.281 "data_offset": 2048, 00:18:42.281 "data_size": 63488 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "name": "pt3", 00:18:42.281 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:42.281 "is_configured": true, 00:18:42.281 "data_offset": 2048, 00:18:42.281 "data_size": 63488 00:18:42.281 }, 00:18:42.281 { 00:18:42.281 "name": 
"pt4", 00:18:42.281 "uuid": "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:42.281 "is_configured": true, 00:18:42.281 "data_offset": 2048, 00:18:42.281 "data_size": 63488 00:18:42.281 } 00:18:42.281 ] 00:18:42.281 } 00:18:42.281 } 00:18:42.281 }' 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:18:42.281 pt2 00:18:42.281 pt3 00:18:42.281 pt4' 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:42.281 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:42.539 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:42.539 "name": "pt1", 00:18:42.539 "aliases": [ 00:18:42.539 "10b96644-6ab1-0458-8220-3d27cbc72251" 00:18:42.539 ], 00:18:42.539 "product_name": "passthru", 00:18:42.539 "block_size": 512, 00:18:42.539 "num_blocks": 65536, 00:18:42.539 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:42.539 "assigned_rate_limits": { 00:18:42.539 "rw_ios_per_sec": 0, 00:18:42.539 "rw_mbytes_per_sec": 0, 00:18:42.539 "r_mbytes_per_sec": 0, 00:18:42.539 "w_mbytes_per_sec": 0 00:18:42.539 }, 00:18:42.539 "claimed": true, 00:18:42.539 "claim_type": "exclusive_write", 00:18:42.539 "zoned": false, 00:18:42.539 "supported_io_types": { 00:18:42.539 "read": true, 00:18:42.539 "write": true, 00:18:42.539 "unmap": true, 00:18:42.539 "write_zeroes": true, 00:18:42.539 "flush": true, 00:18:42.539 "reset": true, 00:18:42.539 "compare": false, 00:18:42.539 "compare_and_write": false, 00:18:42.539 "abort": true, 00:18:42.539 "nvme_admin": false, 00:18:42.539 "nvme_io": false 00:18:42.539 }, 00:18:42.539 "memory_domains": [ 00:18:42.539 { 00:18:42.539 "dma_device_id": "system", 00:18:42.539 "dma_device_type": 1 00:18:42.539 }, 00:18:42.539 { 00:18:42.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.539 "dma_device_type": 2 00:18:42.539 } 00:18:42.539 ], 00:18:42.539 "driver_specific": { 00:18:42.539 "passthru": { 00:18:42.539 "name": "pt1", 00:18:42.539 "base_bdev_name": "malloc1" 00:18:42.539 } 00:18:42.539 } 00:18:42.539 }' 00:18:42.539 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:42.539 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:42.539 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:42.539 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:42.798 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:43.057 "name": "pt2", 00:18:43.057 "aliases": [ 00:18:43.057 "21108149-f7b0-695c-a1ae-9c6651c8379b" 00:18:43.057 ], 00:18:43.057 "product_name": "passthru", 00:18:43.057 "block_size": 512, 00:18:43.057 "num_blocks": 65536, 00:18:43.057 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:43.057 "assigned_rate_limits": { 00:18:43.057 "rw_ios_per_sec": 0, 00:18:43.057 "rw_mbytes_per_sec": 0, 00:18:43.057 "r_mbytes_per_sec": 0, 00:18:43.057 "w_mbytes_per_sec": 0 00:18:43.057 }, 00:18:43.057 "claimed": true, 00:18:43.057 "claim_type": "exclusive_write", 00:18:43.057 "zoned": false, 00:18:43.057 "supported_io_types": { 00:18:43.057 "read": true, 00:18:43.057 "write": true, 00:18:43.057 "unmap": true, 00:18:43.057 "write_zeroes": true, 00:18:43.057 "flush": true, 00:18:43.057 "reset": true, 00:18:43.057 "compare": false, 00:18:43.057 "compare_and_write": false, 00:18:43.057 "abort": true, 00:18:43.057 "nvme_admin": false, 00:18:43.057 "nvme_io": false 00:18:43.057 }, 00:18:43.057 "memory_domains": [ 00:18:43.057 { 00:18:43.057 "dma_device_id": "system", 00:18:43.057 "dma_device_type": 1 00:18:43.057 }, 00:18:43.057 { 00:18:43.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.057 "dma_device_type": 2 00:18:43.057 } 00:18:43.057 ], 00:18:43.057 "driver_specific": { 00:18:43.057 "passthru": { 00:18:43.057 "name": "pt2", 00:18:43.057 "base_bdev_name": "malloc2" 00:18:43.057 } 00:18:43.057 } 00:18:43.057 }' 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:43.057 02:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:43.057 02:19:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:43.316 "name": "pt3", 00:18:43.316 "aliases": [ 00:18:43.316 "a3ce9858-61ed-1157-a4ef-532d073bfb56" 00:18:43.316 ], 00:18:43.316 "product_name": "passthru", 00:18:43.316 "block_size": 512, 00:18:43.316 "num_blocks": 65536, 00:18:43.316 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:43.316 "assigned_rate_limits": { 00:18:43.316 "rw_ios_per_sec": 0, 00:18:43.316 "rw_mbytes_per_sec": 0, 00:18:43.316 "r_mbytes_per_sec": 0, 00:18:43.316 "w_mbytes_per_sec": 0 00:18:43.316 }, 00:18:43.316 "claimed": true, 00:18:43.316 "claim_type": "exclusive_write", 00:18:43.316 "zoned": false, 00:18:43.316 "supported_io_types": { 00:18:43.316 "read": true, 00:18:43.316 "write": true, 00:18:43.316 "unmap": true, 00:18:43.316 "write_zeroes": true, 00:18:43.316 "flush": true, 00:18:43.316 "reset": true, 00:18:43.316 "compare": false, 00:18:43.316 "compare_and_write": false, 00:18:43.316 "abort": true, 00:18:43.316 "nvme_admin": false, 00:18:43.316 "nvme_io": false 00:18:43.316 }, 00:18:43.316 "memory_domains": [ 00:18:43.316 { 00:18:43.316 "dma_device_id": "system", 00:18:43.316 "dma_device_type": 1 00:18:43.316 }, 00:18:43.316 { 00:18:43.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.316 "dma_device_type": 2 00:18:43.316 } 00:18:43.316 ], 00:18:43.316 "driver_specific": { 00:18:43.316 "passthru": { 00:18:43.316 "name": "pt3", 00:18:43.316 "base_bdev_name": "malloc3" 00:18:43.316 } 00:18:43.316 } 00:18:43.316 }' 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:43.316 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:43.575 "name": "pt4", 00:18:43.575 "aliases": [ 00:18:43.575 "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1" 00:18:43.575 ], 00:18:43.575 "product_name": "passthru", 00:18:43.575 "block_size": 512, 00:18:43.575 "num_blocks": 65536, 00:18:43.575 "uuid": 
"d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:43.575 "assigned_rate_limits": { 00:18:43.575 "rw_ios_per_sec": 0, 00:18:43.575 "rw_mbytes_per_sec": 0, 00:18:43.575 "r_mbytes_per_sec": 0, 00:18:43.575 "w_mbytes_per_sec": 0 00:18:43.575 }, 00:18:43.575 "claimed": true, 00:18:43.575 "claim_type": "exclusive_write", 00:18:43.575 "zoned": false, 00:18:43.575 "supported_io_types": { 00:18:43.575 "read": true, 00:18:43.575 "write": true, 00:18:43.575 "unmap": true, 00:18:43.575 "write_zeroes": true, 00:18:43.575 "flush": true, 00:18:43.575 "reset": true, 00:18:43.575 "compare": false, 00:18:43.575 "compare_and_write": false, 00:18:43.575 "abort": true, 00:18:43.575 "nvme_admin": false, 00:18:43.575 "nvme_io": false 00:18:43.575 }, 00:18:43.575 "memory_domains": [ 00:18:43.575 { 00:18:43.575 "dma_device_id": "system", 00:18:43.575 "dma_device_type": 1 00:18:43.575 }, 00:18:43.575 { 00:18:43.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.575 "dma_device_type": 2 00:18:43.575 } 00:18:43.575 ], 00:18:43.575 "driver_specific": { 00:18:43.575 "passthru": { 00:18:43.575 "name": "pt4", 00:18:43.575 "base_bdev_name": "malloc4" 00:18:43.575 } 00:18:43.575 } 00:18:43.575 }' 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:43.575 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:43.834 [2024-05-15 02:19:31.755021] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.834 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8f246071-1261-11ef-99fd-bfc7c66e2865 00:18:43.834 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8f246071-1261-11ef-99fd-bfc7c66e2865 ']' 00:18:43.834 02:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:44.093 [2024-05-15 02:19:32.010987] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.093 [2024-05-15 02:19:32.011017] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.093 [2024-05-15 02:19:32.011040] bdev_raid.c: 
453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.093 [2024-05-15 02:19:32.011055] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.093 [2024-05-15 02:19:32.011060] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bdb7900 name raid_bdev1, state offline 00:18:44.093 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.093 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:44.352 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:44.352 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:44.352 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.352 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:44.610 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.610 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:44.868 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.868 02:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:45.436 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:45.436 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:45.436 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:45.436 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:45.700 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:45.958 [2024-05-15 02:19:33.835116] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:45.958 [2024-05-15 02:19:33.835597] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:45.958 [2024-05-15 02:19:33.835610] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:45.958 [2024-05-15 02:19:33.835618] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:45.958 [2024-05-15 02:19:33.835631] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:45.958 [2024-05-15 02:19:33.835684] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:45.958 [2024-05-15 02:19:33.835695] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:45.958 [2024-05-15 02:19:33.835704] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:45.958 [2024-05-15 02:19:33.835712] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.958 [2024-05-15 02:19:33.835716] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bdb7680 name raid_bdev1, state configuring 00:18:45.958 request: 00:18:45.958 { 00:18:45.958 "name": "raid_bdev1", 00:18:45.958 "raid_level": "raid0", 00:18:45.958 "base_bdevs": [ 00:18:45.958 "malloc1", 00:18:45.958 "malloc2", 00:18:45.958 "malloc3", 00:18:45.958 "malloc4" 00:18:45.958 ], 00:18:45.958 "superblock": false, 00:18:45.958 "strip_size_kb": 64, 00:18:45.958 "method": "bdev_raid_create", 00:18:45.958 "req_id": 1 00:18:45.958 } 00:18:45.958 Got JSON-RPC error response 00:18:45.958 response: 00:18:45.958 { 00:18:45.958 "code": -17, 00:18:45.958 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:45.958 } 00:18:45.958 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:45.958 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:45.958 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:45.958 02:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:45.958 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.958 02:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 
00:18:46.216 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:46.216 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:46.216 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:46.475 [2024-05-15 02:19:34.355138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:46.475 [2024-05-15 02:19:34.355199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.475 [2024-05-15 02:19:34.355227] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb7180 00:18:46.475 [2024-05-15 02:19:34.355239] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.475 [2024-05-15 02:19:34.355839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.475 [2024-05-15 02:19:34.355870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:46.475 [2024-05-15 02:19:34.355903] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:46.475 [2024-05-15 02:19:34.355913] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:46.475 pt1 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.475 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.734 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.734 "name": "raid_bdev1", 00:18:46.734 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:46.734 "strip_size_kb": 64, 00:18:46.734 "state": "configuring", 00:18:46.734 "raid_level": "raid0", 00:18:46.734 "superblock": true, 00:18:46.734 "num_base_bdevs": 4, 00:18:46.734 "num_base_bdevs_discovered": 1, 00:18:46.734 "num_base_bdevs_operational": 4, 00:18:46.734 "base_bdevs_list": [ 00:18:46.734 { 00:18:46.734 "name": "pt1", 00:18:46.734 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:46.734 "is_configured": true, 00:18:46.734 "data_offset": 2048, 00:18:46.734 "data_size": 63488 00:18:46.734 }, 
00:18:46.734 { 00:18:46.734 "name": null, 00:18:46.734 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:46.734 "is_configured": false, 00:18:46.734 "data_offset": 2048, 00:18:46.734 "data_size": 63488 00:18:46.734 }, 00:18:46.734 { 00:18:46.734 "name": null, 00:18:46.734 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:46.734 "is_configured": false, 00:18:46.734 "data_offset": 2048, 00:18:46.734 "data_size": 63488 00:18:46.734 }, 00:18:46.734 { 00:18:46.734 "name": null, 00:18:46.734 "uuid": "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:46.734 "is_configured": false, 00:18:46.734 "data_offset": 2048, 00:18:46.734 "data_size": 63488 00:18:46.734 } 00:18:46.734 ] 00:18:46.734 }' 00:18:46.734 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.734 02:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.992 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:46.992 02:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:47.250 [2024-05-15 02:19:35.167220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:47.250 [2024-05-15 02:19:35.167282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.250 [2024-05-15 02:19:35.167312] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb6780 00:18:47.250 [2024-05-15 02:19:35.167321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.250 [2024-05-15 02:19:35.167422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.250 [2024-05-15 02:19:35.167432] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:47.250 [2024-05-15 02:19:35.167453] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:47.250 [2024-05-15 02:19:35.167461] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.250 pt2 00:18:47.250 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:47.509 [2024-05-15 02:19:35.451252] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.509 02:19:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.509 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.766 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.766 "name": "raid_bdev1", 00:18:47.766 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:47.766 "strip_size_kb": 64, 00:18:47.766 "state": "configuring", 00:18:47.766 "raid_level": "raid0", 00:18:47.766 "superblock": true, 00:18:47.766 "num_base_bdevs": 4, 00:18:47.766 "num_base_bdevs_discovered": 1, 00:18:47.766 "num_base_bdevs_operational": 4, 00:18:47.766 "base_bdevs_list": [ 00:18:47.766 { 00:18:47.766 "name": "pt1", 00:18:47.766 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:47.766 "is_configured": true, 00:18:47.766 "data_offset": 2048, 00:18:47.766 "data_size": 63488 00:18:47.766 }, 00:18:47.766 { 00:18:47.766 "name": null, 00:18:47.766 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:47.766 "is_configured": false, 00:18:47.766 "data_offset": 2048, 00:18:47.766 "data_size": 63488 00:18:47.766 }, 00:18:47.766 { 00:18:47.766 "name": null, 00:18:47.766 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:47.766 "is_configured": false, 00:18:47.766 "data_offset": 2048, 00:18:47.766 "data_size": 63488 00:18:47.766 }, 00:18:47.766 { 00:18:47.766 "name": null, 00:18:47.766 "uuid": "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:47.766 "is_configured": false, 00:18:47.766 "data_offset": 2048, 00:18:47.766 "data_size": 63488 00:18:47.766 } 00:18:47.766 ] 00:18:47.766 }' 00:18:47.766 02:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.766 02:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.023 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:48.023 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:48.023 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.589 [2024-05-15 02:19:36.327310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.589 [2024-05-15 02:19:36.327369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.589 [2024-05-15 02:19:36.327414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb6780 00:18:48.589 [2024-05-15 02:19:36.327423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.589 [2024-05-15 02:19:36.327522] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.589 [2024-05-15 02:19:36.327530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.589 [2024-05-15 02:19:36.327551] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:48.589 [2024-05-15 02:19:36.327559] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.589 pt2 00:18:48.589 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:48.589 02:19:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:48.589 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:48.589 [2024-05-15 02:19:36.575328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:48.589 [2024-05-15 02:19:36.575403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.589 [2024-05-15 02:19:36.575432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb7b80 00:18:48.589 [2024-05-15 02:19:36.575440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.589 [2024-05-15 02:19:36.575543] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.589 [2024-05-15 02:19:36.575552] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:48.589 [2024-05-15 02:19:36.575573] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:48.589 [2024-05-15 02:19:36.575581] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:48.589 pt3 00:18:48.589 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:48.589 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:48.589 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:48.848 [2024-05-15 02:19:36.863352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:48.848 [2024-05-15 02:19:36.863415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.848 [2024-05-15 02:19:36.863445] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bdb7900 00:18:48.848 [2024-05-15 02:19:36.863453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.848 [2024-05-15 02:19:36.863553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.848 [2024-05-15 02:19:36.863562] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:48.848 [2024-05-15 02:19:36.863583] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:48.848 [2024-05-15 02:19:36.863590] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:48.848 [2024-05-15 02:19:36.863619] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bdb6c80 00:18:48.848 [2024-05-15 02:19:36.863623] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:48.848 [2024-05-15 02:19:36.863644] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be19e20 00:18:48.848 [2024-05-15 02:19:36.863688] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bdb6c80 00:18:48.848 [2024-05-15 02:19:36.863691] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bdb6c80 00:18:48.848 [2024-05-15 02:19:36.863709] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.106 pt4 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.106 02:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.364 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.364 "name": "raid_bdev1", 00:18:49.364 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:49.364 "strip_size_kb": 64, 00:18:49.364 "state": "online", 00:18:49.364 "raid_level": "raid0", 00:18:49.364 "superblock": true, 00:18:49.364 "num_base_bdevs": 4, 00:18:49.364 "num_base_bdevs_discovered": 4, 00:18:49.364 "num_base_bdevs_operational": 4, 00:18:49.364 "base_bdevs_list": [ 00:18:49.364 { 00:18:49.364 "name": "pt1", 00:18:49.364 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:49.364 "is_configured": true, 00:18:49.364 "data_offset": 2048, 00:18:49.364 "data_size": 63488 00:18:49.364 }, 00:18:49.364 { 00:18:49.364 "name": "pt2", 00:18:49.364 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:49.364 "is_configured": true, 00:18:49.364 "data_offset": 2048, 00:18:49.364 "data_size": 63488 00:18:49.364 }, 00:18:49.364 { 00:18:49.364 "name": "pt3", 00:18:49.364 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:49.364 "is_configured": true, 00:18:49.364 "data_offset": 2048, 00:18:49.364 "data_size": 63488 00:18:49.364 }, 00:18:49.364 { 00:18:49.364 "name": "pt4", 00:18:49.364 "uuid": "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:49.364 "is_configured": true, 00:18:49.364 "data_offset": 2048, 00:18:49.364 "data_size": 63488 00:18:49.364 } 00:18:49.364 ] 00:18:49.364 }' 00:18:49.364 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.364 02:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.623 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:49.623 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:18:49.623 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:49.623 02:19:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:49.623 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:49.623 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:18:49.623 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:49.623 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:49.898 [2024-05-15 02:19:37.827486] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.898 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:49.898 "name": "raid_bdev1", 00:18:49.898 "aliases": [ 00:18:49.898 "8f246071-1261-11ef-99fd-bfc7c66e2865" 00:18:49.898 ], 00:18:49.898 "product_name": "Raid Volume", 00:18:49.898 "block_size": 512, 00:18:49.898 "num_blocks": 253952, 00:18:49.898 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:49.898 "assigned_rate_limits": { 00:18:49.898 "rw_ios_per_sec": 0, 00:18:49.898 "rw_mbytes_per_sec": 0, 00:18:49.898 "r_mbytes_per_sec": 0, 00:18:49.898 "w_mbytes_per_sec": 0 00:18:49.898 }, 00:18:49.898 "claimed": false, 00:18:49.898 "zoned": false, 00:18:49.898 "supported_io_types": { 00:18:49.898 "read": true, 00:18:49.898 "write": true, 00:18:49.898 "unmap": true, 00:18:49.898 "write_zeroes": true, 00:18:49.898 "flush": true, 00:18:49.898 "reset": true, 00:18:49.898 "compare": false, 00:18:49.898 "compare_and_write": false, 00:18:49.898 "abort": false, 00:18:49.898 "nvme_admin": false, 00:18:49.898 "nvme_io": false 00:18:49.898 }, 00:18:49.898 "memory_domains": [ 00:18:49.898 { 00:18:49.898 "dma_device_id": "system", 00:18:49.898 "dma_device_type": 1 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.898 "dma_device_type": 2 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "dma_device_id": "system", 00:18:49.898 "dma_device_type": 1 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.898 "dma_device_type": 2 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "dma_device_id": "system", 00:18:49.898 "dma_device_type": 1 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.898 "dma_device_type": 2 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "dma_device_id": "system", 00:18:49.898 "dma_device_type": 1 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.898 "dma_device_type": 2 00:18:49.898 } 00:18:49.898 ], 00:18:49.898 "driver_specific": { 00:18:49.898 "raid": { 00:18:49.898 "uuid": "8f246071-1261-11ef-99fd-bfc7c66e2865", 00:18:49.898 "strip_size_kb": 64, 00:18:49.898 "state": "online", 00:18:49.898 "raid_level": "raid0", 00:18:49.898 "superblock": true, 00:18:49.898 "num_base_bdevs": 4, 00:18:49.898 "num_base_bdevs_discovered": 4, 00:18:49.898 "num_base_bdevs_operational": 4, 00:18:49.898 "base_bdevs_list": [ 00:18:49.898 { 00:18:49.898 "name": "pt1", 00:18:49.898 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:49.898 "is_configured": true, 00:18:49.898 "data_offset": 2048, 00:18:49.898 "data_size": 63488 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "name": "pt2", 00:18:49.898 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:49.898 "is_configured": true, 00:18:49.898 "data_offset": 2048, 00:18:49.898 "data_size": 63488 00:18:49.898 }, 00:18:49.898 { 
00:18:49.898 "name": "pt3", 00:18:49.898 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:49.898 "is_configured": true, 00:18:49.898 "data_offset": 2048, 00:18:49.898 "data_size": 63488 00:18:49.898 }, 00:18:49.898 { 00:18:49.898 "name": "pt4", 00:18:49.898 "uuid": "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:49.898 "is_configured": true, 00:18:49.898 "data_offset": 2048, 00:18:49.898 "data_size": 63488 00:18:49.898 } 00:18:49.898 ] 00:18:49.898 } 00:18:49.898 } 00:18:49.898 }' 00:18:49.898 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.898 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:18:49.898 pt2 00:18:49.898 pt3 00:18:49.898 pt4' 00:18:49.898 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:49.898 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:49.898 02:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:50.156 "name": "pt1", 00:18:50.156 "aliases": [ 00:18:50.156 "10b96644-6ab1-0458-8220-3d27cbc72251" 00:18:50.156 ], 00:18:50.156 "product_name": "passthru", 00:18:50.156 "block_size": 512, 00:18:50.156 "num_blocks": 65536, 00:18:50.156 "uuid": "10b96644-6ab1-0458-8220-3d27cbc72251", 00:18:50.156 "assigned_rate_limits": { 00:18:50.156 "rw_ios_per_sec": 0, 00:18:50.156 "rw_mbytes_per_sec": 0, 00:18:50.156 "r_mbytes_per_sec": 0, 00:18:50.156 "w_mbytes_per_sec": 0 00:18:50.156 }, 00:18:50.156 "claimed": true, 00:18:50.156 "claim_type": "exclusive_write", 00:18:50.156 "zoned": false, 00:18:50.156 "supported_io_types": { 00:18:50.156 "read": true, 00:18:50.156 "write": true, 00:18:50.156 "unmap": true, 00:18:50.156 "write_zeroes": true, 00:18:50.156 "flush": true, 00:18:50.156 "reset": true, 00:18:50.156 "compare": false, 00:18:50.156 "compare_and_write": false, 00:18:50.156 "abort": true, 00:18:50.156 "nvme_admin": false, 00:18:50.156 "nvme_io": false 00:18:50.156 }, 00:18:50.156 "memory_domains": [ 00:18:50.156 { 00:18:50.156 "dma_device_id": "system", 00:18:50.156 "dma_device_type": 1 00:18:50.156 }, 00:18:50.156 { 00:18:50.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.156 "dma_device_type": 2 00:18:50.156 } 00:18:50.156 ], 00:18:50.156 "driver_specific": { 00:18:50.156 "passthru": { 00:18:50.156 "name": "pt1", 00:18:50.156 "base_bdev_name": "malloc1" 00:18:50.156 } 00:18:50.156 } 00:18:50.156 }' 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:50.156 02:19:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:50.156 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:50.723 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:50.723 "name": "pt2", 00:18:50.723 "aliases": [ 00:18:50.723 "21108149-f7b0-695c-a1ae-9c6651c8379b" 00:18:50.723 ], 00:18:50.723 "product_name": "passthru", 00:18:50.723 "block_size": 512, 00:18:50.723 "num_blocks": 65536, 00:18:50.723 "uuid": "21108149-f7b0-695c-a1ae-9c6651c8379b", 00:18:50.723 "assigned_rate_limits": { 00:18:50.723 "rw_ios_per_sec": 0, 00:18:50.723 "rw_mbytes_per_sec": 0, 00:18:50.723 "r_mbytes_per_sec": 0, 00:18:50.723 "w_mbytes_per_sec": 0 00:18:50.723 }, 00:18:50.723 "claimed": true, 00:18:50.723 "claim_type": "exclusive_write", 00:18:50.723 "zoned": false, 00:18:50.723 "supported_io_types": { 00:18:50.723 "read": true, 00:18:50.723 "write": true, 00:18:50.723 "unmap": true, 00:18:50.723 "write_zeroes": true, 00:18:50.723 "flush": true, 00:18:50.723 "reset": true, 00:18:50.723 "compare": false, 00:18:50.723 "compare_and_write": false, 00:18:50.723 "abort": true, 00:18:50.723 "nvme_admin": false, 00:18:50.723 "nvme_io": false 00:18:50.723 }, 00:18:50.723 "memory_domains": [ 00:18:50.723 { 00:18:50.723 "dma_device_id": "system", 00:18:50.723 "dma_device_type": 1 00:18:50.723 }, 00:18:50.723 { 00:18:50.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.723 "dma_device_type": 2 00:18:50.723 } 00:18:50.723 ], 00:18:50.723 "driver_specific": { 00:18:50.723 "passthru": { 00:18:50.723 "name": "pt2", 00:18:50.723 "base_bdev_name": "malloc2" 00:18:50.723 } 00:18:50.723 } 00:18:50.723 }' 00:18:50.723 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:50.723 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:50.724 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:50.982 "name": "pt3", 00:18:50.982 "aliases": [ 00:18:50.982 "a3ce9858-61ed-1157-a4ef-532d073bfb56" 00:18:50.982 ], 00:18:50.982 "product_name": "passthru", 00:18:50.982 "block_size": 512, 00:18:50.982 "num_blocks": 65536, 00:18:50.982 "uuid": "a3ce9858-61ed-1157-a4ef-532d073bfb56", 00:18:50.982 "assigned_rate_limits": { 00:18:50.982 "rw_ios_per_sec": 0, 00:18:50.982 "rw_mbytes_per_sec": 0, 00:18:50.982 "r_mbytes_per_sec": 0, 00:18:50.982 "w_mbytes_per_sec": 0 00:18:50.982 }, 00:18:50.982 "claimed": true, 00:18:50.982 "claim_type": "exclusive_write", 00:18:50.982 "zoned": false, 00:18:50.982 "supported_io_types": { 00:18:50.982 "read": true, 00:18:50.982 "write": true, 00:18:50.982 "unmap": true, 00:18:50.982 "write_zeroes": true, 00:18:50.982 "flush": true, 00:18:50.982 "reset": true, 00:18:50.982 "compare": false, 00:18:50.982 "compare_and_write": false, 00:18:50.982 "abort": true, 00:18:50.982 "nvme_admin": false, 00:18:50.982 "nvme_io": false 00:18:50.982 }, 00:18:50.982 "memory_domains": [ 00:18:50.982 { 00:18:50.982 "dma_device_id": "system", 00:18:50.982 "dma_device_type": 1 00:18:50.982 }, 00:18:50.982 { 00:18:50.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.982 "dma_device_type": 2 00:18:50.982 } 00:18:50.982 ], 00:18:50.982 "driver_specific": { 00:18:50.982 "passthru": { 00:18:50.982 "name": "pt3", 00:18:50.982 "base_bdev_name": "malloc3" 00:18:50.982 } 00:18:50.982 } 00:18:50.982 }' 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:50.982 02:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:51.240 "name": "pt4", 00:18:51.240 
"aliases": [ 00:18:51.240 "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1" 00:18:51.240 ], 00:18:51.240 "product_name": "passthru", 00:18:51.240 "block_size": 512, 00:18:51.240 "num_blocks": 65536, 00:18:51.240 "uuid": "d0d96eb2-b8b2-e759-8cc1-6b24b46952c1", 00:18:51.240 "assigned_rate_limits": { 00:18:51.240 "rw_ios_per_sec": 0, 00:18:51.240 "rw_mbytes_per_sec": 0, 00:18:51.240 "r_mbytes_per_sec": 0, 00:18:51.240 "w_mbytes_per_sec": 0 00:18:51.240 }, 00:18:51.240 "claimed": true, 00:18:51.240 "claim_type": "exclusive_write", 00:18:51.240 "zoned": false, 00:18:51.240 "supported_io_types": { 00:18:51.240 "read": true, 00:18:51.240 "write": true, 00:18:51.240 "unmap": true, 00:18:51.240 "write_zeroes": true, 00:18:51.240 "flush": true, 00:18:51.240 "reset": true, 00:18:51.240 "compare": false, 00:18:51.240 "compare_and_write": false, 00:18:51.240 "abort": true, 00:18:51.240 "nvme_admin": false, 00:18:51.240 "nvme_io": false 00:18:51.240 }, 00:18:51.240 "memory_domains": [ 00:18:51.240 { 00:18:51.240 "dma_device_id": "system", 00:18:51.240 "dma_device_type": 1 00:18:51.240 }, 00:18:51.240 { 00:18:51.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.240 "dma_device_type": 2 00:18:51.240 } 00:18:51.240 ], 00:18:51.240 "driver_specific": { 00:18:51.240 "passthru": { 00:18:51.240 "name": "pt4", 00:18:51.240 "base_bdev_name": "malloc4" 00:18:51.240 } 00:18:51.240 } 00:18:51.240 }' 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:51.240 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:51.241 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:51.241 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:51.499 [2024-05-15 02:19:39.443584] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8f246071-1261-11ef-99fd-bfc7c66e2865 '!=' 8f246071-1261-11ef-99fd-bfc7c66e2865 ']' 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 58887 
00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 58887 ']' 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 58887 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 58887 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:18:51.499 killing process with pid 58887 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58887' 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 58887 00:18:51.499 [2024-05-15 02:19:39.478604] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.499 [2024-05-15 02:19:39.478658] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.499 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 58887 00:18:51.499 [2024-05-15 02:19:39.478677] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.499 [2024-05-15 02:19:39.478682] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bdb6c80 name raid_bdev1, state offline 00:18:51.499 [2024-05-15 02:19:39.498740] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.759 02:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:51.759 00:18:51.759 real 0m13.601s 00:18:51.759 user 0m24.352s 00:18:51.759 sys 0m2.065s 00:18:51.759 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:51.759 02:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.759 ************************************ 00:18:51.759 END TEST raid_superblock_test 00:18:51.759 ************************************ 00:18:51.759 02:19:39 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:18:51.759 02:19:39 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:51.759 02:19:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:51.759 02:19:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:51.759 02:19:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.759 ************************************ 00:18:51.759 START TEST raid_state_function_test 00:18:51.759 ************************************ 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:18:51.759 02:19:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=59286 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:51.759 Process raid pid: 59286 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 59286' 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 59286 /var/tmp/spdk-raid.sock 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 59286 ']' 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:51.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:51.759 02:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.759 [2024-05-15 02:19:39.704076] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:51.759 [2024-05-15 02:19:39.704249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:52.325 EAL: TSC is not safe to use in SMP mode 00:18:52.325 EAL: TSC is not invariant 00:18:52.325 [2024-05-15 02:19:40.170456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.325 [2024-05-15 02:19:40.263943] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:52.325 [2024-05-15 02:19:40.266394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.325 [2024-05-15 02:19:40.267149] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.325 [2024-05-15 02:19:40.267162] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.891 02:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:52.891 02:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:18:52.891 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:53.149 [2024-05-15 02:19:40.922706] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:53.149 [2024-05-15 02:19:40.922766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:53.149 [2024-05-15 02:19:40.922771] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.149 [2024-05-15 02:19:40.922780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.149 [2024-05-15 02:19:40.922783] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:53.149 [2024-05-15 02:19:40.922790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:53.149 [2024-05-15 02:19:40.922794] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:53.149 [2024-05-15 02:19:40.922800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.149 
02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.149 02:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.408 02:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.408 "name": "Existed_Raid", 00:18:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.408 "strip_size_kb": 64, 00:18:53.408 "state": "configuring", 00:18:53.408 "raid_level": "concat", 00:18:53.408 "superblock": false, 00:18:53.408 "num_base_bdevs": 4, 00:18:53.408 "num_base_bdevs_discovered": 0, 00:18:53.408 "num_base_bdevs_operational": 4, 00:18:53.408 "base_bdevs_list": [ 00:18:53.408 { 00:18:53.408 "name": "BaseBdev1", 00:18:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.408 "is_configured": false, 00:18:53.408 "data_offset": 0, 00:18:53.408 "data_size": 0 00:18:53.408 }, 00:18:53.408 { 00:18:53.408 "name": "BaseBdev2", 00:18:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.408 "is_configured": false, 00:18:53.408 "data_offset": 0, 00:18:53.408 "data_size": 0 00:18:53.408 }, 00:18:53.408 { 00:18:53.408 "name": "BaseBdev3", 00:18:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.408 "is_configured": false, 00:18:53.408 "data_offset": 0, 00:18:53.408 "data_size": 0 00:18:53.408 }, 00:18:53.408 { 00:18:53.408 "name": "BaseBdev4", 00:18:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.408 "is_configured": false, 00:18:53.408 "data_offset": 0, 00:18:53.408 "data_size": 0 00:18:53.408 } 00:18:53.408 ] 00:18:53.408 }' 00:18:53.408 02:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.408 02:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 02:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:53.925 [2024-05-15 02:19:41.886747] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:53.925 [2024-05-15 02:19:41.886780] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa05500 name Existed_Raid, state configuring 00:18:53.925 02:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:54.184 [2024-05-15 02:19:42.122760] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
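verify_raid_bdev_state, whose locals and RPC call appear in the trace above, boils down to fetching the raid bdev's JSON and comparing a handful of fields. A minimal sketch, assuming the same RPC socket and the expected "configuring" state from this stage of the test:

raid_bdev_info=$(/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# the raid bdev reports "configuring" until all four base bdevs have been discovered
[[ $(jq -r .state <<< "$raid_bdev_info") == configuring ]]
[[ $(jq -r .raid_level <<< "$raid_bdev_info") == concat ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") == 4 ]]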
00:18:54.184 [2024-05-15 02:19:42.122820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.184 [2024-05-15 02:19:42.122831] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.184 [2024-05-15 02:19:42.122844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.184 [2024-05-15 02:19:42.122849] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:54.184 [2024-05-15 02:19:42.122859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:54.184 [2024-05-15 02:19:42.122864] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:54.184 [2024-05-15 02:19:42.122875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:54.184 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:54.442 [2024-05-15 02:19:42.427752] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.442 BaseBdev1 00:18:54.442 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:18:54.442 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:54.442 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:54.442 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:54.442 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:54.442 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:54.442 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:54.700 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:54.959 [ 00:18:54.959 { 00:18:54.959 "name": "BaseBdev1", 00:18:54.959 "aliases": [ 00:18:54.959 "96e19315-1261-11ef-99fd-bfc7c66e2865" 00:18:54.959 ], 00:18:54.959 "product_name": "Malloc disk", 00:18:54.959 "block_size": 512, 00:18:54.959 "num_blocks": 65536, 00:18:54.959 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:18:54.959 "assigned_rate_limits": { 00:18:54.959 "rw_ios_per_sec": 0, 00:18:54.959 "rw_mbytes_per_sec": 0, 00:18:54.959 "r_mbytes_per_sec": 0, 00:18:54.959 "w_mbytes_per_sec": 0 00:18:54.959 }, 00:18:54.959 "claimed": true, 00:18:54.959 "claim_type": "exclusive_write", 00:18:54.959 "zoned": false, 00:18:54.959 "supported_io_types": { 00:18:54.959 "read": true, 00:18:54.959 "write": true, 00:18:54.959 "unmap": true, 00:18:54.959 "write_zeroes": true, 00:18:54.959 "flush": true, 00:18:54.959 "reset": true, 00:18:54.959 "compare": false, 00:18:54.959 "compare_and_write": false, 00:18:54.959 "abort": true, 00:18:54.959 "nvme_admin": false, 00:18:54.959 "nvme_io": false 00:18:54.959 }, 00:18:54.959 "memory_domains": [ 00:18:54.959 { 00:18:54.959 "dma_device_id": "system", 00:18:54.959 "dma_device_type": 1 00:18:54.959 }, 00:18:54.959 { 00:18:54.959 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:54.959 "dma_device_type": 2 00:18:54.959 } 00:18:54.959 ], 00:18:54.959 "driver_specific": {} 00:18:54.959 } 00:18:54.959 ] 00:18:55.217 02:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:55.217 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:55.217 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.217 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:55.217 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:55.217 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:55.218 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:55.218 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.218 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.218 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.218 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.218 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.218 02:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.476 02:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.476 "name": "Existed_Raid", 00:18:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.476 "strip_size_kb": 64, 00:18:55.476 "state": "configuring", 00:18:55.476 "raid_level": "concat", 00:18:55.476 "superblock": false, 00:18:55.476 "num_base_bdevs": 4, 00:18:55.476 "num_base_bdevs_discovered": 1, 00:18:55.476 "num_base_bdevs_operational": 4, 00:18:55.476 "base_bdevs_list": [ 00:18:55.476 { 00:18:55.476 "name": "BaseBdev1", 00:18:55.476 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:18:55.476 "is_configured": true, 00:18:55.476 "data_offset": 0, 00:18:55.476 "data_size": 65536 00:18:55.476 }, 00:18:55.476 { 00:18:55.476 "name": "BaseBdev2", 00:18:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.476 "is_configured": false, 00:18:55.476 "data_offset": 0, 00:18:55.476 "data_size": 0 00:18:55.476 }, 00:18:55.476 { 00:18:55.476 "name": "BaseBdev3", 00:18:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.476 "is_configured": false, 00:18:55.476 "data_offset": 0, 00:18:55.476 "data_size": 0 00:18:55.476 }, 00:18:55.476 { 00:18:55.476 "name": "BaseBdev4", 00:18:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.476 "is_configured": false, 00:18:55.476 "data_offset": 0, 00:18:55.476 "data_size": 0 00:18:55.476 } 00:18:55.476 ] 00:18:55.476 }' 00:18:55.476 02:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.476 02:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.733 02:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:18:55.992 [2024-05-15 02:19:43.974899] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.992 [2024-05-15 02:19:43.974940] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa05500 name Existed_Raid, state configuring 00:18:55.992 02:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:56.250 [2024-05-15 02:19:44.234928] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:56.250 [2024-05-15 02:19:44.235687] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.250 [2024-05-15 02:19:44.235746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.250 [2024-05-15 02:19:44.235751] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:56.250 [2024-05-15 02:19:44.235760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:56.250 [2024-05-15 02:19:44.235764] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:56.250 [2024-05-15 02:19:44.235771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:56.250 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:18:56.250 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:56.250 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:56.250 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.250 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:56.250 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.251 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.818 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.818 "name": "Existed_Raid", 00:18:56.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.818 "strip_size_kb": 64, 00:18:56.818 "state": "configuring", 00:18:56.818 "raid_level": "concat", 00:18:56.818 "superblock": false, 00:18:56.818 "num_base_bdevs": 4, 00:18:56.818 
"num_base_bdevs_discovered": 1, 00:18:56.818 "num_base_bdevs_operational": 4, 00:18:56.818 "base_bdevs_list": [ 00:18:56.818 { 00:18:56.818 "name": "BaseBdev1", 00:18:56.818 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:18:56.818 "is_configured": true, 00:18:56.818 "data_offset": 0, 00:18:56.818 "data_size": 65536 00:18:56.818 }, 00:18:56.818 { 00:18:56.818 "name": "BaseBdev2", 00:18:56.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.818 "is_configured": false, 00:18:56.818 "data_offset": 0, 00:18:56.818 "data_size": 0 00:18:56.818 }, 00:18:56.818 { 00:18:56.818 "name": "BaseBdev3", 00:18:56.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.818 "is_configured": false, 00:18:56.818 "data_offset": 0, 00:18:56.818 "data_size": 0 00:18:56.818 }, 00:18:56.818 { 00:18:56.818 "name": "BaseBdev4", 00:18:56.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.818 "is_configured": false, 00:18:56.818 "data_offset": 0, 00:18:56.818 "data_size": 0 00:18:56.818 } 00:18:56.818 ] 00:18:56.818 }' 00:18:56.818 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.818 02:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.076 02:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:57.355 [2024-05-15 02:19:45.123108] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.355 BaseBdev2 00:18:57.355 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:18:57.355 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:57.355 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:57.355 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:57.355 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:57.355 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:57.355 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:57.614 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:57.872 [ 00:18:57.872 { 00:18:57.872 "name": "BaseBdev2", 00:18:57.872 "aliases": [ 00:18:57.872 "987cfa84-1261-11ef-99fd-bfc7c66e2865" 00:18:57.872 ], 00:18:57.872 "product_name": "Malloc disk", 00:18:57.872 "block_size": 512, 00:18:57.872 "num_blocks": 65536, 00:18:57.872 "uuid": "987cfa84-1261-11ef-99fd-bfc7c66e2865", 00:18:57.872 "assigned_rate_limits": { 00:18:57.872 "rw_ios_per_sec": 0, 00:18:57.872 "rw_mbytes_per_sec": 0, 00:18:57.872 "r_mbytes_per_sec": 0, 00:18:57.872 "w_mbytes_per_sec": 0 00:18:57.872 }, 00:18:57.872 "claimed": true, 00:18:57.872 "claim_type": "exclusive_write", 00:18:57.872 "zoned": false, 00:18:57.872 "supported_io_types": { 00:18:57.872 "read": true, 00:18:57.872 "write": true, 00:18:57.872 "unmap": true, 00:18:57.872 "write_zeroes": true, 00:18:57.872 "flush": true, 00:18:57.872 "reset": true, 00:18:57.872 "compare": false, 00:18:57.872 
"compare_and_write": false, 00:18:57.872 "abort": true, 00:18:57.872 "nvme_admin": false, 00:18:57.872 "nvme_io": false 00:18:57.872 }, 00:18:57.872 "memory_domains": [ 00:18:57.872 { 00:18:57.872 "dma_device_id": "system", 00:18:57.872 "dma_device_type": 1 00:18:57.872 }, 00:18:57.872 { 00:18:57.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.872 "dma_device_type": 2 00:18:57.872 } 00:18:57.872 ], 00:18:57.872 "driver_specific": {} 00:18:57.872 } 00:18:57.872 ] 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.872 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.129 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.129 "name": "Existed_Raid", 00:18:58.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.129 "strip_size_kb": 64, 00:18:58.129 "state": "configuring", 00:18:58.129 "raid_level": "concat", 00:18:58.129 "superblock": false, 00:18:58.129 "num_base_bdevs": 4, 00:18:58.129 "num_base_bdevs_discovered": 2, 00:18:58.129 "num_base_bdevs_operational": 4, 00:18:58.129 "base_bdevs_list": [ 00:18:58.129 { 00:18:58.129 "name": "BaseBdev1", 00:18:58.129 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:18:58.129 "is_configured": true, 00:18:58.129 "data_offset": 0, 00:18:58.129 "data_size": 65536 00:18:58.129 }, 00:18:58.129 { 00:18:58.129 "name": "BaseBdev2", 00:18:58.129 "uuid": "987cfa84-1261-11ef-99fd-bfc7c66e2865", 00:18:58.129 "is_configured": true, 00:18:58.129 "data_offset": 0, 00:18:58.129 "data_size": 65536 00:18:58.129 }, 00:18:58.129 { 00:18:58.129 "name": "BaseBdev3", 00:18:58.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.129 "is_configured": false, 00:18:58.129 "data_offset": 0, 00:18:58.130 "data_size": 0 00:18:58.130 }, 00:18:58.130 { 00:18:58.130 "name": "BaseBdev4", 00:18:58.130 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:58.130 "is_configured": false, 00:18:58.130 "data_offset": 0, 00:18:58.130 "data_size": 0 00:18:58.130 } 00:18:58.130 ] 00:18:58.130 }' 00:18:58.130 02:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.130 02:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.388 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:58.652 [2024-05-15 02:19:46.563203] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.652 BaseBdev3 00:18:58.652 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:18:58.652 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:58.652 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:58.652 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:58.652 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:58.652 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:58.652 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.911 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:59.169 [ 00:18:59.169 { 00:18:59.169 "name": "BaseBdev3", 00:18:59.169 "aliases": [ 00:18:59.169 "9958b90e-1261-11ef-99fd-bfc7c66e2865" 00:18:59.169 ], 00:18:59.169 "product_name": "Malloc disk", 00:18:59.169 "block_size": 512, 00:18:59.169 "num_blocks": 65536, 00:18:59.169 "uuid": "9958b90e-1261-11ef-99fd-bfc7c66e2865", 00:18:59.169 "assigned_rate_limits": { 00:18:59.169 "rw_ios_per_sec": 0, 00:18:59.169 "rw_mbytes_per_sec": 0, 00:18:59.169 "r_mbytes_per_sec": 0, 00:18:59.169 "w_mbytes_per_sec": 0 00:18:59.169 }, 00:18:59.169 "claimed": true, 00:18:59.169 "claim_type": "exclusive_write", 00:18:59.169 "zoned": false, 00:18:59.169 "supported_io_types": { 00:18:59.169 "read": true, 00:18:59.169 "write": true, 00:18:59.169 "unmap": true, 00:18:59.169 "write_zeroes": true, 00:18:59.169 "flush": true, 00:18:59.169 "reset": true, 00:18:59.169 "compare": false, 00:18:59.169 "compare_and_write": false, 00:18:59.169 "abort": true, 00:18:59.169 "nvme_admin": false, 00:18:59.169 "nvme_io": false 00:18:59.169 }, 00:18:59.169 "memory_domains": [ 00:18:59.169 { 00:18:59.169 "dma_device_id": "system", 00:18:59.169 "dma_device_type": 1 00:18:59.169 }, 00:18:59.169 { 00:18:59.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.169 "dma_device_type": 2 00:18:59.169 } 00:18:59.169 ], 00:18:59.169 "driver_specific": {} 00:18:59.169 } 00:18:59.169 ] 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.169 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.428 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.428 "name": "Existed_Raid", 00:18:59.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.428 "strip_size_kb": 64, 00:18:59.428 "state": "configuring", 00:18:59.428 "raid_level": "concat", 00:18:59.428 "superblock": false, 00:18:59.428 "num_base_bdevs": 4, 00:18:59.428 "num_base_bdevs_discovered": 3, 00:18:59.428 "num_base_bdevs_operational": 4, 00:18:59.428 "base_bdevs_list": [ 00:18:59.428 { 00:18:59.428 "name": "BaseBdev1", 00:18:59.428 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:18:59.428 "is_configured": true, 00:18:59.428 "data_offset": 0, 00:18:59.428 "data_size": 65536 00:18:59.428 }, 00:18:59.428 { 00:18:59.428 "name": "BaseBdev2", 00:18:59.428 "uuid": "987cfa84-1261-11ef-99fd-bfc7c66e2865", 00:18:59.428 "is_configured": true, 00:18:59.428 "data_offset": 0, 00:18:59.428 "data_size": 65536 00:18:59.428 }, 00:18:59.428 { 00:18:59.428 "name": "BaseBdev3", 00:18:59.428 "uuid": "9958b90e-1261-11ef-99fd-bfc7c66e2865", 00:18:59.428 "is_configured": true, 00:18:59.428 "data_offset": 0, 00:18:59.428 "data_size": 65536 00:18:59.428 }, 00:18:59.428 { 00:18:59.428 "name": "BaseBdev4", 00:18:59.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.428 "is_configured": false, 00:18:59.428 "data_offset": 0, 00:18:59.428 "data_size": 0 00:18:59.428 } 00:18:59.428 ] 00:18:59.428 }' 00:18:59.428 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.428 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:59.993 [2024-05-15 02:19:47.943267] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:59.993 [2024-05-15 02:19:47.943305] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa05a00 00:18:59.993 [2024-05-15 02:19:47.943310] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 262144, blocklen 512 00:18:59.993 [2024-05-15 02:19:47.943340] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa68ec0 00:18:59.993 [2024-05-15 02:19:47.943428] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa05a00 00:18:59.993 [2024-05-15 02:19:47.943432] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82aa05a00 00:18:59.993 [2024-05-15 02:19:47.943463] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.993 BaseBdev4 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:59.993 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.252 02:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:00.510 [ 00:19:00.510 { 00:19:00.510 "name": "BaseBdev4", 00:19:00.510 "aliases": [ 00:19:00.510 "9a2b4dc9-1261-11ef-99fd-bfc7c66e2865" 00:19:00.510 ], 00:19:00.510 "product_name": "Malloc disk", 00:19:00.510 "block_size": 512, 00:19:00.510 "num_blocks": 65536, 00:19:00.510 "uuid": "9a2b4dc9-1261-11ef-99fd-bfc7c66e2865", 00:19:00.510 "assigned_rate_limits": { 00:19:00.510 "rw_ios_per_sec": 0, 00:19:00.510 "rw_mbytes_per_sec": 0, 00:19:00.510 "r_mbytes_per_sec": 0, 00:19:00.510 "w_mbytes_per_sec": 0 00:19:00.510 }, 00:19:00.510 "claimed": true, 00:19:00.510 "claim_type": "exclusive_write", 00:19:00.510 "zoned": false, 00:19:00.510 "supported_io_types": { 00:19:00.510 "read": true, 00:19:00.510 "write": true, 00:19:00.510 "unmap": true, 00:19:00.510 "write_zeroes": true, 00:19:00.510 "flush": true, 00:19:00.510 "reset": true, 00:19:00.510 "compare": false, 00:19:00.510 "compare_and_write": false, 00:19:00.510 "abort": true, 00:19:00.510 "nvme_admin": false, 00:19:00.510 "nvme_io": false 00:19:00.510 }, 00:19:00.510 "memory_domains": [ 00:19:00.510 { 00:19:00.510 "dma_device_id": "system", 00:19:00.510 "dma_device_type": 1 00:19:00.510 }, 00:19:00.510 { 00:19:00.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.510 "dma_device_type": 2 00:19:00.510 } 00:19:00.510 ], 00:19:00.510 "driver_specific": {} 00:19:00.510 } 00:19:00.510 ] 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.510 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.076 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.076 "name": "Existed_Raid", 00:19:01.076 "uuid": "9a2b5420-1261-11ef-99fd-bfc7c66e2865", 00:19:01.076 "strip_size_kb": 64, 00:19:01.076 "state": "online", 00:19:01.076 "raid_level": "concat", 00:19:01.076 "superblock": false, 00:19:01.076 "num_base_bdevs": 4, 00:19:01.076 "num_base_bdevs_discovered": 4, 00:19:01.076 "num_base_bdevs_operational": 4, 00:19:01.076 "base_bdevs_list": [ 00:19:01.076 { 00:19:01.076 "name": "BaseBdev1", 00:19:01.076 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:19:01.076 "is_configured": true, 00:19:01.076 "data_offset": 0, 00:19:01.076 "data_size": 65536 00:19:01.076 }, 00:19:01.076 { 00:19:01.076 "name": "BaseBdev2", 00:19:01.076 "uuid": "987cfa84-1261-11ef-99fd-bfc7c66e2865", 00:19:01.076 "is_configured": true, 00:19:01.076 "data_offset": 0, 00:19:01.076 "data_size": 65536 00:19:01.076 }, 00:19:01.076 { 00:19:01.076 "name": "BaseBdev3", 00:19:01.076 "uuid": "9958b90e-1261-11ef-99fd-bfc7c66e2865", 00:19:01.076 "is_configured": true, 00:19:01.076 "data_offset": 0, 00:19:01.076 "data_size": 65536 00:19:01.076 }, 00:19:01.076 { 00:19:01.076 "name": "BaseBdev4", 00:19:01.076 "uuid": "9a2b4dc9-1261-11ef-99fd-bfc7c66e2865", 00:19:01.076 "is_configured": true, 00:19:01.076 "data_offset": 0, 00:19:01.076 "data_size": 65536 00:19:01.076 } 00:19:01.076 ] 00:19:01.076 }' 00:19:01.076 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.076 02:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.334 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:19:01.334 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:01.334 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:01.334 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:01.334 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:01.334 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:01.334 
02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:01.334 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:01.592 [2024-05-15 02:19:49.479303] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.592 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:01.592 "name": "Existed_Raid", 00:19:01.592 "aliases": [ 00:19:01.592 "9a2b5420-1261-11ef-99fd-bfc7c66e2865" 00:19:01.592 ], 00:19:01.592 "product_name": "Raid Volume", 00:19:01.592 "block_size": 512, 00:19:01.592 "num_blocks": 262144, 00:19:01.592 "uuid": "9a2b5420-1261-11ef-99fd-bfc7c66e2865", 00:19:01.592 "assigned_rate_limits": { 00:19:01.592 "rw_ios_per_sec": 0, 00:19:01.592 "rw_mbytes_per_sec": 0, 00:19:01.592 "r_mbytes_per_sec": 0, 00:19:01.592 "w_mbytes_per_sec": 0 00:19:01.592 }, 00:19:01.592 "claimed": false, 00:19:01.592 "zoned": false, 00:19:01.592 "supported_io_types": { 00:19:01.592 "read": true, 00:19:01.592 "write": true, 00:19:01.592 "unmap": true, 00:19:01.592 "write_zeroes": true, 00:19:01.592 "flush": true, 00:19:01.592 "reset": true, 00:19:01.592 "compare": false, 00:19:01.592 "compare_and_write": false, 00:19:01.592 "abort": false, 00:19:01.592 "nvme_admin": false, 00:19:01.592 "nvme_io": false 00:19:01.592 }, 00:19:01.592 "memory_domains": [ 00:19:01.592 { 00:19:01.592 "dma_device_id": "system", 00:19:01.592 "dma_device_type": 1 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.592 "dma_device_type": 2 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "dma_device_id": "system", 00:19:01.592 "dma_device_type": 1 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.592 "dma_device_type": 2 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "dma_device_id": "system", 00:19:01.592 "dma_device_type": 1 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.592 "dma_device_type": 2 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "dma_device_id": "system", 00:19:01.592 "dma_device_type": 1 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.592 "dma_device_type": 2 00:19:01.592 } 00:19:01.592 ], 00:19:01.592 "driver_specific": { 00:19:01.592 "raid": { 00:19:01.592 "uuid": "9a2b5420-1261-11ef-99fd-bfc7c66e2865", 00:19:01.592 "strip_size_kb": 64, 00:19:01.592 "state": "online", 00:19:01.592 "raid_level": "concat", 00:19:01.592 "superblock": false, 00:19:01.592 "num_base_bdevs": 4, 00:19:01.592 "num_base_bdevs_discovered": 4, 00:19:01.592 "num_base_bdevs_operational": 4, 00:19:01.592 "base_bdevs_list": [ 00:19:01.592 { 00:19:01.592 "name": "BaseBdev1", 00:19:01.592 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:19:01.592 "is_configured": true, 00:19:01.592 "data_offset": 0, 00:19:01.592 "data_size": 65536 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "name": "BaseBdev2", 00:19:01.592 "uuid": "987cfa84-1261-11ef-99fd-bfc7c66e2865", 00:19:01.592 "is_configured": true, 00:19:01.592 "data_offset": 0, 00:19:01.592 "data_size": 65536 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "name": "BaseBdev3", 00:19:01.592 "uuid": "9958b90e-1261-11ef-99fd-bfc7c66e2865", 00:19:01.592 "is_configured": true, 00:19:01.592 "data_offset": 0, 00:19:01.592 "data_size": 65536 00:19:01.592 }, 00:19:01.592 { 00:19:01.592 "name": "BaseBdev4", 00:19:01.592 
"uuid": "9a2b4dc9-1261-11ef-99fd-bfc7c66e2865", 00:19:01.592 "is_configured": true, 00:19:01.592 "data_offset": 0, 00:19:01.592 "data_size": 65536 00:19:01.592 } 00:19:01.592 ] 00:19:01.592 } 00:19:01.592 } 00:19:01.592 }' 00:19:01.592 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.592 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:19:01.592 BaseBdev2 00:19:01.592 BaseBdev3 00:19:01.592 BaseBdev4' 00:19:01.592 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:01.592 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:01.592 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:01.850 "name": "BaseBdev1", 00:19:01.850 "aliases": [ 00:19:01.850 "96e19315-1261-11ef-99fd-bfc7c66e2865" 00:19:01.850 ], 00:19:01.850 "product_name": "Malloc disk", 00:19:01.850 "block_size": 512, 00:19:01.850 "num_blocks": 65536, 00:19:01.850 "uuid": "96e19315-1261-11ef-99fd-bfc7c66e2865", 00:19:01.850 "assigned_rate_limits": { 00:19:01.850 "rw_ios_per_sec": 0, 00:19:01.850 "rw_mbytes_per_sec": 0, 00:19:01.850 "r_mbytes_per_sec": 0, 00:19:01.850 "w_mbytes_per_sec": 0 00:19:01.850 }, 00:19:01.850 "claimed": true, 00:19:01.850 "claim_type": "exclusive_write", 00:19:01.850 "zoned": false, 00:19:01.850 "supported_io_types": { 00:19:01.850 "read": true, 00:19:01.850 "write": true, 00:19:01.850 "unmap": true, 00:19:01.850 "write_zeroes": true, 00:19:01.850 "flush": true, 00:19:01.850 "reset": true, 00:19:01.850 "compare": false, 00:19:01.850 "compare_and_write": false, 00:19:01.850 "abort": true, 00:19:01.850 "nvme_admin": false, 00:19:01.850 "nvme_io": false 00:19:01.850 }, 00:19:01.850 "memory_domains": [ 00:19:01.850 { 00:19:01.850 "dma_device_id": "system", 00:19:01.850 "dma_device_type": 1 00:19:01.850 }, 00:19:01.850 { 00:19:01.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.850 "dma_device_type": 2 00:19:01.850 } 00:19:01.850 ], 00:19:01.850 "driver_specific": {} 00:19:01.850 }' 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:01.850 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:02.108 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:02.108 02:19:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:02.108 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:02.108 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:02.108 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:02.482 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:02.482 "name": "BaseBdev2", 00:19:02.482 "aliases": [ 00:19:02.482 "987cfa84-1261-11ef-99fd-bfc7c66e2865" 00:19:02.482 ], 00:19:02.482 "product_name": "Malloc disk", 00:19:02.482 "block_size": 512, 00:19:02.482 "num_blocks": 65536, 00:19:02.482 "uuid": "987cfa84-1261-11ef-99fd-bfc7c66e2865", 00:19:02.482 "assigned_rate_limits": { 00:19:02.482 "rw_ios_per_sec": 0, 00:19:02.482 "rw_mbytes_per_sec": 0, 00:19:02.482 "r_mbytes_per_sec": 0, 00:19:02.482 "w_mbytes_per_sec": 0 00:19:02.482 }, 00:19:02.482 "claimed": true, 00:19:02.482 "claim_type": "exclusive_write", 00:19:02.482 "zoned": false, 00:19:02.482 "supported_io_types": { 00:19:02.482 "read": true, 00:19:02.482 "write": true, 00:19:02.483 "unmap": true, 00:19:02.483 "write_zeroes": true, 00:19:02.483 "flush": true, 00:19:02.483 "reset": true, 00:19:02.483 "compare": false, 00:19:02.483 "compare_and_write": false, 00:19:02.483 "abort": true, 00:19:02.483 "nvme_admin": false, 00:19:02.483 "nvme_io": false 00:19:02.483 }, 00:19:02.483 "memory_domains": [ 00:19:02.483 { 00:19:02.483 "dma_device_id": "system", 00:19:02.483 "dma_device_type": 1 00:19:02.483 }, 00:19:02.483 { 00:19:02.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.483 "dma_device_type": 2 00:19:02.483 } 00:19:02.483 ], 00:19:02.483 "driver_specific": {} 00:19:02.483 }' 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 
00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:02.483 "name": "BaseBdev3", 00:19:02.483 "aliases": [ 00:19:02.483 "9958b90e-1261-11ef-99fd-bfc7c66e2865" 00:19:02.483 ], 00:19:02.483 "product_name": "Malloc disk", 00:19:02.483 "block_size": 512, 00:19:02.483 "num_blocks": 65536, 00:19:02.483 "uuid": "9958b90e-1261-11ef-99fd-bfc7c66e2865", 00:19:02.483 "assigned_rate_limits": { 00:19:02.483 "rw_ios_per_sec": 0, 00:19:02.483 "rw_mbytes_per_sec": 0, 00:19:02.483 "r_mbytes_per_sec": 0, 00:19:02.483 "w_mbytes_per_sec": 0 00:19:02.483 }, 00:19:02.483 "claimed": true, 00:19:02.483 "claim_type": "exclusive_write", 00:19:02.483 "zoned": false, 00:19:02.483 "supported_io_types": { 00:19:02.483 "read": true, 00:19:02.483 "write": true, 00:19:02.483 "unmap": true, 00:19:02.483 "write_zeroes": true, 00:19:02.483 "flush": true, 00:19:02.483 "reset": true, 00:19:02.483 "compare": false, 00:19:02.483 "compare_and_write": false, 00:19:02.483 "abort": true, 00:19:02.483 "nvme_admin": false, 00:19:02.483 "nvme_io": false 00:19:02.483 }, 00:19:02.483 "memory_domains": [ 00:19:02.483 { 00:19:02.483 "dma_device_id": "system", 00:19:02.483 "dma_device_type": 1 00:19:02.483 }, 00:19:02.483 { 00:19:02.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.483 "dma_device_type": 2 00:19:02.483 } 00:19:02.483 ], 00:19:02.483 "driver_specific": {} 00:19:02.483 }' 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:02.483 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:02.742 "name": "BaseBdev4", 00:19:02.742 "aliases": [ 00:19:02.742 "9a2b4dc9-1261-11ef-99fd-bfc7c66e2865" 00:19:02.742 ], 00:19:02.742 "product_name": "Malloc disk", 00:19:02.742 "block_size": 512, 00:19:02.742 "num_blocks": 65536, 00:19:02.742 "uuid": "9a2b4dc9-1261-11ef-99fd-bfc7c66e2865", 00:19:02.742 "assigned_rate_limits": { 00:19:02.742 "rw_ios_per_sec": 0, 00:19:02.742 
"rw_mbytes_per_sec": 0, 00:19:02.742 "r_mbytes_per_sec": 0, 00:19:02.742 "w_mbytes_per_sec": 0 00:19:02.742 }, 00:19:02.742 "claimed": true, 00:19:02.742 "claim_type": "exclusive_write", 00:19:02.742 "zoned": false, 00:19:02.742 "supported_io_types": { 00:19:02.742 "read": true, 00:19:02.742 "write": true, 00:19:02.742 "unmap": true, 00:19:02.742 "write_zeroes": true, 00:19:02.742 "flush": true, 00:19:02.742 "reset": true, 00:19:02.742 "compare": false, 00:19:02.742 "compare_and_write": false, 00:19:02.742 "abort": true, 00:19:02.742 "nvme_admin": false, 00:19:02.742 "nvme_io": false 00:19:02.742 }, 00:19:02.742 "memory_domains": [ 00:19:02.742 { 00:19:02.742 "dma_device_id": "system", 00:19:02.742 "dma_device_type": 1 00:19:02.742 }, 00:19:02.742 { 00:19:02.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.742 "dma_device_type": 2 00:19:02.742 } 00:19:02.742 ], 00:19:02.742 "driver_specific": {} 00:19:02.742 }' 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:02.742 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:03.000 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:03.000 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:03.000 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:03.000 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:03.000 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:03.258 [2024-05-15 02:19:51.023383] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:03.258 [2024-05-15 02:19:51.023417] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.258 [2024-05-15 02:19:51.023458] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.258 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.515 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.515 "name": "Existed_Raid", 00:19:03.515 "uuid": "9a2b5420-1261-11ef-99fd-bfc7c66e2865", 00:19:03.515 "strip_size_kb": 64, 00:19:03.515 "state": "offline", 00:19:03.515 "raid_level": "concat", 00:19:03.515 "superblock": false, 00:19:03.515 "num_base_bdevs": 4, 00:19:03.515 "num_base_bdevs_discovered": 3, 00:19:03.515 "num_base_bdevs_operational": 3, 00:19:03.515 "base_bdevs_list": [ 00:19:03.515 { 00:19:03.515 "name": null, 00:19:03.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.515 "is_configured": false, 00:19:03.515 "data_offset": 0, 00:19:03.515 "data_size": 65536 00:19:03.515 }, 00:19:03.515 { 00:19:03.515 "name": "BaseBdev2", 00:19:03.515 "uuid": "987cfa84-1261-11ef-99fd-bfc7c66e2865", 00:19:03.515 "is_configured": true, 00:19:03.515 "data_offset": 0, 00:19:03.515 "data_size": 65536 00:19:03.515 }, 00:19:03.515 { 00:19:03.515 "name": "BaseBdev3", 00:19:03.515 "uuid": "9958b90e-1261-11ef-99fd-bfc7c66e2865", 00:19:03.515 "is_configured": true, 00:19:03.515 "data_offset": 0, 00:19:03.515 "data_size": 65536 00:19:03.515 }, 00:19:03.515 { 00:19:03.515 "name": "BaseBdev4", 00:19:03.515 "uuid": "9a2b4dc9-1261-11ef-99fd-bfc7c66e2865", 00:19:03.515 "is_configured": true, 00:19:03.515 "data_offset": 0, 00:19:03.515 "data_size": 65536 00:19:03.515 } 00:19:03.515 ] 00:19:03.515 }' 00:19:03.515 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.515 02:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.774 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:03.774 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:03.774 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:03.774 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.031 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:04.031 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:04.031 02:19:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:04.288 [2024-05-15 02:19:52.116419] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:04.288 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:04.288 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:04.288 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:04.288 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.546 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:04.546 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:04.547 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:04.804 [2024-05-15 02:19:52.657331] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:04.804 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:04.804 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:04.804 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.804 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:05.061 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:05.062 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:05.062 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:05.366 [2024-05-15 02:19:53.226234] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:05.366 [2024-05-15 02:19:53.226276] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa05a00 name Existed_Raid, state offline 00:19:05.366 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:05.366 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:05.366 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.366 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:19:05.630 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:19:05.630 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:19:05.630 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:19:05.630 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:19:05.630 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:05.630 02:19:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:05.888 BaseBdev2 00:19:05.888 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:19:05.888 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:05.888 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:05.888 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:05.888 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:05.888 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:05.888 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.146 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:06.404 [ 00:19:06.404 { 00:19:06.404 "name": "BaseBdev2", 00:19:06.404 "aliases": [ 00:19:06.404 "9da05053-1261-11ef-99fd-bfc7c66e2865" 00:19:06.404 ], 00:19:06.404 "product_name": "Malloc disk", 00:19:06.404 "block_size": 512, 00:19:06.404 "num_blocks": 65536, 00:19:06.404 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:06.404 "assigned_rate_limits": { 00:19:06.404 "rw_ios_per_sec": 0, 00:19:06.404 "rw_mbytes_per_sec": 0, 00:19:06.404 "r_mbytes_per_sec": 0, 00:19:06.404 "w_mbytes_per_sec": 0 00:19:06.404 }, 00:19:06.404 "claimed": false, 00:19:06.404 "zoned": false, 00:19:06.404 "supported_io_types": { 00:19:06.404 "read": true, 00:19:06.404 "write": true, 00:19:06.404 "unmap": true, 00:19:06.404 "write_zeroes": true, 00:19:06.404 "flush": true, 00:19:06.404 "reset": true, 00:19:06.404 "compare": false, 00:19:06.404 "compare_and_write": false, 00:19:06.404 "abort": true, 00:19:06.404 "nvme_admin": false, 00:19:06.404 "nvme_io": false 00:19:06.404 }, 00:19:06.404 "memory_domains": [ 00:19:06.404 { 00:19:06.404 "dma_device_id": "system", 00:19:06.404 "dma_device_type": 1 00:19:06.404 }, 00:19:06.404 { 00:19:06.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.404 "dma_device_type": 2 00:19:06.404 } 00:19:06.404 ], 00:19:06.404 "driver_specific": {} 00:19:06.404 } 00:19:06.404 ] 00:19:06.404 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:06.404 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:06.404 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:06.404 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:06.663 BaseBdev3 00:19:06.663 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:19:06.663 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:06.663 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:06.663 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local i 00:19:06.663 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:06.663 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:06.663 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.922 02:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:07.180 [ 00:19:07.180 { 00:19:07.180 "name": "BaseBdev3", 00:19:07.180 "aliases": [ 00:19:07.180 "9e238c65-1261-11ef-99fd-bfc7c66e2865" 00:19:07.180 ], 00:19:07.180 "product_name": "Malloc disk", 00:19:07.180 "block_size": 512, 00:19:07.180 "num_blocks": 65536, 00:19:07.180 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:07.180 "assigned_rate_limits": { 00:19:07.180 "rw_ios_per_sec": 0, 00:19:07.180 "rw_mbytes_per_sec": 0, 00:19:07.180 "r_mbytes_per_sec": 0, 00:19:07.180 "w_mbytes_per_sec": 0 00:19:07.180 }, 00:19:07.180 "claimed": false, 00:19:07.180 "zoned": false, 00:19:07.180 "supported_io_types": { 00:19:07.180 "read": true, 00:19:07.180 "write": true, 00:19:07.180 "unmap": true, 00:19:07.180 "write_zeroes": true, 00:19:07.180 "flush": true, 00:19:07.180 "reset": true, 00:19:07.180 "compare": false, 00:19:07.180 "compare_and_write": false, 00:19:07.180 "abort": true, 00:19:07.180 "nvme_admin": false, 00:19:07.180 "nvme_io": false 00:19:07.180 }, 00:19:07.180 "memory_domains": [ 00:19:07.180 { 00:19:07.180 "dma_device_id": "system", 00:19:07.180 "dma_device_type": 1 00:19:07.180 }, 00:19:07.180 { 00:19:07.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.180 "dma_device_type": 2 00:19:07.180 } 00:19:07.180 ], 00:19:07.180 "driver_specific": {} 00:19:07.180 } 00:19:07.180 ] 00:19:07.180 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:07.180 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:07.180 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:07.180 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:07.438 BaseBdev4 00:19:07.438 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:19:07.438 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:19:07.438 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:07.438 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:07.438 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:07.438 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:07.438 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.695 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 
00:19:07.999 [ 00:19:07.999 { 00:19:07.999 "name": "BaseBdev4", 00:19:07.999 "aliases": [ 00:19:07.999 "9e99f68e-1261-11ef-99fd-bfc7c66e2865" 00:19:07.999 ], 00:19:07.999 "product_name": "Malloc disk", 00:19:07.999 "block_size": 512, 00:19:07.999 "num_blocks": 65536, 00:19:07.999 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:07.999 "assigned_rate_limits": { 00:19:07.999 "rw_ios_per_sec": 0, 00:19:07.999 "rw_mbytes_per_sec": 0, 00:19:07.999 "r_mbytes_per_sec": 0, 00:19:07.999 "w_mbytes_per_sec": 0 00:19:07.999 }, 00:19:07.999 "claimed": false, 00:19:07.999 "zoned": false, 00:19:07.999 "supported_io_types": { 00:19:07.999 "read": true, 00:19:07.999 "write": true, 00:19:07.999 "unmap": true, 00:19:07.999 "write_zeroes": true, 00:19:07.999 "flush": true, 00:19:07.999 "reset": true, 00:19:07.999 "compare": false, 00:19:07.999 "compare_and_write": false, 00:19:07.999 "abort": true, 00:19:07.999 "nvme_admin": false, 00:19:07.999 "nvme_io": false 00:19:07.999 }, 00:19:07.999 "memory_domains": [ 00:19:07.999 { 00:19:07.999 "dma_device_id": "system", 00:19:07.999 "dma_device_type": 1 00:19:07.999 }, 00:19:07.999 { 00:19:07.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.999 "dma_device_type": 2 00:19:07.999 } 00:19:07.999 ], 00:19:07.999 "driver_specific": {} 00:19:07.999 } 00:19:07.999 ] 00:19:07.999 02:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:07.999 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:07.999 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:07.999 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:08.277 [2024-05-15 02:19:56.215325] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.277 [2024-05-15 02:19:56.215402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.277 [2024-05-15 02:19:56.215413] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.277 [2024-05-15 02:19:56.215888] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.277 [2024-05-15 02:19:56.215920] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.277 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.843 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.843 "name": "Existed_Raid", 00:19:08.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.843 "strip_size_kb": 64, 00:19:08.843 "state": "configuring", 00:19:08.843 "raid_level": "concat", 00:19:08.843 "superblock": false, 00:19:08.843 "num_base_bdevs": 4, 00:19:08.843 "num_base_bdevs_discovered": 3, 00:19:08.843 "num_base_bdevs_operational": 4, 00:19:08.843 "base_bdevs_list": [ 00:19:08.843 { 00:19:08.843 "name": "BaseBdev1", 00:19:08.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.843 "is_configured": false, 00:19:08.843 "data_offset": 0, 00:19:08.843 "data_size": 0 00:19:08.843 }, 00:19:08.843 { 00:19:08.843 "name": "BaseBdev2", 00:19:08.843 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:08.843 "is_configured": true, 00:19:08.843 "data_offset": 0, 00:19:08.843 "data_size": 65536 00:19:08.843 }, 00:19:08.843 { 00:19:08.843 "name": "BaseBdev3", 00:19:08.843 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:08.843 "is_configured": true, 00:19:08.843 "data_offset": 0, 00:19:08.843 "data_size": 65536 00:19:08.843 }, 00:19:08.843 { 00:19:08.843 "name": "BaseBdev4", 00:19:08.843 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:08.843 "is_configured": true, 00:19:08.843 "data_offset": 0, 00:19:08.843 "data_size": 65536 00:19:08.843 } 00:19:08.843 ] 00:19:08.843 }' 00:19:08.843 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.843 02:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.101 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:09.359 [2024-05-15 02:19:57.167419] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.359 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.618 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.618 "name": "Existed_Raid", 00:19:09.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.618 "strip_size_kb": 64, 00:19:09.618 "state": "configuring", 00:19:09.618 "raid_level": "concat", 00:19:09.618 "superblock": false, 00:19:09.618 "num_base_bdevs": 4, 00:19:09.618 "num_base_bdevs_discovered": 2, 00:19:09.618 "num_base_bdevs_operational": 4, 00:19:09.618 "base_bdevs_list": [ 00:19:09.618 { 00:19:09.618 "name": "BaseBdev1", 00:19:09.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.618 "is_configured": false, 00:19:09.618 "data_offset": 0, 00:19:09.618 "data_size": 0 00:19:09.618 }, 00:19:09.618 { 00:19:09.618 "name": null, 00:19:09.618 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:09.618 "is_configured": false, 00:19:09.618 "data_offset": 0, 00:19:09.618 "data_size": 65536 00:19:09.618 }, 00:19:09.618 { 00:19:09.618 "name": "BaseBdev3", 00:19:09.618 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:09.618 "is_configured": true, 00:19:09.618 "data_offset": 0, 00:19:09.618 "data_size": 65536 00:19:09.618 }, 00:19:09.618 { 00:19:09.618 "name": "BaseBdev4", 00:19:09.618 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:09.618 "is_configured": true, 00:19:09.618 "data_offset": 0, 00:19:09.618 "data_size": 65536 00:19:09.618 } 00:19:09.618 ] 00:19:09.618 }' 00:19:09.618 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.618 02:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.876 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:09.876 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.443 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:19:10.443 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:10.702 [2024-05-15 02:19:58.471628] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.702 BaseBdev1 00:19:10.702 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:19:10.702 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:10.702 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:10.702 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:10.702 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:10.702 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:10.702 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:10.960 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:11.218 [ 00:19:11.218 { 00:19:11.218 "name": "BaseBdev1", 00:19:11.218 "aliases": [ 00:19:11.218 "a071cd9d-1261-11ef-99fd-bfc7c66e2865" 00:19:11.218 ], 00:19:11.218 "product_name": "Malloc disk", 00:19:11.218 "block_size": 512, 00:19:11.218 "num_blocks": 65536, 00:19:11.218 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:11.218 "assigned_rate_limits": { 00:19:11.218 "rw_ios_per_sec": 0, 00:19:11.218 "rw_mbytes_per_sec": 0, 00:19:11.218 "r_mbytes_per_sec": 0, 00:19:11.218 "w_mbytes_per_sec": 0 00:19:11.218 }, 00:19:11.218 "claimed": true, 00:19:11.218 "claim_type": "exclusive_write", 00:19:11.218 "zoned": false, 00:19:11.218 "supported_io_types": { 00:19:11.218 "read": true, 00:19:11.218 "write": true, 00:19:11.218 "unmap": true, 00:19:11.218 "write_zeroes": true, 00:19:11.218 "flush": true, 00:19:11.218 "reset": true, 00:19:11.218 "compare": false, 00:19:11.218 "compare_and_write": false, 00:19:11.218 "abort": true, 00:19:11.218 "nvme_admin": false, 00:19:11.218 "nvme_io": false 00:19:11.218 }, 00:19:11.218 "memory_domains": [ 00:19:11.218 { 00:19:11.218 "dma_device_id": "system", 00:19:11.218 "dma_device_type": 1 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.218 "dma_device_type": 2 00:19:11.218 } 00:19:11.218 ], 00:19:11.218 "driver_specific": {} 00:19:11.218 } 00:19:11.218 ] 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.218 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.476 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.476 "name": "Existed_Raid", 00:19:11.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.476 "strip_size_kb": 64, 00:19:11.476 "state": "configuring", 00:19:11.476 "raid_level": "concat", 00:19:11.476 "superblock": false, 00:19:11.476 
"num_base_bdevs": 4, 00:19:11.476 "num_base_bdevs_discovered": 3, 00:19:11.476 "num_base_bdevs_operational": 4, 00:19:11.476 "base_bdevs_list": [ 00:19:11.476 { 00:19:11.476 "name": "BaseBdev1", 00:19:11.476 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:11.476 "is_configured": true, 00:19:11.476 "data_offset": 0, 00:19:11.476 "data_size": 65536 00:19:11.476 }, 00:19:11.476 { 00:19:11.476 "name": null, 00:19:11.476 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:11.476 "is_configured": false, 00:19:11.476 "data_offset": 0, 00:19:11.476 "data_size": 65536 00:19:11.476 }, 00:19:11.476 { 00:19:11.476 "name": "BaseBdev3", 00:19:11.476 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:11.476 "is_configured": true, 00:19:11.476 "data_offset": 0, 00:19:11.476 "data_size": 65536 00:19:11.476 }, 00:19:11.476 { 00:19:11.476 "name": "BaseBdev4", 00:19:11.476 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:11.476 "is_configured": true, 00:19:11.476 "data_offset": 0, 00:19:11.476 "data_size": 65536 00:19:11.476 } 00:19:11.476 ] 00:19:11.476 }' 00:19:11.476 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.476 02:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.042 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.042 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:12.301 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:12.301 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:12.559 [2024-05-15 02:20:00.483696] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.559 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.817 02:20:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.817 "name": "Existed_Raid", 00:19:12.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.817 "strip_size_kb": 64, 00:19:12.817 "state": "configuring", 00:19:12.817 "raid_level": "concat", 00:19:12.817 "superblock": false, 00:19:12.817 "num_base_bdevs": 4, 00:19:12.817 "num_base_bdevs_discovered": 2, 00:19:12.817 "num_base_bdevs_operational": 4, 00:19:12.817 "base_bdevs_list": [ 00:19:12.817 { 00:19:12.817 "name": "BaseBdev1", 00:19:12.817 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:12.817 "is_configured": true, 00:19:12.817 "data_offset": 0, 00:19:12.817 "data_size": 65536 00:19:12.817 }, 00:19:12.817 { 00:19:12.817 "name": null, 00:19:12.817 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:12.817 "is_configured": false, 00:19:12.817 "data_offset": 0, 00:19:12.817 "data_size": 65536 00:19:12.817 }, 00:19:12.817 { 00:19:12.817 "name": null, 00:19:12.817 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:12.817 "is_configured": false, 00:19:12.817 "data_offset": 0, 00:19:12.817 "data_size": 65536 00:19:12.817 }, 00:19:12.817 { 00:19:12.817 "name": "BaseBdev4", 00:19:12.817 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:12.817 "is_configured": true, 00:19:12.817 "data_offset": 0, 00:19:12.817 "data_size": 65536 00:19:12.817 } 00:19:12.817 ] 00:19:12.817 }' 00:19:12.817 02:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.817 02:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.383 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.383 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:13.950 [2024-05-15 02:20:01.911748] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.950 02:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.208 02:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.208 "name": "Existed_Raid", 00:19:14.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.208 "strip_size_kb": 64, 00:19:14.208 "state": "configuring", 00:19:14.208 "raid_level": "concat", 00:19:14.208 "superblock": false, 00:19:14.208 "num_base_bdevs": 4, 00:19:14.208 "num_base_bdevs_discovered": 3, 00:19:14.208 "num_base_bdevs_operational": 4, 00:19:14.208 "base_bdevs_list": [ 00:19:14.208 { 00:19:14.208 "name": "BaseBdev1", 00:19:14.208 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:14.208 "is_configured": true, 00:19:14.208 "data_offset": 0, 00:19:14.208 "data_size": 65536 00:19:14.208 }, 00:19:14.208 { 00:19:14.208 "name": null, 00:19:14.208 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:14.208 "is_configured": false, 00:19:14.208 "data_offset": 0, 00:19:14.208 "data_size": 65536 00:19:14.208 }, 00:19:14.208 { 00:19:14.208 "name": "BaseBdev3", 00:19:14.208 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:14.208 "is_configured": true, 00:19:14.208 "data_offset": 0, 00:19:14.208 "data_size": 65536 00:19:14.208 }, 00:19:14.208 { 00:19:14.208 "name": "BaseBdev4", 00:19:14.208 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:14.208 "is_configured": true, 00:19:14.208 "data_offset": 0, 00:19:14.208 "data_size": 65536 00:19:14.208 } 00:19:14.208 ] 00:19:14.208 }' 00:19:14.208 02:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.208 02:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.775 02:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.775 02:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:15.034 02:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:19:15.034 02:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:15.292 [2024-05-15 02:20:03.115840] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.292 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.550 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.550 "name": "Existed_Raid", 00:19:15.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.550 "strip_size_kb": 64, 00:19:15.550 "state": "configuring", 00:19:15.550 "raid_level": "concat", 00:19:15.551 "superblock": false, 00:19:15.551 "num_base_bdevs": 4, 00:19:15.551 "num_base_bdevs_discovered": 2, 00:19:15.551 "num_base_bdevs_operational": 4, 00:19:15.551 "base_bdevs_list": [ 00:19:15.551 { 00:19:15.551 "name": null, 00:19:15.551 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:15.551 "is_configured": false, 00:19:15.551 "data_offset": 0, 00:19:15.551 "data_size": 65536 00:19:15.551 }, 00:19:15.551 { 00:19:15.551 "name": null, 00:19:15.551 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:15.551 "is_configured": false, 00:19:15.551 "data_offset": 0, 00:19:15.551 "data_size": 65536 00:19:15.551 }, 00:19:15.551 { 00:19:15.551 "name": "BaseBdev3", 00:19:15.551 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:15.551 "is_configured": true, 00:19:15.551 "data_offset": 0, 00:19:15.551 "data_size": 65536 00:19:15.551 }, 00:19:15.551 { 00:19:15.551 "name": "BaseBdev4", 00:19:15.551 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:15.551 "is_configured": true, 00:19:15.551 "data_offset": 0, 00:19:15.551 "data_size": 65536 00:19:15.551 } 00:19:15.551 ] 00:19:15.551 }' 00:19:15.551 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.551 02:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.810 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.810 02:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:16.068 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:19:16.068 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:16.326 [2024-05-15 02:20:04.316709] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.326 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.893 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.893 "name": "Existed_Raid", 00:19:16.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.893 "strip_size_kb": 64, 00:19:16.893 "state": "configuring", 00:19:16.893 "raid_level": "concat", 00:19:16.893 "superblock": false, 00:19:16.893 "num_base_bdevs": 4, 00:19:16.893 "num_base_bdevs_discovered": 3, 00:19:16.893 "num_base_bdevs_operational": 4, 00:19:16.893 "base_bdevs_list": [ 00:19:16.893 { 00:19:16.893 "name": null, 00:19:16.893 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:16.893 "is_configured": false, 00:19:16.893 "data_offset": 0, 00:19:16.893 "data_size": 65536 00:19:16.893 }, 00:19:16.893 { 00:19:16.893 "name": "BaseBdev2", 00:19:16.893 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:16.893 "is_configured": true, 00:19:16.893 "data_offset": 0, 00:19:16.893 "data_size": 65536 00:19:16.893 }, 00:19:16.893 { 00:19:16.893 "name": "BaseBdev3", 00:19:16.893 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:16.893 "is_configured": true, 00:19:16.893 "data_offset": 0, 00:19:16.893 "data_size": 65536 00:19:16.893 }, 00:19:16.893 { 00:19:16.893 "name": "BaseBdev4", 00:19:16.893 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:16.893 "is_configured": true, 00:19:16.893 "data_offset": 0, 00:19:16.893 "data_size": 65536 00:19:16.893 } 00:19:16.893 ] 00:19:16.893 }' 00:19:16.893 02:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.893 02:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.151 02:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.151 02:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:17.409 02:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:19:17.409 02:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.409 02:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:17.668 02:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a071cd9d-1261-11ef-99fd-bfc7c66e2865 00:19:17.975 [2024-05-15 02:20:05.768958] 
bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:17.975 [2024-05-15 02:20:05.768989] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa05f00 00:19:17.975 [2024-05-15 02:20:05.768993] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:17.975 [2024-05-15 02:20:05.769015] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa68e20 00:19:17.975 [2024-05-15 02:20:05.769073] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa05f00 00:19:17.975 [2024-05-15 02:20:05.769077] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82aa05f00 00:19:17.975 [2024-05-15 02:20:05.769109] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.975 NewBaseBdev 00:19:17.975 02:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:19:17.975 02:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:19:17.975 02:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:17.975 02:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:17.975 02:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:17.975 02:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:17.975 02:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.254 02:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:18.512 [ 00:19:18.512 { 00:19:18.512 "name": "NewBaseBdev", 00:19:18.512 "aliases": [ 00:19:18.512 "a071cd9d-1261-11ef-99fd-bfc7c66e2865" 00:19:18.512 ], 00:19:18.512 "product_name": "Malloc disk", 00:19:18.512 "block_size": 512, 00:19:18.512 "num_blocks": 65536, 00:19:18.512 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:18.512 "assigned_rate_limits": { 00:19:18.512 "rw_ios_per_sec": 0, 00:19:18.512 "rw_mbytes_per_sec": 0, 00:19:18.512 "r_mbytes_per_sec": 0, 00:19:18.512 "w_mbytes_per_sec": 0 00:19:18.512 }, 00:19:18.512 "claimed": true, 00:19:18.512 "claim_type": "exclusive_write", 00:19:18.512 "zoned": false, 00:19:18.512 "supported_io_types": { 00:19:18.512 "read": true, 00:19:18.512 "write": true, 00:19:18.512 "unmap": true, 00:19:18.512 "write_zeroes": true, 00:19:18.512 "flush": true, 00:19:18.512 "reset": true, 00:19:18.512 "compare": false, 00:19:18.512 "compare_and_write": false, 00:19:18.512 "abort": true, 00:19:18.512 "nvme_admin": false, 00:19:18.512 "nvme_io": false 00:19:18.512 }, 00:19:18.512 "memory_domains": [ 00:19:18.512 { 00:19:18.512 "dma_device_id": "system", 00:19:18.512 "dma_device_type": 1 00:19:18.512 }, 00:19:18.512 { 00:19:18.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.512 "dma_device_type": 2 00:19:18.512 } 00:19:18.512 ], 00:19:18.512 "driver_specific": {} 00:19:18.512 } 00:19:18.512 ] 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 
4 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.512 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.771 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.771 "name": "Existed_Raid", 00:19:18.771 "uuid": "a4cb4f88-1261-11ef-99fd-bfc7c66e2865", 00:19:18.771 "strip_size_kb": 64, 00:19:18.771 "state": "online", 00:19:18.771 "raid_level": "concat", 00:19:18.771 "superblock": false, 00:19:18.771 "num_base_bdevs": 4, 00:19:18.771 "num_base_bdevs_discovered": 4, 00:19:18.771 "num_base_bdevs_operational": 4, 00:19:18.771 "base_bdevs_list": [ 00:19:18.771 { 00:19:18.771 "name": "NewBaseBdev", 00:19:18.771 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:18.771 "is_configured": true, 00:19:18.771 "data_offset": 0, 00:19:18.771 "data_size": 65536 00:19:18.771 }, 00:19:18.771 { 00:19:18.771 "name": "BaseBdev2", 00:19:18.771 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:18.771 "is_configured": true, 00:19:18.771 "data_offset": 0, 00:19:18.771 "data_size": 65536 00:19:18.771 }, 00:19:18.771 { 00:19:18.771 "name": "BaseBdev3", 00:19:18.771 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:18.771 "is_configured": true, 00:19:18.771 "data_offset": 0, 00:19:18.771 "data_size": 65536 00:19:18.771 }, 00:19:18.771 { 00:19:18.771 "name": "BaseBdev4", 00:19:18.771 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:18.771 "is_configured": true, 00:19:18.771 "data_offset": 0, 00:19:18.771 "data_size": 65536 00:19:18.771 } 00:19:18.771 ] 00:19:18.771 }' 00:19:18.771 02:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.771 02:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.337 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:19:19.337 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:19.338 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:19.338 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:19.338 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:19.338 
02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:19.338 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:19.338 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:19.596 [2024-05-15 02:20:07.452987] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.596 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:19.596 "name": "Existed_Raid", 00:19:19.596 "aliases": [ 00:19:19.596 "a4cb4f88-1261-11ef-99fd-bfc7c66e2865" 00:19:19.596 ], 00:19:19.596 "product_name": "Raid Volume", 00:19:19.596 "block_size": 512, 00:19:19.596 "num_blocks": 262144, 00:19:19.596 "uuid": "a4cb4f88-1261-11ef-99fd-bfc7c66e2865", 00:19:19.596 "assigned_rate_limits": { 00:19:19.596 "rw_ios_per_sec": 0, 00:19:19.596 "rw_mbytes_per_sec": 0, 00:19:19.596 "r_mbytes_per_sec": 0, 00:19:19.596 "w_mbytes_per_sec": 0 00:19:19.596 }, 00:19:19.596 "claimed": false, 00:19:19.596 "zoned": false, 00:19:19.596 "supported_io_types": { 00:19:19.596 "read": true, 00:19:19.596 "write": true, 00:19:19.596 "unmap": true, 00:19:19.596 "write_zeroes": true, 00:19:19.596 "flush": true, 00:19:19.596 "reset": true, 00:19:19.596 "compare": false, 00:19:19.596 "compare_and_write": false, 00:19:19.596 "abort": false, 00:19:19.596 "nvme_admin": false, 00:19:19.596 "nvme_io": false 00:19:19.596 }, 00:19:19.596 "memory_domains": [ 00:19:19.596 { 00:19:19.596 "dma_device_id": "system", 00:19:19.596 "dma_device_type": 1 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.596 "dma_device_type": 2 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "dma_device_id": "system", 00:19:19.596 "dma_device_type": 1 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.596 "dma_device_type": 2 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "dma_device_id": "system", 00:19:19.596 "dma_device_type": 1 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.596 "dma_device_type": 2 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "dma_device_id": "system", 00:19:19.596 "dma_device_type": 1 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.596 "dma_device_type": 2 00:19:19.596 } 00:19:19.596 ], 00:19:19.596 "driver_specific": { 00:19:19.596 "raid": { 00:19:19.596 "uuid": "a4cb4f88-1261-11ef-99fd-bfc7c66e2865", 00:19:19.596 "strip_size_kb": 64, 00:19:19.596 "state": "online", 00:19:19.596 "raid_level": "concat", 00:19:19.596 "superblock": false, 00:19:19.596 "num_base_bdevs": 4, 00:19:19.596 "num_base_bdevs_discovered": 4, 00:19:19.596 "num_base_bdevs_operational": 4, 00:19:19.596 "base_bdevs_list": [ 00:19:19.596 { 00:19:19.596 "name": "NewBaseBdev", 00:19:19.596 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:19.596 "is_configured": true, 00:19:19.596 "data_offset": 0, 00:19:19.596 "data_size": 65536 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "name": "BaseBdev2", 00:19:19.596 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:19.596 "is_configured": true, 00:19:19.596 "data_offset": 0, 00:19:19.596 "data_size": 65536 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "name": "BaseBdev3", 00:19:19.596 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:19.596 "is_configured": true, 00:19:19.596 "data_offset": 0, 
00:19:19.596 "data_size": 65536 00:19:19.596 }, 00:19:19.596 { 00:19:19.596 "name": "BaseBdev4", 00:19:19.596 "uuid": "9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:19.596 "is_configured": true, 00:19:19.596 "data_offset": 0, 00:19:19.596 "data_size": 65536 00:19:19.596 } 00:19:19.596 ] 00:19:19.596 } 00:19:19.596 } 00:19:19.596 }' 00:19:19.596 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:19.596 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:19:19.597 BaseBdev2 00:19:19.597 BaseBdev3 00:19:19.597 BaseBdev4' 00:19:19.597 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:19.597 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:19.597 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:19.855 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:19.855 "name": "NewBaseBdev", 00:19:19.855 "aliases": [ 00:19:19.855 "a071cd9d-1261-11ef-99fd-bfc7c66e2865" 00:19:19.855 ], 00:19:19.855 "product_name": "Malloc disk", 00:19:19.855 "block_size": 512, 00:19:19.855 "num_blocks": 65536, 00:19:19.855 "uuid": "a071cd9d-1261-11ef-99fd-bfc7c66e2865", 00:19:19.855 "assigned_rate_limits": { 00:19:19.855 "rw_ios_per_sec": 0, 00:19:19.855 "rw_mbytes_per_sec": 0, 00:19:19.855 "r_mbytes_per_sec": 0, 00:19:19.855 "w_mbytes_per_sec": 0 00:19:19.855 }, 00:19:19.855 "claimed": true, 00:19:19.855 "claim_type": "exclusive_write", 00:19:19.855 "zoned": false, 00:19:19.855 "supported_io_types": { 00:19:19.855 "read": true, 00:19:19.855 "write": true, 00:19:19.855 "unmap": true, 00:19:19.855 "write_zeroes": true, 00:19:19.855 "flush": true, 00:19:19.855 "reset": true, 00:19:19.855 "compare": false, 00:19:19.855 "compare_and_write": false, 00:19:19.855 "abort": true, 00:19:19.855 "nvme_admin": false, 00:19:19.855 "nvme_io": false 00:19:19.855 }, 00:19:19.855 "memory_domains": [ 00:19:19.855 { 00:19:19.855 "dma_device_id": "system", 00:19:19.855 "dma_device_type": 1 00:19:19.855 }, 00:19:19.855 { 00:19:19.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.855 "dma_device_type": 2 00:19:19.855 } 00:19:19.855 ], 00:19:19.855 "driver_specific": {} 00:19:19.855 }' 00:19:19.855 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:19.855 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:19.855 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 
00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:19.856 02:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:20.114 "name": "BaseBdev2", 00:19:20.114 "aliases": [ 00:19:20.114 "9da05053-1261-11ef-99fd-bfc7c66e2865" 00:19:20.114 ], 00:19:20.114 "product_name": "Malloc disk", 00:19:20.114 "block_size": 512, 00:19:20.114 "num_blocks": 65536, 00:19:20.114 "uuid": "9da05053-1261-11ef-99fd-bfc7c66e2865", 00:19:20.114 "assigned_rate_limits": { 00:19:20.114 "rw_ios_per_sec": 0, 00:19:20.114 "rw_mbytes_per_sec": 0, 00:19:20.114 "r_mbytes_per_sec": 0, 00:19:20.114 "w_mbytes_per_sec": 0 00:19:20.114 }, 00:19:20.114 "claimed": true, 00:19:20.114 "claim_type": "exclusive_write", 00:19:20.114 "zoned": false, 00:19:20.114 "supported_io_types": { 00:19:20.114 "read": true, 00:19:20.114 "write": true, 00:19:20.114 "unmap": true, 00:19:20.114 "write_zeroes": true, 00:19:20.114 "flush": true, 00:19:20.114 "reset": true, 00:19:20.114 "compare": false, 00:19:20.114 "compare_and_write": false, 00:19:20.114 "abort": true, 00:19:20.114 "nvme_admin": false, 00:19:20.114 "nvme_io": false 00:19:20.114 }, 00:19:20.114 "memory_domains": [ 00:19:20.114 { 00:19:20.114 "dma_device_id": "system", 00:19:20.114 "dma_device_type": 1 00:19:20.114 }, 00:19:20.114 { 00:19:20.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.114 "dma_device_type": 2 00:19:20.114 } 00:19:20.114 ], 00:19:20.114 "driver_specific": {} 00:19:20.114 }' 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:20.114 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:19:20.373 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:20.632 "name": "BaseBdev3", 00:19:20.632 "aliases": [ 00:19:20.632 "9e238c65-1261-11ef-99fd-bfc7c66e2865" 00:19:20.632 ], 00:19:20.632 "product_name": "Malloc disk", 00:19:20.632 "block_size": 512, 00:19:20.632 "num_blocks": 65536, 00:19:20.632 "uuid": "9e238c65-1261-11ef-99fd-bfc7c66e2865", 00:19:20.632 "assigned_rate_limits": { 00:19:20.632 "rw_ios_per_sec": 0, 00:19:20.632 "rw_mbytes_per_sec": 0, 00:19:20.632 "r_mbytes_per_sec": 0, 00:19:20.632 "w_mbytes_per_sec": 0 00:19:20.632 }, 00:19:20.632 "claimed": true, 00:19:20.632 "claim_type": "exclusive_write", 00:19:20.632 "zoned": false, 00:19:20.632 "supported_io_types": { 00:19:20.632 "read": true, 00:19:20.632 "write": true, 00:19:20.632 "unmap": true, 00:19:20.632 "write_zeroes": true, 00:19:20.632 "flush": true, 00:19:20.632 "reset": true, 00:19:20.632 "compare": false, 00:19:20.632 "compare_and_write": false, 00:19:20.632 "abort": true, 00:19:20.632 "nvme_admin": false, 00:19:20.632 "nvme_io": false 00:19:20.632 }, 00:19:20.632 "memory_domains": [ 00:19:20.632 { 00:19:20.632 "dma_device_id": "system", 00:19:20.632 "dma_device_type": 1 00:19:20.632 }, 00:19:20.632 { 00:19:20.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.632 "dma_device_type": 2 00:19:20.632 } 00:19:20.632 ], 00:19:20.632 "driver_specific": {} 00:19:20.632 }' 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:20.632 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:20.890 "name": "BaseBdev4", 00:19:20.890 "aliases": [ 00:19:20.890 "9e99f68e-1261-11ef-99fd-bfc7c66e2865" 00:19:20.890 ], 00:19:20.890 "product_name": "Malloc disk", 00:19:20.890 "block_size": 512, 00:19:20.890 "num_blocks": 65536, 00:19:20.890 "uuid": 
"9e99f68e-1261-11ef-99fd-bfc7c66e2865", 00:19:20.890 "assigned_rate_limits": { 00:19:20.890 "rw_ios_per_sec": 0, 00:19:20.890 "rw_mbytes_per_sec": 0, 00:19:20.890 "r_mbytes_per_sec": 0, 00:19:20.890 "w_mbytes_per_sec": 0 00:19:20.890 }, 00:19:20.890 "claimed": true, 00:19:20.890 "claim_type": "exclusive_write", 00:19:20.890 "zoned": false, 00:19:20.890 "supported_io_types": { 00:19:20.890 "read": true, 00:19:20.890 "write": true, 00:19:20.890 "unmap": true, 00:19:20.890 "write_zeroes": true, 00:19:20.890 "flush": true, 00:19:20.890 "reset": true, 00:19:20.890 "compare": false, 00:19:20.890 "compare_and_write": false, 00:19:20.890 "abort": true, 00:19:20.890 "nvme_admin": false, 00:19:20.890 "nvme_io": false 00:19:20.890 }, 00:19:20.890 "memory_domains": [ 00:19:20.890 { 00:19:20.890 "dma_device_id": "system", 00:19:20.890 "dma_device_type": 1 00:19:20.890 }, 00:19:20.890 { 00:19:20.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.890 "dma_device_type": 2 00:19:20.890 } 00:19:20.890 ], 00:19:20.890 "driver_specific": {} 00:19:20.890 }' 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:20.890 02:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:21.149 [2024-05-15 02:20:08.985032] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:21.149 [2024-05-15 02:20:08.985063] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.149 [2024-05-15 02:20:08.985086] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.149 [2024-05-15 02:20:08.985101] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.149 [2024-05-15 02:20:08.985106] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa05f00 name Existed_Raid, state offline 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 59286 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 59286 ']' 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 59286 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@951 -- # uname 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 59286 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:21.149 killing process with pid 59286 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59286' 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 59286 00:19:21.149 [2024-05-15 02:20:09.014978] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.149 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 59286 00:19:21.149 [2024-05-15 02:20:09.033987] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:19:21.407 00:19:21.407 real 0m29.490s 00:19:21.407 user 0m54.546s 00:19:21.407 sys 0m3.600s 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:21.407 ************************************ 00:19:21.407 END TEST raid_state_function_test 00:19:21.407 ************************************ 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.407 02:20:09 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:21.407 02:20:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:21.407 02:20:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:21.407 02:20:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.407 ************************************ 00:19:21.407 START TEST raid_state_function_test_sb 00:19:21.407 ************************************ 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:19:21.407 02:20:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:19:21.407 Process raid pid: 60113 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=60113 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 60113' 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 60113 /var/tmp/spdk-raid.sock 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 60113 ']' 00:19:21.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
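The raid_state_function_test_sb preamble traced above ends with waitforlisten blocking on the raid RPC socket before the superblock-enabled create is issued. Condensing the commands that appear in the xtrace into a standalone sketch (paths, socket and options are copied from the log; the backgrounding, pid capture and the sourcing of the waitforlisten helper from autotest_common.sh are assumptions):

    # Sketch of the setup shown in the trace, not the verbatim test source.
    spdk_root=/usr/home/vagrant/spdk_repo/spdk
    rpc_py="$spdk_root/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Start the bdev service that hosts the raid module and wait for its RPC socket;
    # waitforlisten is the sourced autotest_common.sh helper seen in the trace.
    "$spdk_root/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

    # With superblock enabled (-s), creating the concat array before its base bdevs
    # exist only records them as "doesn't exist now"; the raid stays in the
    # configuring state until all four are examined and claimed.
    $rpc_py bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid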
00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.407 02:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.407 [2024-05-15 02:20:09.234144] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:19:21.407 [2024-05-15 02:20:09.234383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:21.973 EAL: TSC is not safe to use in SMP mode 00:19:21.973 EAL: TSC is not invariant 00:19:21.973 [2024-05-15 02:20:09.712139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.973 [2024-05-15 02:20:09.796031] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:21.973 [2024-05-15 02:20:09.798220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.973 [2024-05-15 02:20:09.798939] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:21.973 [2024-05-15 02:20:09.798953] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.540 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:22.540 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:19:22.540 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:22.799 [2024-05-15 02:20:10.582188] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.799 [2024-05-15 02:20:10.582275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:22.799 [2024-05-15 02:20:10.582287] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:22.799 [2024-05-15 02:20:10.582315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:22.799 [2024-05-15 02:20:10.582322] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:22.799 [2024-05-15 02:20:10.582340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:22.799 [2024-05-15 02:20:10.582350] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:22.799 [2024-05-15 02:20:10.582369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.799 
02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.799 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.060 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.060 "name": "Existed_Raid", 00:19:23.060 "uuid": "a7a9be5b-1261-11ef-99fd-bfc7c66e2865", 00:19:23.060 "strip_size_kb": 64, 00:19:23.060 "state": "configuring", 00:19:23.060 "raid_level": "concat", 00:19:23.060 "superblock": true, 00:19:23.060 "num_base_bdevs": 4, 00:19:23.060 "num_base_bdevs_discovered": 0, 00:19:23.060 "num_base_bdevs_operational": 4, 00:19:23.060 "base_bdevs_list": [ 00:19:23.060 { 00:19:23.060 "name": "BaseBdev1", 00:19:23.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.060 "is_configured": false, 00:19:23.060 "data_offset": 0, 00:19:23.060 "data_size": 0 00:19:23.060 }, 00:19:23.060 { 00:19:23.060 "name": "BaseBdev2", 00:19:23.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.060 "is_configured": false, 00:19:23.060 "data_offset": 0, 00:19:23.060 "data_size": 0 00:19:23.060 }, 00:19:23.060 { 00:19:23.060 "name": "BaseBdev3", 00:19:23.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.060 "is_configured": false, 00:19:23.060 "data_offset": 0, 00:19:23.060 "data_size": 0 00:19:23.060 }, 00:19:23.060 { 00:19:23.060 "name": "BaseBdev4", 00:19:23.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.060 "is_configured": false, 00:19:23.060 "data_offset": 0, 00:19:23.060 "data_size": 0 00:19:23.060 } 00:19:23.060 ] 00:19:23.060 }' 00:19:23.060 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.060 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.318 02:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:23.576 [2024-05-15 02:20:11.482249] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:23.576 [2024-05-15 02:20:11.482282] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829dc3500 name Existed_Raid, state configuring 00:19:23.576 02:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:23.834 [2024-05-15 02:20:11.750272] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:23.834 [2024-05-15 02:20:11.750334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:23.834 [2024-05-15 02:20:11.750338] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.834 [2024-05-15 02:20:11.750347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.834 
[2024-05-15 02:20:11.750350] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:23.834 [2024-05-15 02:20:11.750358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:23.834 [2024-05-15 02:20:11.750361] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:23.834 [2024-05-15 02:20:11.750368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:23.834 02:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:24.094 [2024-05-15 02:20:12.019259] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.094 BaseBdev1 00:19:24.094 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:24.094 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:24.094 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:24.094 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:24.094 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:24.094 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:24.094 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:24.352 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:24.610 [ 00:19:24.610 { 00:19:24.610 "name": "BaseBdev1", 00:19:24.610 "aliases": [ 00:19:24.610 "a884e00f-1261-11ef-99fd-bfc7c66e2865" 00:19:24.610 ], 00:19:24.610 "product_name": "Malloc disk", 00:19:24.610 "block_size": 512, 00:19:24.610 "num_blocks": 65536, 00:19:24.610 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:24.610 "assigned_rate_limits": { 00:19:24.610 "rw_ios_per_sec": 0, 00:19:24.610 "rw_mbytes_per_sec": 0, 00:19:24.610 "r_mbytes_per_sec": 0, 00:19:24.610 "w_mbytes_per_sec": 0 00:19:24.610 }, 00:19:24.610 "claimed": true, 00:19:24.610 "claim_type": "exclusive_write", 00:19:24.610 "zoned": false, 00:19:24.610 "supported_io_types": { 00:19:24.610 "read": true, 00:19:24.610 "write": true, 00:19:24.610 "unmap": true, 00:19:24.610 "write_zeroes": true, 00:19:24.610 "flush": true, 00:19:24.610 "reset": true, 00:19:24.610 "compare": false, 00:19:24.610 "compare_and_write": false, 00:19:24.610 "abort": true, 00:19:24.610 "nvme_admin": false, 00:19:24.610 "nvme_io": false 00:19:24.610 }, 00:19:24.610 "memory_domains": [ 00:19:24.610 { 00:19:24.610 "dma_device_id": "system", 00:19:24.610 "dma_device_type": 1 00:19:24.610 }, 00:19:24.610 { 00:19:24.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.610 "dma_device_type": 2 00:19:24.610 } 00:19:24.610 ], 00:19:24.610 "driver_specific": {} 00:19:24.610 } 00:19:24.610 ] 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.610 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.611 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.611 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.868 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.868 "name": "Existed_Raid", 00:19:24.868 "uuid": "a85bfacc-1261-11ef-99fd-bfc7c66e2865", 00:19:24.868 "strip_size_kb": 64, 00:19:24.868 "state": "configuring", 00:19:24.868 "raid_level": "concat", 00:19:24.868 "superblock": true, 00:19:24.868 "num_base_bdevs": 4, 00:19:24.868 "num_base_bdevs_discovered": 1, 00:19:24.868 "num_base_bdevs_operational": 4, 00:19:24.868 "base_bdevs_list": [ 00:19:24.868 { 00:19:24.868 "name": "BaseBdev1", 00:19:24.868 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:24.868 "is_configured": true, 00:19:24.868 "data_offset": 2048, 00:19:24.868 "data_size": 63488 00:19:24.868 }, 00:19:24.868 { 00:19:24.868 "name": "BaseBdev2", 00:19:24.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.868 "is_configured": false, 00:19:24.868 "data_offset": 0, 00:19:24.868 "data_size": 0 00:19:24.868 }, 00:19:24.868 { 00:19:24.868 "name": "BaseBdev3", 00:19:24.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.868 "is_configured": false, 00:19:24.868 "data_offset": 0, 00:19:24.868 "data_size": 0 00:19:24.868 }, 00:19:24.868 { 00:19:24.868 "name": "BaseBdev4", 00:19:24.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.868 "is_configured": false, 00:19:24.868 "data_offset": 0, 00:19:24.868 "data_size": 0 00:19:24.868 } 00:19:24.868 ] 00:19:24.868 }' 00:19:24.868 02:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.868 02:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.126 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.404 [2024-05-15 02:20:13.354351] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.404 [2024-05-15 02:20:13.354389] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829dc3500 name Existed_Raid, state configuring 00:19:25.404 02:20:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:25.690 [2024-05-15 02:20:13.650385] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.690 [2024-05-15 02:20:13.651073] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.690 [2024-05-15 02:20:13.651117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.690 [2024-05-15 02:20:13.651122] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:25.690 [2024-05-15 02:20:13.651129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:25.690 [2024-05-15 02:20:13.651133] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:25.690 [2024-05-15 02:20:13.651139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.690 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.949 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.949 "name": "Existed_Raid", 00:19:25.949 "uuid": "a97de9d8-1261-11ef-99fd-bfc7c66e2865", 00:19:25.949 "strip_size_kb": 64, 00:19:25.949 "state": "configuring", 00:19:25.949 "raid_level": "concat", 00:19:25.949 "superblock": true, 00:19:25.949 "num_base_bdevs": 4, 00:19:25.949 "num_base_bdevs_discovered": 1, 00:19:25.949 "num_base_bdevs_operational": 4, 00:19:25.949 "base_bdevs_list": [ 00:19:25.949 { 00:19:25.949 "name": "BaseBdev1", 00:19:25.949 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:25.949 "is_configured": true, 00:19:25.949 "data_offset": 2048, 00:19:25.949 
"data_size": 63488 00:19:25.949 }, 00:19:25.949 { 00:19:25.949 "name": "BaseBdev2", 00:19:25.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.949 "is_configured": false, 00:19:25.949 "data_offset": 0, 00:19:25.949 "data_size": 0 00:19:25.949 }, 00:19:25.949 { 00:19:25.949 "name": "BaseBdev3", 00:19:25.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.949 "is_configured": false, 00:19:25.949 "data_offset": 0, 00:19:25.949 "data_size": 0 00:19:25.949 }, 00:19:25.949 { 00:19:25.949 "name": "BaseBdev4", 00:19:25.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.949 "is_configured": false, 00:19:25.949 "data_offset": 0, 00:19:25.949 "data_size": 0 00:19:25.949 } 00:19:25.949 ] 00:19:25.949 }' 00:19:25.949 02:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.949 02:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:26.516 [2024-05-15 02:20:14.462554] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.516 BaseBdev2 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:26.516 02:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:26.774 02:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:27.032 [ 00:19:27.032 { 00:19:27.032 "name": "BaseBdev2", 00:19:27.032 "aliases": [ 00:19:27.032 "a9f9d314-1261-11ef-99fd-bfc7c66e2865" 00:19:27.032 ], 00:19:27.032 "product_name": "Malloc disk", 00:19:27.032 "block_size": 512, 00:19:27.032 "num_blocks": 65536, 00:19:27.032 "uuid": "a9f9d314-1261-11ef-99fd-bfc7c66e2865", 00:19:27.032 "assigned_rate_limits": { 00:19:27.032 "rw_ios_per_sec": 0, 00:19:27.032 "rw_mbytes_per_sec": 0, 00:19:27.032 "r_mbytes_per_sec": 0, 00:19:27.032 "w_mbytes_per_sec": 0 00:19:27.032 }, 00:19:27.032 "claimed": true, 00:19:27.032 "claim_type": "exclusive_write", 00:19:27.032 "zoned": false, 00:19:27.032 "supported_io_types": { 00:19:27.032 "read": true, 00:19:27.032 "write": true, 00:19:27.032 "unmap": true, 00:19:27.032 "write_zeroes": true, 00:19:27.032 "flush": true, 00:19:27.032 "reset": true, 00:19:27.032 "compare": false, 00:19:27.032 "compare_and_write": false, 00:19:27.032 "abort": true, 00:19:27.032 "nvme_admin": false, 00:19:27.032 "nvme_io": false 00:19:27.032 }, 00:19:27.032 "memory_domains": [ 00:19:27.032 { 00:19:27.032 "dma_device_id": "system", 00:19:27.032 "dma_device_type": 1 00:19:27.032 }, 00:19:27.032 
{ 00:19:27.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.032 "dma_device_type": 2 00:19:27.032 } 00:19:27.032 ], 00:19:27.032 "driver_specific": {} 00:19:27.032 } 00:19:27.032 ] 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.032 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.603 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.603 "name": "Existed_Raid", 00:19:27.603 "uuid": "a97de9d8-1261-11ef-99fd-bfc7c66e2865", 00:19:27.603 "strip_size_kb": 64, 00:19:27.604 "state": "configuring", 00:19:27.604 "raid_level": "concat", 00:19:27.604 "superblock": true, 00:19:27.604 "num_base_bdevs": 4, 00:19:27.604 "num_base_bdevs_discovered": 2, 00:19:27.604 "num_base_bdevs_operational": 4, 00:19:27.604 "base_bdevs_list": [ 00:19:27.604 { 00:19:27.604 "name": "BaseBdev1", 00:19:27.604 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:27.604 "is_configured": true, 00:19:27.604 "data_offset": 2048, 00:19:27.604 "data_size": 63488 00:19:27.604 }, 00:19:27.604 { 00:19:27.604 "name": "BaseBdev2", 00:19:27.604 "uuid": "a9f9d314-1261-11ef-99fd-bfc7c66e2865", 00:19:27.604 "is_configured": true, 00:19:27.604 "data_offset": 2048, 00:19:27.604 "data_size": 63488 00:19:27.604 }, 00:19:27.604 { 00:19:27.604 "name": "BaseBdev3", 00:19:27.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.604 "is_configured": false, 00:19:27.604 "data_offset": 0, 00:19:27.604 "data_size": 0 00:19:27.604 }, 00:19:27.604 { 00:19:27.604 "name": "BaseBdev4", 00:19:27.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.604 "is_configured": false, 00:19:27.604 "data_offset": 0, 00:19:27.604 "data_size": 0 00:19:27.604 } 00:19:27.604 ] 00:19:27.604 }' 00:19:27.604 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:19:27.604 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.863 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:28.149 [2024-05-15 02:20:15.898622] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:28.149 BaseBdev3 00:19:28.149 02:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:19:28.149 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:28.149 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:28.149 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:28.149 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:28.149 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:28.149 02:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:28.428 [ 00:19:28.428 { 00:19:28.428 "name": "BaseBdev3", 00:19:28.428 "aliases": [ 00:19:28.428 "aad4f43d-1261-11ef-99fd-bfc7c66e2865" 00:19:28.428 ], 00:19:28.428 "product_name": "Malloc disk", 00:19:28.428 "block_size": 512, 00:19:28.428 "num_blocks": 65536, 00:19:28.428 "uuid": "aad4f43d-1261-11ef-99fd-bfc7c66e2865", 00:19:28.428 "assigned_rate_limits": { 00:19:28.428 "rw_ios_per_sec": 0, 00:19:28.428 "rw_mbytes_per_sec": 0, 00:19:28.428 "r_mbytes_per_sec": 0, 00:19:28.428 "w_mbytes_per_sec": 0 00:19:28.428 }, 00:19:28.428 "claimed": true, 00:19:28.428 "claim_type": "exclusive_write", 00:19:28.428 "zoned": false, 00:19:28.428 "supported_io_types": { 00:19:28.428 "read": true, 00:19:28.428 "write": true, 00:19:28.428 "unmap": true, 00:19:28.428 "write_zeroes": true, 00:19:28.428 "flush": true, 00:19:28.428 "reset": true, 00:19:28.428 "compare": false, 00:19:28.428 "compare_and_write": false, 00:19:28.428 "abort": true, 00:19:28.428 "nvme_admin": false, 00:19:28.428 "nvme_io": false 00:19:28.428 }, 00:19:28.428 "memory_domains": [ 00:19:28.428 { 00:19:28.428 "dma_device_id": "system", 00:19:28.428 "dma_device_type": 1 00:19:28.428 }, 00:19:28.428 { 00:19:28.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.428 "dma_device_type": 2 00:19:28.428 } 00:19:28.428 ], 00:19:28.428 "driver_specific": {} 00:19:28.428 } 00:19:28.428 ] 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:28.428 02:20:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.428 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.687 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.687 "name": "Existed_Raid", 00:19:28.687 "uuid": "a97de9d8-1261-11ef-99fd-bfc7c66e2865", 00:19:28.687 "strip_size_kb": 64, 00:19:28.687 "state": "configuring", 00:19:28.687 "raid_level": "concat", 00:19:28.687 "superblock": true, 00:19:28.687 "num_base_bdevs": 4, 00:19:28.687 "num_base_bdevs_discovered": 3, 00:19:28.687 "num_base_bdevs_operational": 4, 00:19:28.687 "base_bdevs_list": [ 00:19:28.687 { 00:19:28.687 "name": "BaseBdev1", 00:19:28.687 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:28.687 "is_configured": true, 00:19:28.687 "data_offset": 2048, 00:19:28.687 "data_size": 63488 00:19:28.687 }, 00:19:28.687 { 00:19:28.687 "name": "BaseBdev2", 00:19:28.687 "uuid": "a9f9d314-1261-11ef-99fd-bfc7c66e2865", 00:19:28.687 "is_configured": true, 00:19:28.687 "data_offset": 2048, 00:19:28.687 "data_size": 63488 00:19:28.687 }, 00:19:28.687 { 00:19:28.687 "name": "BaseBdev3", 00:19:28.687 "uuid": "aad4f43d-1261-11ef-99fd-bfc7c66e2865", 00:19:28.687 "is_configured": true, 00:19:28.687 "data_offset": 2048, 00:19:28.687 "data_size": 63488 00:19:28.687 }, 00:19:28.687 { 00:19:28.687 "name": "BaseBdev4", 00:19:28.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.687 "is_configured": false, 00:19:28.687 "data_offset": 0, 00:19:28.687 "data_size": 0 00:19:28.687 } 00:19:28.687 ] 00:19:28.687 }' 00:19:28.687 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.687 02:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.254 02:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:29.254 [2024-05-15 02:20:17.190706] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:29.254 [2024-05-15 02:20:17.190776] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x829dc3a00 00:19:29.254 [2024-05-15 02:20:17.190781] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:29.254 [2024-05-15 02:20:17.190801] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x829e26ec0 00:19:29.254 [2024-05-15 02:20:17.190844] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829dc3a00 00:19:29.254 [2024-05-15 02:20:17.190847] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x829dc3a00 00:19:29.254 [2024-05-15 02:20:17.190865] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.254 BaseBdev4 00:19:29.254 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:19:29.254 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:19:29.254 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:29.254 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:29.254 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:29.254 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:29.254 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.512 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:30.076 [ 00:19:30.076 { 00:19:30.076 "name": "BaseBdev4", 00:19:30.076 "aliases": [ 00:19:30.076 "ab9a1bff-1261-11ef-99fd-bfc7c66e2865" 00:19:30.076 ], 00:19:30.076 "product_name": "Malloc disk", 00:19:30.076 "block_size": 512, 00:19:30.076 "num_blocks": 65536, 00:19:30.076 "uuid": "ab9a1bff-1261-11ef-99fd-bfc7c66e2865", 00:19:30.076 "assigned_rate_limits": { 00:19:30.076 "rw_ios_per_sec": 0, 00:19:30.076 "rw_mbytes_per_sec": 0, 00:19:30.076 "r_mbytes_per_sec": 0, 00:19:30.076 "w_mbytes_per_sec": 0 00:19:30.076 }, 00:19:30.076 "claimed": true, 00:19:30.076 "claim_type": "exclusive_write", 00:19:30.076 "zoned": false, 00:19:30.076 "supported_io_types": { 00:19:30.076 "read": true, 00:19:30.076 "write": true, 00:19:30.076 "unmap": true, 00:19:30.076 "write_zeroes": true, 00:19:30.076 "flush": true, 00:19:30.076 "reset": true, 00:19:30.076 "compare": false, 00:19:30.076 "compare_and_write": false, 00:19:30.076 "abort": true, 00:19:30.076 "nvme_admin": false, 00:19:30.076 "nvme_io": false 00:19:30.076 }, 00:19:30.076 "memory_domains": [ 00:19:30.076 { 00:19:30.076 "dma_device_id": "system", 00:19:30.076 "dma_device_type": 1 00:19:30.076 }, 00:19:30.076 { 00:19:30.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.076 "dma_device_type": 2 00:19:30.076 } 00:19:30.076 ], 00:19:30.076 "driver_specific": {} 00:19:30.076 } 00:19:30.076 ] 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.076 02:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.334 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.334 "name": "Existed_Raid", 00:19:30.334 "uuid": "a97de9d8-1261-11ef-99fd-bfc7c66e2865", 00:19:30.334 "strip_size_kb": 64, 00:19:30.334 "state": "online", 00:19:30.334 "raid_level": "concat", 00:19:30.334 "superblock": true, 00:19:30.334 "num_base_bdevs": 4, 00:19:30.334 "num_base_bdevs_discovered": 4, 00:19:30.334 "num_base_bdevs_operational": 4, 00:19:30.334 "base_bdevs_list": [ 00:19:30.334 { 00:19:30.334 "name": "BaseBdev1", 00:19:30.334 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:30.334 "is_configured": true, 00:19:30.334 "data_offset": 2048, 00:19:30.334 "data_size": 63488 00:19:30.334 }, 00:19:30.334 { 00:19:30.334 "name": "BaseBdev2", 00:19:30.334 "uuid": "a9f9d314-1261-11ef-99fd-bfc7c66e2865", 00:19:30.334 "is_configured": true, 00:19:30.334 "data_offset": 2048, 00:19:30.334 "data_size": 63488 00:19:30.334 }, 00:19:30.334 { 00:19:30.334 "name": "BaseBdev3", 00:19:30.334 "uuid": "aad4f43d-1261-11ef-99fd-bfc7c66e2865", 00:19:30.334 "is_configured": true, 00:19:30.334 "data_offset": 2048, 00:19:30.334 "data_size": 63488 00:19:30.334 }, 00:19:30.334 { 00:19:30.334 "name": "BaseBdev4", 00:19:30.334 "uuid": "ab9a1bff-1261-11ef-99fd-bfc7c66e2865", 00:19:30.334 "is_configured": true, 00:19:30.334 "data_offset": 2048, 00:19:30.334 "data_size": 63488 00:19:30.334 } 00:19:30.334 ] 00:19:30.334 }' 00:19:30.334 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.334 02:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.634 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:19:30.634 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:30.634 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:30.634 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:30.634 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:30.634 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:19:30.634 02:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:30.634 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:30.634 [2024-05-15 02:20:18.650723] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.894 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:30.894 "name": "Existed_Raid", 00:19:30.894 "aliases": [ 00:19:30.894 "a97de9d8-1261-11ef-99fd-bfc7c66e2865" 00:19:30.894 ], 00:19:30.894 "product_name": "Raid Volume", 00:19:30.894 "block_size": 512, 00:19:30.894 "num_blocks": 253952, 00:19:30.894 "uuid": "a97de9d8-1261-11ef-99fd-bfc7c66e2865", 00:19:30.894 "assigned_rate_limits": { 00:19:30.894 "rw_ios_per_sec": 0, 00:19:30.894 "rw_mbytes_per_sec": 0, 00:19:30.894 "r_mbytes_per_sec": 0, 00:19:30.894 "w_mbytes_per_sec": 0 00:19:30.894 }, 00:19:30.894 "claimed": false, 00:19:30.894 "zoned": false, 00:19:30.894 "supported_io_types": { 00:19:30.894 "read": true, 00:19:30.894 "write": true, 00:19:30.894 "unmap": true, 00:19:30.894 "write_zeroes": true, 00:19:30.894 "flush": true, 00:19:30.894 "reset": true, 00:19:30.894 "compare": false, 00:19:30.894 "compare_and_write": false, 00:19:30.894 "abort": false, 00:19:30.894 "nvme_admin": false, 00:19:30.894 "nvme_io": false 00:19:30.894 }, 00:19:30.894 "memory_domains": [ 00:19:30.894 { 00:19:30.894 "dma_device_id": "system", 00:19:30.894 "dma_device_type": 1 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.894 "dma_device_type": 2 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "dma_device_id": "system", 00:19:30.894 "dma_device_type": 1 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.894 "dma_device_type": 2 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "dma_device_id": "system", 00:19:30.894 "dma_device_type": 1 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.894 "dma_device_type": 2 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "dma_device_id": "system", 00:19:30.894 "dma_device_type": 1 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.894 "dma_device_type": 2 00:19:30.894 } 00:19:30.894 ], 00:19:30.894 "driver_specific": { 00:19:30.894 "raid": { 00:19:30.894 "uuid": "a97de9d8-1261-11ef-99fd-bfc7c66e2865", 00:19:30.894 "strip_size_kb": 64, 00:19:30.894 "state": "online", 00:19:30.894 "raid_level": "concat", 00:19:30.894 "superblock": true, 00:19:30.894 "num_base_bdevs": 4, 00:19:30.894 "num_base_bdevs_discovered": 4, 00:19:30.894 "num_base_bdevs_operational": 4, 00:19:30.894 "base_bdevs_list": [ 00:19:30.894 { 00:19:30.894 "name": "BaseBdev1", 00:19:30.894 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:30.894 "is_configured": true, 00:19:30.894 "data_offset": 2048, 00:19:30.894 "data_size": 63488 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "name": "BaseBdev2", 00:19:30.894 "uuid": "a9f9d314-1261-11ef-99fd-bfc7c66e2865", 00:19:30.894 "is_configured": true, 00:19:30.894 "data_offset": 2048, 00:19:30.894 "data_size": 63488 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "name": "BaseBdev3", 00:19:30.894 "uuid": "aad4f43d-1261-11ef-99fd-bfc7c66e2865", 00:19:30.894 "is_configured": true, 00:19:30.894 "data_offset": 2048, 00:19:30.894 "data_size": 63488 00:19:30.894 }, 00:19:30.894 { 00:19:30.894 "name": "BaseBdev4", 
00:19:30.894 "uuid": "ab9a1bff-1261-11ef-99fd-bfc7c66e2865", 00:19:30.894 "is_configured": true, 00:19:30.894 "data_offset": 2048, 00:19:30.894 "data_size": 63488 00:19:30.894 } 00:19:30.894 ] 00:19:30.894 } 00:19:30.894 } 00:19:30.894 }' 00:19:30.894 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.894 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:19:30.894 BaseBdev2 00:19:30.894 BaseBdev3 00:19:30.894 BaseBdev4' 00:19:30.894 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:30.894 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:30.894 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:31.153 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:31.153 "name": "BaseBdev1", 00:19:31.153 "aliases": [ 00:19:31.153 "a884e00f-1261-11ef-99fd-bfc7c66e2865" 00:19:31.153 ], 00:19:31.153 "product_name": "Malloc disk", 00:19:31.153 "block_size": 512, 00:19:31.153 "num_blocks": 65536, 00:19:31.153 "uuid": "a884e00f-1261-11ef-99fd-bfc7c66e2865", 00:19:31.153 "assigned_rate_limits": { 00:19:31.153 "rw_ios_per_sec": 0, 00:19:31.153 "rw_mbytes_per_sec": 0, 00:19:31.153 "r_mbytes_per_sec": 0, 00:19:31.153 "w_mbytes_per_sec": 0 00:19:31.153 }, 00:19:31.153 "claimed": true, 00:19:31.153 "claim_type": "exclusive_write", 00:19:31.153 "zoned": false, 00:19:31.153 "supported_io_types": { 00:19:31.153 "read": true, 00:19:31.153 "write": true, 00:19:31.153 "unmap": true, 00:19:31.153 "write_zeroes": true, 00:19:31.153 "flush": true, 00:19:31.153 "reset": true, 00:19:31.153 "compare": false, 00:19:31.153 "compare_and_write": false, 00:19:31.153 "abort": true, 00:19:31.153 "nvme_admin": false, 00:19:31.153 "nvme_io": false 00:19:31.153 }, 00:19:31.153 "memory_domains": [ 00:19:31.153 { 00:19:31.153 "dma_device_id": "system", 00:19:31.153 "dma_device_type": 1 00:19:31.153 }, 00:19:31.153 { 00:19:31.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.153 "dma_device_type": 2 00:19:31.153 } 00:19:31.153 ], 00:19:31.153 "driver_specific": {} 00:19:31.153 }' 00:19:31.153 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.153 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.153 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:31.153 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.153 02:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:31.153 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:31.412 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:31.412 "name": "BaseBdev2", 00:19:31.412 "aliases": [ 00:19:31.412 "a9f9d314-1261-11ef-99fd-bfc7c66e2865" 00:19:31.412 ], 00:19:31.412 "product_name": "Malloc disk", 00:19:31.412 "block_size": 512, 00:19:31.412 "num_blocks": 65536, 00:19:31.412 "uuid": "a9f9d314-1261-11ef-99fd-bfc7c66e2865", 00:19:31.412 "assigned_rate_limits": { 00:19:31.412 "rw_ios_per_sec": 0, 00:19:31.412 "rw_mbytes_per_sec": 0, 00:19:31.412 "r_mbytes_per_sec": 0, 00:19:31.412 "w_mbytes_per_sec": 0 00:19:31.412 }, 00:19:31.412 "claimed": true, 00:19:31.412 "claim_type": "exclusive_write", 00:19:31.412 "zoned": false, 00:19:31.412 "supported_io_types": { 00:19:31.412 "read": true, 00:19:31.412 "write": true, 00:19:31.412 "unmap": true, 00:19:31.412 "write_zeroes": true, 00:19:31.412 "flush": true, 00:19:31.412 "reset": true, 00:19:31.412 "compare": false, 00:19:31.412 "compare_and_write": false, 00:19:31.412 "abort": true, 00:19:31.412 "nvme_admin": false, 00:19:31.412 "nvme_io": false 00:19:31.412 }, 00:19:31.412 "memory_domains": [ 00:19:31.412 { 00:19:31.412 "dma_device_id": "system", 00:19:31.412 "dma_device_type": 1 00:19:31.412 }, 00:19:31.413 { 00:19:31.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.413 "dma_device_type": 2 00:19:31.413 } 00:19:31.413 ], 00:19:31.413 "driver_specific": {} 00:19:31.413 }' 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:19:31.413 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:31.671 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:31.671 "name": "BaseBdev3", 00:19:31.671 "aliases": [ 00:19:31.671 "aad4f43d-1261-11ef-99fd-bfc7c66e2865" 00:19:31.671 ], 00:19:31.671 "product_name": "Malloc disk", 00:19:31.671 "block_size": 512, 00:19:31.671 "num_blocks": 65536, 00:19:31.671 "uuid": "aad4f43d-1261-11ef-99fd-bfc7c66e2865", 00:19:31.671 "assigned_rate_limits": { 00:19:31.671 "rw_ios_per_sec": 0, 00:19:31.671 "rw_mbytes_per_sec": 0, 00:19:31.671 "r_mbytes_per_sec": 0, 00:19:31.671 "w_mbytes_per_sec": 0 00:19:31.671 }, 00:19:31.671 "claimed": true, 00:19:31.671 "claim_type": "exclusive_write", 00:19:31.671 "zoned": false, 00:19:31.671 "supported_io_types": { 00:19:31.671 "read": true, 00:19:31.671 "write": true, 00:19:31.671 "unmap": true, 00:19:31.671 "write_zeroes": true, 00:19:31.671 "flush": true, 00:19:31.671 "reset": true, 00:19:31.671 "compare": false, 00:19:31.671 "compare_and_write": false, 00:19:31.671 "abort": true, 00:19:31.671 "nvme_admin": false, 00:19:31.671 "nvme_io": false 00:19:31.671 }, 00:19:31.671 "memory_domains": [ 00:19:31.671 { 00:19:31.671 "dma_device_id": "system", 00:19:31.671 "dma_device_type": 1 00:19:31.671 }, 00:19:31.671 { 00:19:31.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.671 "dma_device_type": 2 00:19:31.671 } 00:19:31.671 ], 00:19:31.671 "driver_specific": {} 00:19:31.671 }' 00:19:31.671 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.671 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.671 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:31.672 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.672 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.672 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:31.930 02:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:32.187 "name": "BaseBdev4", 00:19:32.187 "aliases": [ 00:19:32.187 "ab9a1bff-1261-11ef-99fd-bfc7c66e2865" 00:19:32.187 ], 00:19:32.187 "product_name": "Malloc disk", 00:19:32.187 "block_size": 512, 
00:19:32.187 "num_blocks": 65536, 00:19:32.187 "uuid": "ab9a1bff-1261-11ef-99fd-bfc7c66e2865", 00:19:32.187 "assigned_rate_limits": { 00:19:32.187 "rw_ios_per_sec": 0, 00:19:32.187 "rw_mbytes_per_sec": 0, 00:19:32.187 "r_mbytes_per_sec": 0, 00:19:32.187 "w_mbytes_per_sec": 0 00:19:32.187 }, 00:19:32.187 "claimed": true, 00:19:32.187 "claim_type": "exclusive_write", 00:19:32.187 "zoned": false, 00:19:32.187 "supported_io_types": { 00:19:32.187 "read": true, 00:19:32.187 "write": true, 00:19:32.187 "unmap": true, 00:19:32.187 "write_zeroes": true, 00:19:32.187 "flush": true, 00:19:32.187 "reset": true, 00:19:32.187 "compare": false, 00:19:32.187 "compare_and_write": false, 00:19:32.187 "abort": true, 00:19:32.187 "nvme_admin": false, 00:19:32.187 "nvme_io": false 00:19:32.187 }, 00:19:32.187 "memory_domains": [ 00:19:32.187 { 00:19:32.187 "dma_device_id": "system", 00:19:32.187 "dma_device_type": 1 00:19:32.187 }, 00:19:32.187 { 00:19:32.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.187 "dma_device_type": 2 00:19:32.187 } 00:19:32.187 ], 00:19:32.187 "driver_specific": {} 00:19:32.187 }' 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:32.187 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:32.444 [2024-05-15 02:20:20.326814] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.444 [2024-05-15 02:20:20.326848] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.444 [2024-05-15 02:20:20.326870] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- 
# verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.444 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.701 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.701 "name": "Existed_Raid", 00:19:32.701 "uuid": "a97de9d8-1261-11ef-99fd-bfc7c66e2865", 00:19:32.701 "strip_size_kb": 64, 00:19:32.701 "state": "offline", 00:19:32.701 "raid_level": "concat", 00:19:32.701 "superblock": true, 00:19:32.701 "num_base_bdevs": 4, 00:19:32.701 "num_base_bdevs_discovered": 3, 00:19:32.701 "num_base_bdevs_operational": 3, 00:19:32.701 "base_bdevs_list": [ 00:19:32.701 { 00:19:32.701 "name": null, 00:19:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.701 "is_configured": false, 00:19:32.701 "data_offset": 2048, 00:19:32.701 "data_size": 63488 00:19:32.701 }, 00:19:32.701 { 00:19:32.701 "name": "BaseBdev2", 00:19:32.701 "uuid": "a9f9d314-1261-11ef-99fd-bfc7c66e2865", 00:19:32.701 "is_configured": true, 00:19:32.701 "data_offset": 2048, 00:19:32.701 "data_size": 63488 00:19:32.701 }, 00:19:32.701 { 00:19:32.701 "name": "BaseBdev3", 00:19:32.701 "uuid": "aad4f43d-1261-11ef-99fd-bfc7c66e2865", 00:19:32.701 "is_configured": true, 00:19:32.701 "data_offset": 2048, 00:19:32.701 "data_size": 63488 00:19:32.701 }, 00:19:32.701 { 00:19:32.701 "name": "BaseBdev4", 00:19:32.701 "uuid": "ab9a1bff-1261-11ef-99fd-bfc7c66e2865", 00:19:32.701 "is_configured": true, 00:19:32.701 "data_offset": 2048, 00:19:32.701 "data_size": 63488 00:19:32.701 } 00:19:32.701 ] 00:19:32.701 }' 00:19:32.701 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.701 02:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.029 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:33.029 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.029 02:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.029 02:20:20 bdev_raid.raid_state_function_test_sb -- 
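[Editor's sketch, not part of the captured run] Because has_redundancy returns 1 for concat, deleting a single base bdev is expected to take the whole raid from online to offline, which is what the state re-read above confirms. The delete command and jq filter come from the log; the state check is illustrative.

  sock=/var/tmp/spdk-raid.sock
  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # concat has no redundancy, so losing BaseBdev1 must force the raid offline
  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
  state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid").state')
  [[ $state == offline ]]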
bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:33.288 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:33.288 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.288 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:33.546 [2024-05-15 02:20:21.399691] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.546 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.546 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.546 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.546 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:33.804 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:33.804 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.804 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:34.062 [2024-05-15 02:20:21.852469] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:34.062 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:34.062 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:34.062 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.062 02:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:34.320 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:34.320 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:34.320 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:34.320 [2024-05-15 02:20:22.297211] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:34.320 [2024-05-15 02:20:22.297242] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829dc3a00 name Existed_Raid, state offline 00:19:34.320 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:34.320 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:34.320 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.320 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:19:34.578 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:19:34.578 02:20:22 bdev_raid.raid_state_function_test_sb -- 
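[Editor's sketch, not part of the captured run] The teardown loop above walks the remaining base bdevs, deletes each malloc bdev, and checks after every deletion that the raid bdev name query still answers; once the last base bdev is gone the raid bdev is cleaned up and the query comes back empty. Commands and the jq name filter are from the log; the loop form is illustrative.

  sock=/var/tmp/spdk-raid.sock
  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
      "$rpc" -s "$sock" bdev_malloc_delete "$bdev"
  done
  # after the last base bdev is removed the raid bdev disappears, so this prints nothing
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'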
bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:19:34.578 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:19:34.578 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:19:34.578 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:34.578 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:34.836 BaseBdev2 00:19:34.836 02:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:19:34.836 02:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:34.836 02:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:34.836 02:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:34.836 02:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:34.836 02:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:34.836 02:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.096 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:35.355 [ 00:19:35.355 { 00:19:35.355 "name": "BaseBdev2", 00:19:35.355 "aliases": [ 00:19:35.355 "aef259e3-1261-11ef-99fd-bfc7c66e2865" 00:19:35.355 ], 00:19:35.355 "product_name": "Malloc disk", 00:19:35.355 "block_size": 512, 00:19:35.355 "num_blocks": 65536, 00:19:35.355 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:35.355 "assigned_rate_limits": { 00:19:35.355 "rw_ios_per_sec": 0, 00:19:35.355 "rw_mbytes_per_sec": 0, 00:19:35.355 "r_mbytes_per_sec": 0, 00:19:35.355 "w_mbytes_per_sec": 0 00:19:35.355 }, 00:19:35.355 "claimed": false, 00:19:35.355 "zoned": false, 00:19:35.355 "supported_io_types": { 00:19:35.355 "read": true, 00:19:35.355 "write": true, 00:19:35.355 "unmap": true, 00:19:35.355 "write_zeroes": true, 00:19:35.355 "flush": true, 00:19:35.355 "reset": true, 00:19:35.355 "compare": false, 00:19:35.355 "compare_and_write": false, 00:19:35.355 "abort": true, 00:19:35.355 "nvme_admin": false, 00:19:35.355 "nvme_io": false 00:19:35.355 }, 00:19:35.355 "memory_domains": [ 00:19:35.355 { 00:19:35.355 "dma_device_id": "system", 00:19:35.355 "dma_device_type": 1 00:19:35.355 }, 00:19:35.355 { 00:19:35.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.355 "dma_device_type": 2 00:19:35.355 } 00:19:35.355 ], 00:19:35.355 "driver_specific": {} 00:19:35.355 } 00:19:35.355 ] 00:19:35.355 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:35.355 02:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:35.355 02:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:35.355 02:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:35.628 BaseBdev3 
00:19:35.628 02:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:19:35.628 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:35.628 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:35.628 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:35.628 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:35.628 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:35.628 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.885 02:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:36.463 [ 00:19:36.463 { 00:19:36.463 "name": "BaseBdev3", 00:19:36.463 "aliases": [ 00:19:36.463 "af5d2b39-1261-11ef-99fd-bfc7c66e2865" 00:19:36.463 ], 00:19:36.463 "product_name": "Malloc disk", 00:19:36.463 "block_size": 512, 00:19:36.463 "num_blocks": 65536, 00:19:36.463 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:36.463 "assigned_rate_limits": { 00:19:36.463 "rw_ios_per_sec": 0, 00:19:36.463 "rw_mbytes_per_sec": 0, 00:19:36.463 "r_mbytes_per_sec": 0, 00:19:36.463 "w_mbytes_per_sec": 0 00:19:36.463 }, 00:19:36.463 "claimed": false, 00:19:36.463 "zoned": false, 00:19:36.463 "supported_io_types": { 00:19:36.463 "read": true, 00:19:36.463 "write": true, 00:19:36.463 "unmap": true, 00:19:36.463 "write_zeroes": true, 00:19:36.463 "flush": true, 00:19:36.463 "reset": true, 00:19:36.463 "compare": false, 00:19:36.463 "compare_and_write": false, 00:19:36.463 "abort": true, 00:19:36.463 "nvme_admin": false, 00:19:36.463 "nvme_io": false 00:19:36.463 }, 00:19:36.463 "memory_domains": [ 00:19:36.463 { 00:19:36.463 "dma_device_id": "system", 00:19:36.463 "dma_device_type": 1 00:19:36.463 }, 00:19:36.463 { 00:19:36.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.463 "dma_device_type": 2 00:19:36.463 } 00:19:36.463 ], 00:19:36.463 "driver_specific": {} 00:19:36.463 } 00:19:36.463 ] 00:19:36.463 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:36.463 02:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:36.463 02:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:36.463 02:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:36.463 BaseBdev4 00:19:36.743 02:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:19:36.743 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:19:36.743 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:36.743 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:36.743 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:36.743 02:20:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:36.743 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.743 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:37.001 [ 00:19:37.001 { 00:19:37.001 "name": "BaseBdev4", 00:19:37.001 "aliases": [ 00:19:37.001 "aff17e31-1261-11ef-99fd-bfc7c66e2865" 00:19:37.001 ], 00:19:37.001 "product_name": "Malloc disk", 00:19:37.001 "block_size": 512, 00:19:37.001 "num_blocks": 65536, 00:19:37.001 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:37.001 "assigned_rate_limits": { 00:19:37.001 "rw_ios_per_sec": 0, 00:19:37.001 "rw_mbytes_per_sec": 0, 00:19:37.001 "r_mbytes_per_sec": 0, 00:19:37.001 "w_mbytes_per_sec": 0 00:19:37.001 }, 00:19:37.001 "claimed": false, 00:19:37.001 "zoned": false, 00:19:37.001 "supported_io_types": { 00:19:37.001 "read": true, 00:19:37.001 "write": true, 00:19:37.001 "unmap": true, 00:19:37.001 "write_zeroes": true, 00:19:37.001 "flush": true, 00:19:37.001 "reset": true, 00:19:37.001 "compare": false, 00:19:37.001 "compare_and_write": false, 00:19:37.001 "abort": true, 00:19:37.001 "nvme_admin": false, 00:19:37.001 "nvme_io": false 00:19:37.001 }, 00:19:37.001 "memory_domains": [ 00:19:37.001 { 00:19:37.001 "dma_device_id": "system", 00:19:37.001 "dma_device_type": 1 00:19:37.001 }, 00:19:37.001 { 00:19:37.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.001 "dma_device_type": 2 00:19:37.001 } 00:19:37.001 ], 00:19:37.001 "driver_specific": {} 00:19:37.001 } 00:19:37.001 ] 00:19:37.001 02:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:37.001 02:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:37.001 02:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:37.001 02:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:37.259 [2024-05-15 02:20:25.182221] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:37.259 [2024-05-15 02:20:25.182300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:37.259 [2024-05-15 02:20:25.182313] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.259 [2024-05-15 02:20:25.182906] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:37.259 [2024-05-15 02:20:25.182948] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 
00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.259 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.517 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.517 "name": "Existed_Raid", 00:19:37.517 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:37.517 "strip_size_kb": 64, 00:19:37.517 "state": "configuring", 00:19:37.517 "raid_level": "concat", 00:19:37.517 "superblock": true, 00:19:37.517 "num_base_bdevs": 4, 00:19:37.517 "num_base_bdevs_discovered": 3, 00:19:37.517 "num_base_bdevs_operational": 4, 00:19:37.517 "base_bdevs_list": [ 00:19:37.517 { 00:19:37.517 "name": "BaseBdev1", 00:19:37.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.517 "is_configured": false, 00:19:37.517 "data_offset": 0, 00:19:37.517 "data_size": 0 00:19:37.517 }, 00:19:37.517 { 00:19:37.517 "name": "BaseBdev2", 00:19:37.517 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:37.517 "is_configured": true, 00:19:37.517 "data_offset": 2048, 00:19:37.517 "data_size": 63488 00:19:37.517 }, 00:19:37.517 { 00:19:37.517 "name": "BaseBdev3", 00:19:37.517 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:37.517 "is_configured": true, 00:19:37.517 "data_offset": 2048, 00:19:37.517 "data_size": 63488 00:19:37.517 }, 00:19:37.517 { 00:19:37.517 "name": "BaseBdev4", 00:19:37.517 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:37.517 "is_configured": true, 00:19:37.517 "data_offset": 2048, 00:19:37.517 "data_size": 63488 00:19:37.517 } 00:19:37.517 ] 00:19:37.517 }' 00:19:37.517 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.517 02:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.081 02:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:38.339 [2024-05-15 02:20:26.146271] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- 
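[Editor's sketch, not part of the captured run] The create step above uses the superblock flag (-s) and names BaseBdev1 before it exists, so the raid claims the three available base bdevs and stays in the "configuring" state until the fourth appears. The bdev_raid_create invocation is verbatim from the log; the state query afterwards is illustrative.

  sock=/var/tmp/spdk-raid.sock
  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create a superblock concat raid; BaseBdev1 is still missing at this point
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
         -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "Existed_Raid").state'   # expected: configuring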
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.339 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.597 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.597 "name": "Existed_Raid", 00:19:38.597 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:38.597 "strip_size_kb": 64, 00:19:38.597 "state": "configuring", 00:19:38.597 "raid_level": "concat", 00:19:38.597 "superblock": true, 00:19:38.597 "num_base_bdevs": 4, 00:19:38.597 "num_base_bdevs_discovered": 2, 00:19:38.597 "num_base_bdevs_operational": 4, 00:19:38.597 "base_bdevs_list": [ 00:19:38.597 { 00:19:38.597 "name": "BaseBdev1", 00:19:38.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.597 "is_configured": false, 00:19:38.597 "data_offset": 0, 00:19:38.597 "data_size": 0 00:19:38.597 }, 00:19:38.597 { 00:19:38.597 "name": null, 00:19:38.597 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:38.597 "is_configured": false, 00:19:38.597 "data_offset": 2048, 00:19:38.597 "data_size": 63488 00:19:38.597 }, 00:19:38.597 { 00:19:38.597 "name": "BaseBdev3", 00:19:38.597 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:38.597 "is_configured": true, 00:19:38.597 "data_offset": 2048, 00:19:38.597 "data_size": 63488 00:19:38.597 }, 00:19:38.597 { 00:19:38.597 "name": "BaseBdev4", 00:19:38.597 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:38.597 "is_configured": true, 00:19:38.597 "data_offset": 2048, 00:19:38.597 "data_size": 63488 00:19:38.597 } 00:19:38.597 ] 00:19:38.597 }' 00:19:38.597 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.597 02:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.855 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.855 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:39.113 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:19:39.113 02:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:39.371 [2024-05-15 02:20:27.178432] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.371 BaseBdev1 00:19:39.371 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:19:39.371 02:20:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:39.371 02:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:39.371 02:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:39.371 02:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:39.371 02:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:39.371 02:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:39.631 02:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:39.889 [ 00:19:39.889 { 00:19:39.889 "name": "BaseBdev1", 00:19:39.889 "aliases": [ 00:19:39.889 "b18e1d6c-1261-11ef-99fd-bfc7c66e2865" 00:19:39.889 ], 00:19:39.889 "product_name": "Malloc disk", 00:19:39.889 "block_size": 512, 00:19:39.889 "num_blocks": 65536, 00:19:39.889 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:39.889 "assigned_rate_limits": { 00:19:39.889 "rw_ios_per_sec": 0, 00:19:39.889 "rw_mbytes_per_sec": 0, 00:19:39.889 "r_mbytes_per_sec": 0, 00:19:39.889 "w_mbytes_per_sec": 0 00:19:39.889 }, 00:19:39.889 "claimed": true, 00:19:39.889 "claim_type": "exclusive_write", 00:19:39.889 "zoned": false, 00:19:39.889 "supported_io_types": { 00:19:39.889 "read": true, 00:19:39.889 "write": true, 00:19:39.889 "unmap": true, 00:19:39.889 "write_zeroes": true, 00:19:39.889 "flush": true, 00:19:39.889 "reset": true, 00:19:39.889 "compare": false, 00:19:39.889 "compare_and_write": false, 00:19:39.889 "abort": true, 00:19:39.889 "nvme_admin": false, 00:19:39.889 "nvme_io": false 00:19:39.889 }, 00:19:39.889 "memory_domains": [ 00:19:39.889 { 00:19:39.889 "dma_device_id": "system", 00:19:39.889 "dma_device_type": 1 00:19:39.889 }, 00:19:39.889 { 00:19:39.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.889 "dma_device_type": 2 00:19:39.889 } 00:19:39.889 ], 00:19:39.889 "driver_specific": {} 00:19:39.889 } 00:19:39.889 ] 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.889 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.890 02:20:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.890 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.890 02:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.148 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.148 "name": "Existed_Raid", 00:19:40.148 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:40.148 "strip_size_kb": 64, 00:19:40.148 "state": "configuring", 00:19:40.148 "raid_level": "concat", 00:19:40.148 "superblock": true, 00:19:40.148 "num_base_bdevs": 4, 00:19:40.148 "num_base_bdevs_discovered": 3, 00:19:40.148 "num_base_bdevs_operational": 4, 00:19:40.148 "base_bdevs_list": [ 00:19:40.148 { 00:19:40.148 "name": "BaseBdev1", 00:19:40.148 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:40.148 "is_configured": true, 00:19:40.148 "data_offset": 2048, 00:19:40.148 "data_size": 63488 00:19:40.148 }, 00:19:40.148 { 00:19:40.148 "name": null, 00:19:40.148 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:40.148 "is_configured": false, 00:19:40.148 "data_offset": 2048, 00:19:40.148 "data_size": 63488 00:19:40.148 }, 00:19:40.148 { 00:19:40.148 "name": "BaseBdev3", 00:19:40.148 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:40.148 "is_configured": true, 00:19:40.148 "data_offset": 2048, 00:19:40.148 "data_size": 63488 00:19:40.148 }, 00:19:40.148 { 00:19:40.148 "name": "BaseBdev4", 00:19:40.148 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:40.148 "is_configured": true, 00:19:40.148 "data_offset": 2048, 00:19:40.148 "data_size": 63488 00:19:40.148 } 00:19:40.148 ] 00:19:40.148 }' 00:19:40.148 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.148 02:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.407 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.407 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:40.667 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:40.667 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:40.927 [2024-05-15 02:20:28.922423] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.927 02:20:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.927 02:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.495 02:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.495 "name": "Existed_Raid", 00:19:41.495 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:41.495 "strip_size_kb": 64, 00:19:41.495 "state": "configuring", 00:19:41.495 "raid_level": "concat", 00:19:41.495 "superblock": true, 00:19:41.495 "num_base_bdevs": 4, 00:19:41.495 "num_base_bdevs_discovered": 2, 00:19:41.495 "num_base_bdevs_operational": 4, 00:19:41.495 "base_bdevs_list": [ 00:19:41.495 { 00:19:41.495 "name": "BaseBdev1", 00:19:41.495 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:41.495 "is_configured": true, 00:19:41.495 "data_offset": 2048, 00:19:41.495 "data_size": 63488 00:19:41.495 }, 00:19:41.495 { 00:19:41.495 "name": null, 00:19:41.495 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:41.495 "is_configured": false, 00:19:41.495 "data_offset": 2048, 00:19:41.495 "data_size": 63488 00:19:41.495 }, 00:19:41.495 { 00:19:41.495 "name": null, 00:19:41.495 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:41.495 "is_configured": false, 00:19:41.495 "data_offset": 2048, 00:19:41.495 "data_size": 63488 00:19:41.495 }, 00:19:41.495 { 00:19:41.495 "name": "BaseBdev4", 00:19:41.495 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:41.495 "is_configured": true, 00:19:41.495 "data_offset": 2048, 00:19:41.495 "data_size": 63488 00:19:41.495 } 00:19:41.495 ] 00:19:41.495 }' 00:19:41.495 02:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.495 02:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.754 02:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.754 02:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:42.049 02:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:19:42.049 02:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:42.340 [2024-05-15 02:20:30.198532] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.340 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.599 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.599 "name": "Existed_Raid", 00:19:42.599 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:42.599 "strip_size_kb": 64, 00:19:42.599 "state": "configuring", 00:19:42.599 "raid_level": "concat", 00:19:42.599 "superblock": true, 00:19:42.599 "num_base_bdevs": 4, 00:19:42.599 "num_base_bdevs_discovered": 3, 00:19:42.599 "num_base_bdevs_operational": 4, 00:19:42.599 "base_bdevs_list": [ 00:19:42.599 { 00:19:42.599 "name": "BaseBdev1", 00:19:42.599 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:42.599 "is_configured": true, 00:19:42.599 "data_offset": 2048, 00:19:42.599 "data_size": 63488 00:19:42.599 }, 00:19:42.599 { 00:19:42.599 "name": null, 00:19:42.599 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:42.599 "is_configured": false, 00:19:42.599 "data_offset": 2048, 00:19:42.599 "data_size": 63488 00:19:42.599 }, 00:19:42.599 { 00:19:42.599 "name": "BaseBdev3", 00:19:42.599 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:42.599 "is_configured": true, 00:19:42.599 "data_offset": 2048, 00:19:42.599 "data_size": 63488 00:19:42.599 }, 00:19:42.599 { 00:19:42.599 "name": "BaseBdev4", 00:19:42.599 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:42.599 "is_configured": true, 00:19:42.599 "data_offset": 2048, 00:19:42.599 "data_size": 63488 00:19:42.599 } 00:19:42.599 ] 00:19:42.599 }' 00:19:42.600 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.600 02:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.858 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.858 02:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:43.427 [2024-05-15 02:20:31.374596] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.427 02:20:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.427 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.688 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.688 "name": "Existed_Raid", 00:19:43.688 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:43.688 "strip_size_kb": 64, 00:19:43.688 "state": "configuring", 00:19:43.688 "raid_level": "concat", 00:19:43.688 "superblock": true, 00:19:43.688 "num_base_bdevs": 4, 00:19:43.688 "num_base_bdevs_discovered": 2, 00:19:43.688 "num_base_bdevs_operational": 4, 00:19:43.688 "base_bdevs_list": [ 00:19:43.688 { 00:19:43.688 "name": null, 00:19:43.688 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:43.688 "is_configured": false, 00:19:43.688 "data_offset": 2048, 00:19:43.688 "data_size": 63488 00:19:43.688 }, 00:19:43.688 { 00:19:43.688 "name": null, 00:19:43.688 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:43.688 "is_configured": false, 00:19:43.688 "data_offset": 2048, 00:19:43.688 "data_size": 63488 00:19:43.688 }, 00:19:43.688 { 00:19:43.688 "name": "BaseBdev3", 00:19:43.688 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:43.688 "is_configured": true, 00:19:43.688 "data_offset": 2048, 00:19:43.688 "data_size": 63488 00:19:43.688 }, 00:19:43.688 { 00:19:43.688 "name": "BaseBdev4", 00:19:43.688 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:43.688 "is_configured": true, 00:19:43.688 "data_offset": 2048, 00:19:43.688 "data_size": 63488 00:19:43.688 } 00:19:43.688 ] 00:19:43.688 }' 00:19:43.688 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.688 02:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.947 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.947 02:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:44.205 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ 
false == \f\a\l\s\e ]] 00:19:44.205 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:44.463 [2024-05-15 02:20:32.403481] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.463 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.722 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.722 "name": "Existed_Raid", 00:19:44.722 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:44.722 "strip_size_kb": 64, 00:19:44.722 "state": "configuring", 00:19:44.722 "raid_level": "concat", 00:19:44.722 "superblock": true, 00:19:44.722 "num_base_bdevs": 4, 00:19:44.722 "num_base_bdevs_discovered": 3, 00:19:44.722 "num_base_bdevs_operational": 4, 00:19:44.722 "base_bdevs_list": [ 00:19:44.722 { 00:19:44.722 "name": null, 00:19:44.722 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:44.722 "is_configured": false, 00:19:44.722 "data_offset": 2048, 00:19:44.722 "data_size": 63488 00:19:44.722 }, 00:19:44.722 { 00:19:44.722 "name": "BaseBdev2", 00:19:44.722 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:44.722 "is_configured": true, 00:19:44.722 "data_offset": 2048, 00:19:44.722 "data_size": 63488 00:19:44.722 }, 00:19:44.722 { 00:19:44.722 "name": "BaseBdev3", 00:19:44.722 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:44.722 "is_configured": true, 00:19:44.722 "data_offset": 2048, 00:19:44.722 "data_size": 63488 00:19:44.722 }, 00:19:44.722 { 00:19:44.722 "name": "BaseBdev4", 00:19:44.722 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:44.722 "is_configured": true, 00:19:44.722 "data_offset": 2048, 00:19:44.722 "data_size": 63488 00:19:44.722 } 00:19:44.722 ] 00:19:44.722 }' 00:19:44.722 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.722 02:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.981 02:20:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:44.981 02:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.240 02:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:19:45.240 02:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.240 02:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:45.499 02:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b18e1d6c-1261-11ef-99fd-bfc7c66e2865 00:19:45.757 [2024-05-15 02:20:33.643681] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:45.757 [2024-05-15 02:20:33.643732] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x829dc3f00 00:19:45.757 [2024-05-15 02:20:33.643738] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:45.757 [2024-05-15 02:20:33.643756] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e26e20 00:19:45.757 [2024-05-15 02:20:33.643792] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829dc3f00 00:19:45.757 [2024-05-15 02:20:33.643795] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x829dc3f00 00:19:45.757 [2024-05-15 02:20:33.643811] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.757 NewBaseBdev 00:19:45.757 02:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:19:45.757 02:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:19:45.757 02:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:45.757 02:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:45.757 02:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:45.757 02:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:45.757 02:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.015 02:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:46.274 [ 00:19:46.274 { 00:19:46.274 "name": "NewBaseBdev", 00:19:46.274 "aliases": [ 00:19:46.274 "b18e1d6c-1261-11ef-99fd-bfc7c66e2865" 00:19:46.274 ], 00:19:46.274 "product_name": "Malloc disk", 00:19:46.274 "block_size": 512, 00:19:46.274 "num_blocks": 65536, 00:19:46.274 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:46.274 "assigned_rate_limits": { 00:19:46.274 "rw_ios_per_sec": 0, 00:19:46.274 "rw_mbytes_per_sec": 0, 00:19:46.274 "r_mbytes_per_sec": 0, 00:19:46.274 "w_mbytes_per_sec": 0 00:19:46.274 }, 00:19:46.274 "claimed": true, 
00:19:46.274 "claim_type": "exclusive_write", 00:19:46.274 "zoned": false, 00:19:46.274 "supported_io_types": { 00:19:46.274 "read": true, 00:19:46.274 "write": true, 00:19:46.274 "unmap": true, 00:19:46.274 "write_zeroes": true, 00:19:46.274 "flush": true, 00:19:46.274 "reset": true, 00:19:46.274 "compare": false, 00:19:46.274 "compare_and_write": false, 00:19:46.274 "abort": true, 00:19:46.274 "nvme_admin": false, 00:19:46.274 "nvme_io": false 00:19:46.274 }, 00:19:46.274 "memory_domains": [ 00:19:46.274 { 00:19:46.274 "dma_device_id": "system", 00:19:46.274 "dma_device_type": 1 00:19:46.274 }, 00:19:46.274 { 00:19:46.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.274 "dma_device_type": 2 00:19:46.274 } 00:19:46.274 ], 00:19:46.274 "driver_specific": {} 00:19:46.274 } 00:19:46.274 ] 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.275 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.533 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.533 "name": "Existed_Raid", 00:19:46.533 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:46.533 "strip_size_kb": 64, 00:19:46.533 "state": "online", 00:19:46.533 "raid_level": "concat", 00:19:46.533 "superblock": true, 00:19:46.533 "num_base_bdevs": 4, 00:19:46.533 "num_base_bdevs_discovered": 4, 00:19:46.533 "num_base_bdevs_operational": 4, 00:19:46.533 "base_bdevs_list": [ 00:19:46.533 { 00:19:46.533 "name": "NewBaseBdev", 00:19:46.533 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:46.533 "is_configured": true, 00:19:46.533 "data_offset": 2048, 00:19:46.533 "data_size": 63488 00:19:46.533 }, 00:19:46.533 { 00:19:46.533 "name": "BaseBdev2", 00:19:46.533 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:46.533 "is_configured": true, 00:19:46.533 "data_offset": 2048, 00:19:46.533 "data_size": 63488 00:19:46.533 }, 00:19:46.533 { 00:19:46.533 "name": "BaseBdev3", 00:19:46.533 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:46.533 "is_configured": true, 00:19:46.533 "data_offset": 2048, 
00:19:46.533 "data_size": 63488 00:19:46.533 }, 00:19:46.533 { 00:19:46.533 "name": "BaseBdev4", 00:19:46.533 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:46.533 "is_configured": true, 00:19:46.533 "data_offset": 2048, 00:19:46.533 "data_size": 63488 00:19:46.533 } 00:19:46.533 ] 00:19:46.533 }' 00:19:46.533 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.533 02:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.115 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:19:47.115 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:47.115 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:47.115 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:47.115 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:47.115 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:19:47.115 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:47.116 02:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:47.116 [2024-05-15 02:20:35.111692] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.116 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:47.116 "name": "Existed_Raid", 00:19:47.116 "aliases": [ 00:19:47.116 "b05d882d-1261-11ef-99fd-bfc7c66e2865" 00:19:47.116 ], 00:19:47.116 "product_name": "Raid Volume", 00:19:47.116 "block_size": 512, 00:19:47.116 "num_blocks": 253952, 00:19:47.116 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:47.116 "assigned_rate_limits": { 00:19:47.116 "rw_ios_per_sec": 0, 00:19:47.116 "rw_mbytes_per_sec": 0, 00:19:47.116 "r_mbytes_per_sec": 0, 00:19:47.116 "w_mbytes_per_sec": 0 00:19:47.116 }, 00:19:47.116 "claimed": false, 00:19:47.116 "zoned": false, 00:19:47.116 "supported_io_types": { 00:19:47.116 "read": true, 00:19:47.116 "write": true, 00:19:47.116 "unmap": true, 00:19:47.116 "write_zeroes": true, 00:19:47.116 "flush": true, 00:19:47.116 "reset": true, 00:19:47.116 "compare": false, 00:19:47.116 "compare_and_write": false, 00:19:47.116 "abort": false, 00:19:47.116 "nvme_admin": false, 00:19:47.116 "nvme_io": false 00:19:47.116 }, 00:19:47.116 "memory_domains": [ 00:19:47.116 { 00:19:47.116 "dma_device_id": "system", 00:19:47.116 "dma_device_type": 1 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.116 "dma_device_type": 2 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "dma_device_id": "system", 00:19:47.116 "dma_device_type": 1 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.116 "dma_device_type": 2 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "dma_device_id": "system", 00:19:47.116 "dma_device_type": 1 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.116 "dma_device_type": 2 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "dma_device_id": "system", 00:19:47.116 "dma_device_type": 1 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:47.116 "dma_device_type": 2 00:19:47.116 } 00:19:47.116 ], 00:19:47.116 "driver_specific": { 00:19:47.116 "raid": { 00:19:47.116 "uuid": "b05d882d-1261-11ef-99fd-bfc7c66e2865", 00:19:47.116 "strip_size_kb": 64, 00:19:47.116 "state": "online", 00:19:47.116 "raid_level": "concat", 00:19:47.116 "superblock": true, 00:19:47.116 "num_base_bdevs": 4, 00:19:47.116 "num_base_bdevs_discovered": 4, 00:19:47.116 "num_base_bdevs_operational": 4, 00:19:47.116 "base_bdevs_list": [ 00:19:47.116 { 00:19:47.116 "name": "NewBaseBdev", 00:19:47.116 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:47.116 "is_configured": true, 00:19:47.116 "data_offset": 2048, 00:19:47.116 "data_size": 63488 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "name": "BaseBdev2", 00:19:47.116 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:47.116 "is_configured": true, 00:19:47.116 "data_offset": 2048, 00:19:47.116 "data_size": 63488 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "name": "BaseBdev3", 00:19:47.116 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:47.116 "is_configured": true, 00:19:47.116 "data_offset": 2048, 00:19:47.116 "data_size": 63488 00:19:47.116 }, 00:19:47.116 { 00:19:47.116 "name": "BaseBdev4", 00:19:47.116 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:47.116 "is_configured": true, 00:19:47.116 "data_offset": 2048, 00:19:47.116 "data_size": 63488 00:19:47.116 } 00:19:47.116 ] 00:19:47.116 } 00:19:47.116 } 00:19:47.116 }' 00:19:47.374 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.374 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:19:47.374 BaseBdev2 00:19:47.374 BaseBdev3 00:19:47.374 BaseBdev4' 00:19:47.374 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:47.374 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:47.374 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:47.632 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:47.632 "name": "NewBaseBdev", 00:19:47.632 "aliases": [ 00:19:47.632 "b18e1d6c-1261-11ef-99fd-bfc7c66e2865" 00:19:47.632 ], 00:19:47.632 "product_name": "Malloc disk", 00:19:47.632 "block_size": 512, 00:19:47.632 "num_blocks": 65536, 00:19:47.632 "uuid": "b18e1d6c-1261-11ef-99fd-bfc7c66e2865", 00:19:47.632 "assigned_rate_limits": { 00:19:47.632 "rw_ios_per_sec": 0, 00:19:47.632 "rw_mbytes_per_sec": 0, 00:19:47.632 "r_mbytes_per_sec": 0, 00:19:47.632 "w_mbytes_per_sec": 0 00:19:47.632 }, 00:19:47.632 "claimed": true, 00:19:47.633 "claim_type": "exclusive_write", 00:19:47.633 "zoned": false, 00:19:47.633 "supported_io_types": { 00:19:47.633 "read": true, 00:19:47.633 "write": true, 00:19:47.633 "unmap": true, 00:19:47.633 "write_zeroes": true, 00:19:47.633 "flush": true, 00:19:47.633 "reset": true, 00:19:47.633 "compare": false, 00:19:47.633 "compare_and_write": false, 00:19:47.633 "abort": true, 00:19:47.633 "nvme_admin": false, 00:19:47.633 "nvme_io": false 00:19:47.633 }, 00:19:47.633 "memory_domains": [ 00:19:47.633 { 00:19:47.633 "dma_device_id": "system", 00:19:47.633 "dma_device_type": 1 00:19:47.633 }, 00:19:47.633 { 00:19:47.633 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:47.633 "dma_device_type": 2 00:19:47.633 } 00:19:47.633 ], 00:19:47.633 "driver_specific": {} 00:19:47.633 }' 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:47.633 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:47.891 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:47.891 "name": "BaseBdev2", 00:19:47.891 "aliases": [ 00:19:47.891 "aef259e3-1261-11ef-99fd-bfc7c66e2865" 00:19:47.891 ], 00:19:47.891 "product_name": "Malloc disk", 00:19:47.891 "block_size": 512, 00:19:47.891 "num_blocks": 65536, 00:19:47.891 "uuid": "aef259e3-1261-11ef-99fd-bfc7c66e2865", 00:19:47.891 "assigned_rate_limits": { 00:19:47.891 "rw_ios_per_sec": 0, 00:19:47.891 "rw_mbytes_per_sec": 0, 00:19:47.891 "r_mbytes_per_sec": 0, 00:19:47.891 "w_mbytes_per_sec": 0 00:19:47.891 }, 00:19:47.891 "claimed": true, 00:19:47.891 "claim_type": "exclusive_write", 00:19:47.891 "zoned": false, 00:19:47.891 "supported_io_types": { 00:19:47.891 "read": true, 00:19:47.891 "write": true, 00:19:47.891 "unmap": true, 00:19:47.891 "write_zeroes": true, 00:19:47.891 "flush": true, 00:19:47.891 "reset": true, 00:19:47.891 "compare": false, 00:19:47.891 "compare_and_write": false, 00:19:47.891 "abort": true, 00:19:47.891 "nvme_admin": false, 00:19:47.891 "nvme_io": false 00:19:47.891 }, 00:19:47.891 "memory_domains": [ 00:19:47.891 { 00:19:47.891 "dma_device_id": "system", 00:19:47.891 "dma_device_type": 1 00:19:47.891 }, 00:19:47.891 { 00:19:47.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.891 "dma_device_type": 2 00:19:47.891 } 00:19:47.891 ], 00:19:47.891 "driver_specific": {} 00:19:47.891 }' 00:19:47.891 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:47.891 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:47.891 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 
== 512 ]] 00:19:47.891 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:47.891 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:47.892 02:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:48.461 "name": "BaseBdev3", 00:19:48.461 "aliases": [ 00:19:48.461 "af5d2b39-1261-11ef-99fd-bfc7c66e2865" 00:19:48.461 ], 00:19:48.461 "product_name": "Malloc disk", 00:19:48.461 "block_size": 512, 00:19:48.461 "num_blocks": 65536, 00:19:48.461 "uuid": "af5d2b39-1261-11ef-99fd-bfc7c66e2865", 00:19:48.461 "assigned_rate_limits": { 00:19:48.461 "rw_ios_per_sec": 0, 00:19:48.461 "rw_mbytes_per_sec": 0, 00:19:48.461 "r_mbytes_per_sec": 0, 00:19:48.461 "w_mbytes_per_sec": 0 00:19:48.461 }, 00:19:48.461 "claimed": true, 00:19:48.461 "claim_type": "exclusive_write", 00:19:48.461 "zoned": false, 00:19:48.461 "supported_io_types": { 00:19:48.461 "read": true, 00:19:48.461 "write": true, 00:19:48.461 "unmap": true, 00:19:48.461 "write_zeroes": true, 00:19:48.461 "flush": true, 00:19:48.461 "reset": true, 00:19:48.461 "compare": false, 00:19:48.461 "compare_and_write": false, 00:19:48.461 "abort": true, 00:19:48.461 "nvme_admin": false, 00:19:48.461 "nvme_io": false 00:19:48.461 }, 00:19:48.461 "memory_domains": [ 00:19:48.461 { 00:19:48.461 "dma_device_id": "system", 00:19:48.461 "dma_device_type": 1 00:19:48.461 }, 00:19:48.461 { 00:19:48.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.461 "dma_device_type": 2 00:19:48.461 } 00:19:48.461 ], 00:19:48.461 "driver_specific": {} 00:19:48.461 }' 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:48.461 
02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:48.461 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:48.720 "name": "BaseBdev4", 00:19:48.720 "aliases": [ 00:19:48.720 "aff17e31-1261-11ef-99fd-bfc7c66e2865" 00:19:48.720 ], 00:19:48.720 "product_name": "Malloc disk", 00:19:48.720 "block_size": 512, 00:19:48.720 "num_blocks": 65536, 00:19:48.720 "uuid": "aff17e31-1261-11ef-99fd-bfc7c66e2865", 00:19:48.720 "assigned_rate_limits": { 00:19:48.720 "rw_ios_per_sec": 0, 00:19:48.720 "rw_mbytes_per_sec": 0, 00:19:48.720 "r_mbytes_per_sec": 0, 00:19:48.720 "w_mbytes_per_sec": 0 00:19:48.720 }, 00:19:48.720 "claimed": true, 00:19:48.720 "claim_type": "exclusive_write", 00:19:48.720 "zoned": false, 00:19:48.720 "supported_io_types": { 00:19:48.720 "read": true, 00:19:48.720 "write": true, 00:19:48.720 "unmap": true, 00:19:48.720 "write_zeroes": true, 00:19:48.720 "flush": true, 00:19:48.720 "reset": true, 00:19:48.720 "compare": false, 00:19:48.720 "compare_and_write": false, 00:19:48.720 "abort": true, 00:19:48.720 "nvme_admin": false, 00:19:48.720 "nvme_io": false 00:19:48.720 }, 00:19:48.720 "memory_domains": [ 00:19:48.720 { 00:19:48.720 "dma_device_id": "system", 00:19:48.720 "dma_device_type": 1 00:19:48.720 }, 00:19:48.720 { 00:19:48.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.720 "dma_device_type": 2 00:19:48.720 } 00:19:48.720 ], 00:19:48.720 "driver_specific": {} 00:19:48.720 }' 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:48.720 02:20:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:48.720 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:48.978 [2024-05-15 02:20:36.920459] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:48.978 [2024-05-15 02:20:36.920492] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.978 [2024-05-15 02:20:36.920516] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.978 [2024-05-15 02:20:36.920531] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.979 [2024-05-15 02:20:36.920536] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829dc3f00 name Existed_Raid, state offline 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 60113 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 60113 ']' 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 60113 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 60113 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:48.979 killing process with pid 60113 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60113' 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 60113 00:19:48.979 [2024-05-15 02:20:36.950842] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:48.979 02:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 60113 00:19:48.979 [2024-05-15 02:20:36.970252] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:49.236 02:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:19:49.236 00:19:49.237 real 0m27.902s 00:19:49.237 user 0m51.151s 00:19:49.237 sys 0m3.830s 00:19:49.237 02:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:49.237 02:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.237 ************************************ 00:19:49.237 END TEST raid_state_function_test_sb 00:19:49.237 ************************************ 00:19:49.237 02:20:37 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:49.237 02:20:37 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:49.237 02:20:37 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:49.237 02:20:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:49.237 
************************************ 00:19:49.237 START TEST raid_superblock_test 00:19:49.237 ************************************ 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60935 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60935 /var/tmp/spdk-raid.sock 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 60935 ']' 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:49.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:49.237 02:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.237 [2024-05-15 02:20:37.171632] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
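The raid_superblock_test steps above launch a standalone bdev_svc process on a dedicated RPC socket and wait for it before issuing any bdev_raid RPCs. A minimal shell sketch of that launch pattern, assuming the repository path shown in the trace and the waitforlisten/killprocess helpers that the trace attributes to test/common/autotest_common.sh; this is an illustration of the captured commands, not part of the log:

  rootdir=/usr/home/vagrant/spdk_repo/spdk            # path as it appears in the trace
  source "$rootdir/test/common/autotest_common.sh"     # provides waitforlisten and killprocess

  # start the bdev service with bdev_raid debug logging on a dedicated RPC socket
  "$rootdir/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # block until the socket accepts RPCs
  # ... issue bdev_raid RPCs against the socket ...
  killprocess "$raid_pid"                              # tear down once the test completes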
00:19:49.237 [2024-05-15 02:20:37.171895] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:49.803 EAL: TSC is not safe to use in SMP mode 00:19:49.803 EAL: TSC is not invariant 00:19:49.803 [2024-05-15 02:20:37.655347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.803 [2024-05-15 02:20:37.739506] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:49.803 [2024-05-15 02:20:37.741792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.803 [2024-05-15 02:20:37.742608] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:49.803 [2024-05-15 02:20:37.742627] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:50.736 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:50.994 malloc1 00:19:50.994 02:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:51.251 [2024-05-15 02:20:39.065960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:51.251 [2024-05-15 02:20:39.066030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.251 [2024-05-15 02:20:39.066639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaed780 00:19:51.251 [2024-05-15 02:20:39.066672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.251 [2024-05-15 02:20:39.067594] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.251 [2024-05-15 02:20:39.067656] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:51.251 pt1 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
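The malloc1/pt1 steps above show the per-base-bdev setup that the test repeats for pt1 through pt4: a small malloc disk is created and then wrapped in a passthru bdev that is assigned a fixed UUID and claims the underlying malloc bdev. A condensed shell sketch of that pattern, using the rpc.py invocation, sizes, and UUIDs visible in the trace (the loop is an illustration, not the literal test code):

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  for i in 1 2 3 4; do
      # 32 MB malloc disk with 512-byte blocks, as in "bdev_malloc_create 32 512 -b malloc1" above
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
      # passthru bdev layered on top, claiming the malloc bdev and pinning its UUID
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done

  # the test later assembles these into a concat RAID with an on-disk superblock:
  #   rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s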
00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:51.251 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:51.507 malloc2 00:19:51.507 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:51.765 [2024-05-15 02:20:39.537983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:51.765 [2024-05-15 02:20:39.538078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.765 [2024-05-15 02:20:39.538116] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaedc80 00:19:51.765 [2024-05-15 02:20:39.538132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.765 [2024-05-15 02:20:39.538707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.765 [2024-05-15 02:20:39.538756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:51.765 pt2 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:51.765 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:52.022 malloc3 00:19:52.022 02:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:52.284 [2024-05-15 02:20:40.106027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:52.284 [2024-05-15 02:20:40.106100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.284 [2024-05-15 02:20:40.106128] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaee180 00:19:52.284 [2024-05-15 02:20:40.106137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.284 [2024-05-15 02:20:40.106702] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.284 [2024-05-15 02:20:40.106732] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:52.284 pt3 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:52.284 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:52.542 malloc4 00:19:52.542 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:52.801 [2024-05-15 02:20:40.630052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:52.801 [2024-05-15 02:20:40.630120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.801 [2024-05-15 02:20:40.630150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaee680 00:19:52.801 [2024-05-15 02:20:40.630159] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.801 [2024-05-15 02:20:40.630683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.801 [2024-05-15 02:20:40.630713] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:52.801 pt4 00:19:52.801 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:52.801 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:52.801 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:53.059 [2024-05-15 02:20:40.970085] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:53.059 [2024-05-15 02:20:40.970556] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:53.059 [2024-05-15 02:20:40.970572] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:53.059 [2024-05-15 02:20:40.970582] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:53.059 [2024-05-15 02:20:40.970638] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aaee900 00:19:53.059 [2024-05-15 02:20:40.970644] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:53.059 [2024-05-15 02:20:40.970677] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82ab50e20 00:19:53.059 [2024-05-15 02:20:40.970737] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aaee900 00:19:53.059 [2024-05-15 02:20:40.970741] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aaee900 00:19:53.059 [2024-05-15 02:20:40.970765] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.059 02:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.317 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.317 "name": "raid_bdev1", 00:19:53.317 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:19:53.317 "strip_size_kb": 64, 00:19:53.317 "state": "online", 00:19:53.317 "raid_level": "concat", 00:19:53.317 "superblock": true, 00:19:53.317 "num_base_bdevs": 4, 00:19:53.317 "num_base_bdevs_discovered": 4, 00:19:53.317 "num_base_bdevs_operational": 4, 00:19:53.317 "base_bdevs_list": [ 00:19:53.317 { 00:19:53.317 "name": "pt1", 00:19:53.317 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:19:53.317 "is_configured": true, 00:19:53.317 "data_offset": 2048, 00:19:53.317 "data_size": 63488 00:19:53.317 }, 00:19:53.317 { 00:19:53.317 "name": "pt2", 00:19:53.317 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:19:53.317 "is_configured": true, 00:19:53.317 "data_offset": 2048, 00:19:53.317 "data_size": 63488 00:19:53.317 }, 00:19:53.317 { 00:19:53.317 "name": "pt3", 00:19:53.317 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:19:53.317 "is_configured": true, 00:19:53.317 "data_offset": 2048, 00:19:53.317 "data_size": 63488 00:19:53.317 }, 00:19:53.317 { 00:19:53.317 "name": "pt4", 00:19:53.317 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:19:53.317 "is_configured": true, 00:19:53.317 "data_offset": 2048, 00:19:53.317 "data_size": 63488 00:19:53.317 } 00:19:53.317 ] 00:19:53.317 }' 00:19:53.317 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.317 02:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:53.576 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:53.833 [2024-05-15 02:20:41.830179] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.091 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:54.091 "name": "raid_bdev1", 00:19:54.091 "aliases": [ 00:19:54.091 "b9c6917a-1261-11ef-99fd-bfc7c66e2865" 00:19:54.091 ], 00:19:54.091 "product_name": "Raid Volume", 00:19:54.091 "block_size": 512, 00:19:54.091 "num_blocks": 253952, 00:19:54.091 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:19:54.091 "assigned_rate_limits": { 00:19:54.091 "rw_ios_per_sec": 0, 00:19:54.091 "rw_mbytes_per_sec": 0, 00:19:54.091 "r_mbytes_per_sec": 0, 00:19:54.091 "w_mbytes_per_sec": 0 00:19:54.091 }, 00:19:54.091 "claimed": false, 00:19:54.091 "zoned": false, 00:19:54.091 "supported_io_types": { 00:19:54.091 "read": true, 00:19:54.091 "write": true, 00:19:54.091 "unmap": true, 00:19:54.091 "write_zeroes": true, 00:19:54.091 "flush": true, 00:19:54.091 "reset": true, 00:19:54.091 "compare": false, 00:19:54.091 "compare_and_write": false, 00:19:54.091 "abort": false, 00:19:54.091 "nvme_admin": false, 00:19:54.091 "nvme_io": false 00:19:54.091 }, 00:19:54.091 "memory_domains": [ 00:19:54.091 { 00:19:54.092 "dma_device_id": "system", 00:19:54.092 "dma_device_type": 1 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.092 "dma_device_type": 2 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "dma_device_id": "system", 00:19:54.092 "dma_device_type": 1 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.092 "dma_device_type": 2 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "dma_device_id": "system", 00:19:54.092 "dma_device_type": 1 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.092 "dma_device_type": 2 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "dma_device_id": "system", 00:19:54.092 "dma_device_type": 1 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.092 "dma_device_type": 2 00:19:54.092 } 00:19:54.092 ], 00:19:54.092 "driver_specific": { 00:19:54.092 "raid": { 00:19:54.092 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:19:54.092 "strip_size_kb": 64, 00:19:54.092 "state": "online", 00:19:54.092 "raid_level": "concat", 00:19:54.092 "superblock": true, 00:19:54.092 "num_base_bdevs": 4, 00:19:54.092 "num_base_bdevs_discovered": 4, 00:19:54.092 "num_base_bdevs_operational": 4, 00:19:54.092 "base_bdevs_list": [ 00:19:54.092 { 00:19:54.092 "name": "pt1", 00:19:54.092 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:19:54.092 "is_configured": true, 00:19:54.092 "data_offset": 2048, 
00:19:54.092 "data_size": 63488 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "name": "pt2", 00:19:54.092 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:19:54.092 "is_configured": true, 00:19:54.092 "data_offset": 2048, 00:19:54.092 "data_size": 63488 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "name": "pt3", 00:19:54.092 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:19:54.092 "is_configured": true, 00:19:54.092 "data_offset": 2048, 00:19:54.092 "data_size": 63488 00:19:54.092 }, 00:19:54.092 { 00:19:54.092 "name": "pt4", 00:19:54.092 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:19:54.092 "is_configured": true, 00:19:54.092 "data_offset": 2048, 00:19:54.092 "data_size": 63488 00:19:54.092 } 00:19:54.092 ] 00:19:54.092 } 00:19:54.092 } 00:19:54.092 }' 00:19:54.092 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:54.092 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:54.092 pt2 00:19:54.092 pt3 00:19:54.092 pt4' 00:19:54.092 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:54.092 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:54.092 02:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:54.350 "name": "pt1", 00:19:54.350 "aliases": [ 00:19:54.350 "f99fca5f-8c74-d050-bb3c-6b810ff63bd3" 00:19:54.350 ], 00:19:54.350 "product_name": "passthru", 00:19:54.350 "block_size": 512, 00:19:54.350 "num_blocks": 65536, 00:19:54.350 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:19:54.350 "assigned_rate_limits": { 00:19:54.350 "rw_ios_per_sec": 0, 00:19:54.350 "rw_mbytes_per_sec": 0, 00:19:54.350 "r_mbytes_per_sec": 0, 00:19:54.350 "w_mbytes_per_sec": 0 00:19:54.350 }, 00:19:54.350 "claimed": true, 00:19:54.350 "claim_type": "exclusive_write", 00:19:54.350 "zoned": false, 00:19:54.350 "supported_io_types": { 00:19:54.350 "read": true, 00:19:54.350 "write": true, 00:19:54.350 "unmap": true, 00:19:54.350 "write_zeroes": true, 00:19:54.350 "flush": true, 00:19:54.350 "reset": true, 00:19:54.350 "compare": false, 00:19:54.350 "compare_and_write": false, 00:19:54.350 "abort": true, 00:19:54.350 "nvme_admin": false, 00:19:54.350 "nvme_io": false 00:19:54.350 }, 00:19:54.350 "memory_domains": [ 00:19:54.350 { 00:19:54.350 "dma_device_id": "system", 00:19:54.350 "dma_device_type": 1 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.350 "dma_device_type": 2 00:19:54.350 } 00:19:54.350 ], 00:19:54.350 "driver_specific": { 00:19:54.350 "passthru": { 00:19:54.350 "name": "pt1", 00:19:54.350 "base_bdev_name": "malloc1" 00:19:54.350 } 00:19:54.350 } 00:19:54.350 }' 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.350 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.351 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:54.351 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:54.351 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:54.351 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:54.610 "name": "pt2", 00:19:54.610 "aliases": [ 00:19:54.610 "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0" 00:19:54.610 ], 00:19:54.610 "product_name": "passthru", 00:19:54.610 "block_size": 512, 00:19:54.610 "num_blocks": 65536, 00:19:54.610 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:19:54.610 "assigned_rate_limits": { 00:19:54.610 "rw_ios_per_sec": 0, 00:19:54.610 "rw_mbytes_per_sec": 0, 00:19:54.610 "r_mbytes_per_sec": 0, 00:19:54.610 "w_mbytes_per_sec": 0 00:19:54.610 }, 00:19:54.610 "claimed": true, 00:19:54.610 "claim_type": "exclusive_write", 00:19:54.610 "zoned": false, 00:19:54.610 "supported_io_types": { 00:19:54.610 "read": true, 00:19:54.610 "write": true, 00:19:54.610 "unmap": true, 00:19:54.610 "write_zeroes": true, 00:19:54.610 "flush": true, 00:19:54.610 "reset": true, 00:19:54.610 "compare": false, 00:19:54.610 "compare_and_write": false, 00:19:54.610 "abort": true, 00:19:54.610 "nvme_admin": false, 00:19:54.610 "nvme_io": false 00:19:54.610 }, 00:19:54.610 "memory_domains": [ 00:19:54.610 { 00:19:54.610 "dma_device_id": "system", 00:19:54.610 "dma_device_type": 1 00:19:54.610 }, 00:19:54.610 { 00:19:54.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.610 "dma_device_type": 2 00:19:54.610 } 00:19:54.610 ], 00:19:54.610 "driver_specific": { 00:19:54.610 "passthru": { 00:19:54.610 "name": "pt2", 00:19:54.610 "base_bdev_name": "malloc2" 00:19:54.610 } 00:19:54.610 } 00:19:54.610 }' 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 
00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:54.610 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:54.868 "name": "pt3", 00:19:54.868 "aliases": [ 00:19:54.868 "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96" 00:19:54.868 ], 00:19:54.868 "product_name": "passthru", 00:19:54.868 "block_size": 512, 00:19:54.868 "num_blocks": 65536, 00:19:54.868 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:19:54.868 "assigned_rate_limits": { 00:19:54.868 "rw_ios_per_sec": 0, 00:19:54.868 "rw_mbytes_per_sec": 0, 00:19:54.868 "r_mbytes_per_sec": 0, 00:19:54.868 "w_mbytes_per_sec": 0 00:19:54.868 }, 00:19:54.868 "claimed": true, 00:19:54.868 "claim_type": "exclusive_write", 00:19:54.868 "zoned": false, 00:19:54.868 "supported_io_types": { 00:19:54.868 "read": true, 00:19:54.868 "write": true, 00:19:54.868 "unmap": true, 00:19:54.868 "write_zeroes": true, 00:19:54.868 "flush": true, 00:19:54.868 "reset": true, 00:19:54.868 "compare": false, 00:19:54.868 "compare_and_write": false, 00:19:54.868 "abort": true, 00:19:54.868 "nvme_admin": false, 00:19:54.868 "nvme_io": false 00:19:54.868 }, 00:19:54.868 "memory_domains": [ 00:19:54.868 { 00:19:54.868 "dma_device_id": "system", 00:19:54.868 "dma_device_type": 1 00:19:54.868 }, 00:19:54.868 { 00:19:54.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.868 "dma_device_type": 2 00:19:54.868 } 00:19:54.868 ], 00:19:54.868 "driver_specific": { 00:19:54.868 "passthru": { 00:19:54.868 "name": "pt3", 00:19:54.868 "base_bdev_name": "malloc3" 00:19:54.868 } 00:19:54.868 } 00:19:54.868 }' 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:54.868 02:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:55.148 "name": "pt4", 00:19:55.148 "aliases": [ 00:19:55.148 "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3" 00:19:55.148 ], 00:19:55.148 "product_name": "passthru", 00:19:55.148 "block_size": 512, 00:19:55.148 "num_blocks": 65536, 00:19:55.148 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:19:55.148 "assigned_rate_limits": { 00:19:55.148 "rw_ios_per_sec": 0, 00:19:55.148 "rw_mbytes_per_sec": 0, 00:19:55.148 "r_mbytes_per_sec": 0, 00:19:55.148 "w_mbytes_per_sec": 0 00:19:55.148 }, 00:19:55.148 "claimed": true, 00:19:55.148 "claim_type": "exclusive_write", 00:19:55.148 "zoned": false, 00:19:55.148 "supported_io_types": { 00:19:55.148 "read": true, 00:19:55.148 "write": true, 00:19:55.148 "unmap": true, 00:19:55.148 "write_zeroes": true, 00:19:55.148 "flush": true, 00:19:55.148 "reset": true, 00:19:55.148 "compare": false, 00:19:55.148 "compare_and_write": false, 00:19:55.148 "abort": true, 00:19:55.148 "nvme_admin": false, 00:19:55.148 "nvme_io": false 00:19:55.148 }, 00:19:55.148 "memory_domains": [ 00:19:55.148 { 00:19:55.148 "dma_device_id": "system", 00:19:55.148 "dma_device_type": 1 00:19:55.148 }, 00:19:55.148 { 00:19:55.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.148 "dma_device_type": 2 00:19:55.148 } 00:19:55.148 ], 00:19:55.148 "driver_specific": { 00:19:55.148 "passthru": { 00:19:55.148 "name": "pt4", 00:19:55.148 "base_bdev_name": "malloc4" 00:19:55.148 } 00:19:55.148 } 00:19:55.148 }' 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:55.148 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:55.406 [2024-05-15 02:20:43.326265] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.406 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b9c6917a-1261-11ef-99fd-bfc7c66e2865 00:19:55.406 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b9c6917a-1261-11ef-99fd-bfc7c66e2865 ']' 00:19:55.406 
02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:55.663 [2024-05-15 02:20:43.566226] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.663 [2024-05-15 02:20:43.566257] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.663 [2024-05-15 02:20:43.566279] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.663 [2024-05-15 02:20:43.566295] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.663 [2024-05-15 02:20:43.566300] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aaee900 name raid_bdev1, state offline 00:19:55.663 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:55.663 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.920 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:55.920 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:55.920 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.920 02:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:56.179 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.179 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:56.439 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.439 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:56.697 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.697 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:56.957 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:56.957 02:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:57.216 02:20:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:57.216 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:57.475 [2024-05-15 02:20:45.378350] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:57.475 [2024-05-15 02:20:45.378819] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:57.475 [2024-05-15 02:20:45.378831] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:57.475 [2024-05-15 02:20:45.378839] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:57.475 [2024-05-15 02:20:45.378851] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:57.475 [2024-05-15 02:20:45.378893] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:57.475 [2024-05-15 02:20:45.378904] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:57.475 [2024-05-15 02:20:45.378913] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:57.475 [2024-05-15 02:20:45.378921] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.476 [2024-05-15 02:20:45.378925] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aaee680 name raid_bdev1, state configuring 00:19:57.476 request: 00:19:57.476 { 00:19:57.476 "name": "raid_bdev1", 00:19:57.476 "raid_level": "concat", 00:19:57.476 "base_bdevs": [ 00:19:57.476 "malloc1", 00:19:57.476 "malloc2", 00:19:57.476 "malloc3", 00:19:57.476 "malloc4" 00:19:57.476 ], 00:19:57.476 "superblock": false, 00:19:57.476 "strip_size_kb": 64, 00:19:57.476 "method": "bdev_raid_create", 00:19:57.476 "req_id": 1 00:19:57.476 } 00:19:57.476 Got JSON-RPC error response 00:19:57.476 response: 00:19:57.476 { 00:19:57.476 "code": -17, 00:19:57.476 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:57.476 } 00:19:57.476 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:57.476 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 
00:19:57.476 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:57.476 02:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:57.476 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.476 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:57.734 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:57.734 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:57.734 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:57.993 [2024-05-15 02:20:45.862385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:57.993 [2024-05-15 02:20:45.862451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.993 [2024-05-15 02:20:45.862480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaee180 00:19:57.993 [2024-05-15 02:20:45.862497] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.993 [2024-05-15 02:20:45.863031] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.993 [2024-05-15 02:20:45.863057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:57.993 [2024-05-15 02:20:45.863079] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:57.993 [2024-05-15 02:20:45.863090] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:57.993 pt1 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.993 02:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.253 02:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.253 "name": "raid_bdev1", 00:19:58.253 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:19:58.253 "strip_size_kb": 64, 
00:19:58.253 "state": "configuring", 00:19:58.253 "raid_level": "concat", 00:19:58.253 "superblock": true, 00:19:58.253 "num_base_bdevs": 4, 00:19:58.253 "num_base_bdevs_discovered": 1, 00:19:58.253 "num_base_bdevs_operational": 4, 00:19:58.253 "base_bdevs_list": [ 00:19:58.253 { 00:19:58.253 "name": "pt1", 00:19:58.253 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:19:58.253 "is_configured": true, 00:19:58.253 "data_offset": 2048, 00:19:58.253 "data_size": 63488 00:19:58.253 }, 00:19:58.253 { 00:19:58.253 "name": null, 00:19:58.253 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:19:58.253 "is_configured": false, 00:19:58.253 "data_offset": 2048, 00:19:58.253 "data_size": 63488 00:19:58.253 }, 00:19:58.253 { 00:19:58.253 "name": null, 00:19:58.253 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:19:58.253 "is_configured": false, 00:19:58.253 "data_offset": 2048, 00:19:58.253 "data_size": 63488 00:19:58.253 }, 00:19:58.253 { 00:19:58.253 "name": null, 00:19:58.253 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:19:58.253 "is_configured": false, 00:19:58.253 "data_offset": 2048, 00:19:58.253 "data_size": 63488 00:19:58.253 } 00:19:58.253 ] 00:19:58.253 }' 00:19:58.253 02:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.253 02:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.513 02:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:58.513 02:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.773 [2024-05-15 02:20:46.742456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.773 [2024-05-15 02:20:46.742525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.773 [2024-05-15 02:20:46.742570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaed780 00:19:58.773 [2024-05-15 02:20:46.742579] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.773 [2024-05-15 02:20:46.742679] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.773 [2024-05-15 02:20:46.742688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.773 [2024-05-15 02:20:46.742711] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:58.773 [2024-05-15 02:20:46.742719] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.773 pt2 00:19:58.773 02:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:59.340 [2024-05-15 02:20:47.102490] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.340 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.599 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.599 "name": "raid_bdev1", 00:19:59.599 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:19:59.599 "strip_size_kb": 64, 00:19:59.599 "state": "configuring", 00:19:59.599 "raid_level": "concat", 00:19:59.599 "superblock": true, 00:19:59.599 "num_base_bdevs": 4, 00:19:59.599 "num_base_bdevs_discovered": 1, 00:19:59.599 "num_base_bdevs_operational": 4, 00:19:59.599 "base_bdevs_list": [ 00:19:59.599 { 00:19:59.599 "name": "pt1", 00:19:59.599 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:19:59.599 "is_configured": true, 00:19:59.599 "data_offset": 2048, 00:19:59.599 "data_size": 63488 00:19:59.599 }, 00:19:59.599 { 00:19:59.599 "name": null, 00:19:59.599 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:19:59.599 "is_configured": false, 00:19:59.599 "data_offset": 2048, 00:19:59.599 "data_size": 63488 00:19:59.599 }, 00:19:59.599 { 00:19:59.599 "name": null, 00:19:59.599 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:19:59.599 "is_configured": false, 00:19:59.599 "data_offset": 2048, 00:19:59.599 "data_size": 63488 00:19:59.599 }, 00:19:59.599 { 00:19:59.599 "name": null, 00:19:59.599 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:19:59.599 "is_configured": false, 00:19:59.599 "data_offset": 2048, 00:19:59.599 "data_size": 63488 00:19:59.599 } 00:19:59.599 ] 00:19:59.599 }' 00:19:59.599 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.599 02:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.969 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:59.969 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:59.969 02:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.230 [2024-05-15 02:20:48.042544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.230 [2024-05-15 02:20:48.042619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.230 [2024-05-15 02:20:48.042651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaed780 00:20:00.230 [2024-05-15 02:20:48.042659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.230 [2024-05-15 02:20:48.042763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.230 [2024-05-15 02:20:48.042773] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:00.230 [2024-05-15 02:20:48.042795] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:00.230 [2024-05-15 02:20:48.042803] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.230 pt2 00:20:00.230 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:00.230 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:00.230 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:00.489 [2024-05-15 02:20:48.382562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:00.489 [2024-05-15 02:20:48.382635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.489 [2024-05-15 02:20:48.382664] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaeeb80 00:20:00.489 [2024-05-15 02:20:48.382672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.489 [2024-05-15 02:20:48.382775] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.490 [2024-05-15 02:20:48.382784] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:00.490 [2024-05-15 02:20:48.382805] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:00.490 [2024-05-15 02:20:48.382813] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:00.490 pt3 00:20:00.490 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:00.490 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:00.490 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:00.749 [2024-05-15 02:20:48.642572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:00.749 [2024-05-15 02:20:48.642660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.749 [2024-05-15 02:20:48.642688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aaee900 00:20:00.749 [2024-05-15 02:20:48.642697] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.749 [2024-05-15 02:20:48.642800] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.749 [2024-05-15 02:20:48.642810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:00.749 [2024-05-15 02:20:48.642831] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:00.749 [2024-05-15 02:20:48.642839] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:00.749 [2024-05-15 02:20:48.642867] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aaedc80 00:20:00.749 [2024-05-15 02:20:48.642871] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:00.749 [2024-05-15 02:20:48.642893] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ab50e20 00:20:00.749 
[2024-05-15 02:20:48.642937] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aaedc80 00:20:00.749 [2024-05-15 02:20:48.642941] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aaedc80 00:20:00.749 [2024-05-15 02:20:48.642958] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.749 pt4 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.749 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.009 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.009 "name": "raid_bdev1", 00:20:01.009 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:20:01.009 "strip_size_kb": 64, 00:20:01.009 "state": "online", 00:20:01.009 "raid_level": "concat", 00:20:01.009 "superblock": true, 00:20:01.009 "num_base_bdevs": 4, 00:20:01.009 "num_base_bdevs_discovered": 4, 00:20:01.009 "num_base_bdevs_operational": 4, 00:20:01.009 "base_bdevs_list": [ 00:20:01.009 { 00:20:01.009 "name": "pt1", 00:20:01.009 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:20:01.009 "is_configured": true, 00:20:01.010 "data_offset": 2048, 00:20:01.010 "data_size": 63488 00:20:01.010 }, 00:20:01.010 { 00:20:01.010 "name": "pt2", 00:20:01.010 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:20:01.010 "is_configured": true, 00:20:01.010 "data_offset": 2048, 00:20:01.010 "data_size": 63488 00:20:01.010 }, 00:20:01.010 { 00:20:01.010 "name": "pt3", 00:20:01.010 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:20:01.010 "is_configured": true, 00:20:01.010 "data_offset": 2048, 00:20:01.010 "data_size": 63488 00:20:01.010 }, 00:20:01.010 { 00:20:01.010 "name": "pt4", 00:20:01.010 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:20:01.010 "is_configured": true, 00:20:01.010 "data_offset": 2048, 00:20:01.010 "data_size": 63488 00:20:01.010 } 00:20:01.010 ] 00:20:01.010 }' 00:20:01.010 02:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.010 02:20:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:01.270 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:01.530 [2024-05-15 02:20:49.538682] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.789 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:01.789 "name": "raid_bdev1", 00:20:01.789 "aliases": [ 00:20:01.789 "b9c6917a-1261-11ef-99fd-bfc7c66e2865" 00:20:01.789 ], 00:20:01.789 "product_name": "Raid Volume", 00:20:01.789 "block_size": 512, 00:20:01.789 "num_blocks": 253952, 00:20:01.789 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:20:01.789 "assigned_rate_limits": { 00:20:01.789 "rw_ios_per_sec": 0, 00:20:01.789 "rw_mbytes_per_sec": 0, 00:20:01.789 "r_mbytes_per_sec": 0, 00:20:01.789 "w_mbytes_per_sec": 0 00:20:01.789 }, 00:20:01.789 "claimed": false, 00:20:01.789 "zoned": false, 00:20:01.789 "supported_io_types": { 00:20:01.789 "read": true, 00:20:01.789 "write": true, 00:20:01.789 "unmap": true, 00:20:01.789 "write_zeroes": true, 00:20:01.789 "flush": true, 00:20:01.789 "reset": true, 00:20:01.789 "compare": false, 00:20:01.789 "compare_and_write": false, 00:20:01.789 "abort": false, 00:20:01.789 "nvme_admin": false, 00:20:01.789 "nvme_io": false 00:20:01.789 }, 00:20:01.789 "memory_domains": [ 00:20:01.789 { 00:20:01.789 "dma_device_id": "system", 00:20:01.789 "dma_device_type": 1 00:20:01.789 }, 00:20:01.789 { 00:20:01.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.789 "dma_device_type": 2 00:20:01.789 }, 00:20:01.789 { 00:20:01.789 "dma_device_id": "system", 00:20:01.789 "dma_device_type": 1 00:20:01.789 }, 00:20:01.789 { 00:20:01.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.789 "dma_device_type": 2 00:20:01.790 }, 00:20:01.790 { 00:20:01.790 "dma_device_id": "system", 00:20:01.790 "dma_device_type": 1 00:20:01.790 }, 00:20:01.790 { 00:20:01.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.790 "dma_device_type": 2 00:20:01.790 }, 00:20:01.790 { 00:20:01.790 "dma_device_id": "system", 00:20:01.790 "dma_device_type": 1 00:20:01.790 }, 00:20:01.790 { 00:20:01.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.790 "dma_device_type": 2 00:20:01.790 } 00:20:01.790 ], 00:20:01.790 "driver_specific": { 00:20:01.790 "raid": { 00:20:01.790 "uuid": "b9c6917a-1261-11ef-99fd-bfc7c66e2865", 00:20:01.790 "strip_size_kb": 64, 00:20:01.790 "state": "online", 00:20:01.790 "raid_level": "concat", 00:20:01.790 "superblock": true, 00:20:01.790 "num_base_bdevs": 4, 00:20:01.790 "num_base_bdevs_discovered": 4, 00:20:01.790 "num_base_bdevs_operational": 4, 00:20:01.790 "base_bdevs_list": [ 00:20:01.790 { 
00:20:01.790 "name": "pt1", 00:20:01.790 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:20:01.790 "is_configured": true, 00:20:01.790 "data_offset": 2048, 00:20:01.790 "data_size": 63488 00:20:01.790 }, 00:20:01.790 { 00:20:01.790 "name": "pt2", 00:20:01.790 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:20:01.790 "is_configured": true, 00:20:01.790 "data_offset": 2048, 00:20:01.790 "data_size": 63488 00:20:01.790 }, 00:20:01.790 { 00:20:01.790 "name": "pt3", 00:20:01.790 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:20:01.790 "is_configured": true, 00:20:01.790 "data_offset": 2048, 00:20:01.790 "data_size": 63488 00:20:01.790 }, 00:20:01.790 { 00:20:01.790 "name": "pt4", 00:20:01.790 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:20:01.790 "is_configured": true, 00:20:01.790 "data_offset": 2048, 00:20:01.790 "data_size": 63488 00:20:01.790 } 00:20:01.790 ] 00:20:01.790 } 00:20:01.790 } 00:20:01.790 }' 00:20:01.790 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.790 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:20:01.790 pt2 00:20:01.790 pt3 00:20:01.790 pt4' 00:20:01.790 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:01.790 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:01.790 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:02.049 "name": "pt1", 00:20:02.049 "aliases": [ 00:20:02.049 "f99fca5f-8c74-d050-bb3c-6b810ff63bd3" 00:20:02.049 ], 00:20:02.049 "product_name": "passthru", 00:20:02.049 "block_size": 512, 00:20:02.049 "num_blocks": 65536, 00:20:02.049 "uuid": "f99fca5f-8c74-d050-bb3c-6b810ff63bd3", 00:20:02.049 "assigned_rate_limits": { 00:20:02.049 "rw_ios_per_sec": 0, 00:20:02.049 "rw_mbytes_per_sec": 0, 00:20:02.049 "r_mbytes_per_sec": 0, 00:20:02.049 "w_mbytes_per_sec": 0 00:20:02.049 }, 00:20:02.049 "claimed": true, 00:20:02.049 "claim_type": "exclusive_write", 00:20:02.049 "zoned": false, 00:20:02.049 "supported_io_types": { 00:20:02.049 "read": true, 00:20:02.049 "write": true, 00:20:02.049 "unmap": true, 00:20:02.049 "write_zeroes": true, 00:20:02.049 "flush": true, 00:20:02.049 "reset": true, 00:20:02.049 "compare": false, 00:20:02.049 "compare_and_write": false, 00:20:02.049 "abort": true, 00:20:02.049 "nvme_admin": false, 00:20:02.049 "nvme_io": false 00:20:02.049 }, 00:20:02.049 "memory_domains": [ 00:20:02.049 { 00:20:02.049 "dma_device_id": "system", 00:20:02.049 "dma_device_type": 1 00:20:02.049 }, 00:20:02.049 { 00:20:02.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.049 "dma_device_type": 2 00:20:02.049 } 00:20:02.049 ], 00:20:02.049 "driver_specific": { 00:20:02.049 "passthru": { 00:20:02.049 "name": "pt1", 00:20:02.049 "base_bdev_name": "malloc1" 00:20:02.049 } 00:20:02.049 } 00:20:02.049 }' 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:02.049 02:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:02.308 "name": "pt2", 00:20:02.308 "aliases": [ 00:20:02.308 "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0" 00:20:02.308 ], 00:20:02.308 "product_name": "passthru", 00:20:02.308 "block_size": 512, 00:20:02.308 "num_blocks": 65536, 00:20:02.308 "uuid": "61bdaf2c-aeb4-0653-9c3e-3d4ba6a6b6e0", 00:20:02.308 "assigned_rate_limits": { 00:20:02.308 "rw_ios_per_sec": 0, 00:20:02.308 "rw_mbytes_per_sec": 0, 00:20:02.308 "r_mbytes_per_sec": 0, 00:20:02.308 "w_mbytes_per_sec": 0 00:20:02.308 }, 00:20:02.308 "claimed": true, 00:20:02.308 "claim_type": "exclusive_write", 00:20:02.308 "zoned": false, 00:20:02.308 "supported_io_types": { 00:20:02.308 "read": true, 00:20:02.308 "write": true, 00:20:02.308 "unmap": true, 00:20:02.308 "write_zeroes": true, 00:20:02.308 "flush": true, 00:20:02.308 "reset": true, 00:20:02.308 "compare": false, 00:20:02.308 "compare_and_write": false, 00:20:02.308 "abort": true, 00:20:02.308 "nvme_admin": false, 00:20:02.308 "nvme_io": false 00:20:02.308 }, 00:20:02.308 "memory_domains": [ 00:20:02.308 { 00:20:02.308 "dma_device_id": "system", 00:20:02.308 "dma_device_type": 1 00:20:02.308 }, 00:20:02.308 { 00:20:02.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.308 "dma_device_type": 2 00:20:02.308 } 00:20:02.308 ], 00:20:02.308 "driver_specific": { 00:20:02.308 "passthru": { 00:20:02.308 "name": "pt2", 00:20:02.308 "base_bdev_name": "malloc2" 00:20:02.308 } 00:20:02.308 } 00:20:02.308 }' 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.308 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.567 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.567 02:20:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.567 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.567 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.567 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:02.567 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:02.567 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:02.567 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:02.826 "name": "pt3", 00:20:02.826 "aliases": [ 00:20:02.826 "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96" 00:20:02.826 ], 00:20:02.826 "product_name": "passthru", 00:20:02.826 "block_size": 512, 00:20:02.826 "num_blocks": 65536, 00:20:02.826 "uuid": "5a2dc2be-d409-e459-b3e0-bfaa1e7d4d96", 00:20:02.826 "assigned_rate_limits": { 00:20:02.826 "rw_ios_per_sec": 0, 00:20:02.826 "rw_mbytes_per_sec": 0, 00:20:02.826 "r_mbytes_per_sec": 0, 00:20:02.826 "w_mbytes_per_sec": 0 00:20:02.826 }, 00:20:02.826 "claimed": true, 00:20:02.826 "claim_type": "exclusive_write", 00:20:02.826 "zoned": false, 00:20:02.826 "supported_io_types": { 00:20:02.826 "read": true, 00:20:02.826 "write": true, 00:20:02.826 "unmap": true, 00:20:02.826 "write_zeroes": true, 00:20:02.826 "flush": true, 00:20:02.826 "reset": true, 00:20:02.826 "compare": false, 00:20:02.826 "compare_and_write": false, 00:20:02.826 "abort": true, 00:20:02.826 "nvme_admin": false, 00:20:02.826 "nvme_io": false 00:20:02.826 }, 00:20:02.826 "memory_domains": [ 00:20:02.826 { 00:20:02.826 "dma_device_id": "system", 00:20:02.826 "dma_device_type": 1 00:20:02.826 }, 00:20:02.826 { 00:20:02.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.826 "dma_device_type": 2 00:20:02.826 } 00:20:02.826 ], 00:20:02.826 "driver_specific": { 00:20:02.826 "passthru": { 00:20:02.826 "name": "pt3", 00:20:02.826 "base_bdev_name": "malloc3" 00:20:02.826 } 00:20:02.826 } 00:20:02.826 }' 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:02.826 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:03.084 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:03.084 "name": "pt4", 00:20:03.084 "aliases": [ 00:20:03.084 "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3" 00:20:03.084 ], 00:20:03.084 "product_name": "passthru", 00:20:03.084 "block_size": 512, 00:20:03.084 "num_blocks": 65536, 00:20:03.084 "uuid": "51fd53f3-831a-6d5a-99cb-8a96fe85e2b3", 00:20:03.084 "assigned_rate_limits": { 00:20:03.084 "rw_ios_per_sec": 0, 00:20:03.084 "rw_mbytes_per_sec": 0, 00:20:03.084 "r_mbytes_per_sec": 0, 00:20:03.084 "w_mbytes_per_sec": 0 00:20:03.084 }, 00:20:03.084 "claimed": true, 00:20:03.084 "claim_type": "exclusive_write", 00:20:03.084 "zoned": false, 00:20:03.084 "supported_io_types": { 00:20:03.084 "read": true, 00:20:03.084 "write": true, 00:20:03.084 "unmap": true, 00:20:03.084 "write_zeroes": true, 00:20:03.084 "flush": true, 00:20:03.084 "reset": true, 00:20:03.084 "compare": false, 00:20:03.084 "compare_and_write": false, 00:20:03.084 "abort": true, 00:20:03.084 "nvme_admin": false, 00:20:03.084 "nvme_io": false 00:20:03.084 }, 00:20:03.084 "memory_domains": [ 00:20:03.084 { 00:20:03.084 "dma_device_id": "system", 00:20:03.084 "dma_device_type": 1 00:20:03.084 }, 00:20:03.084 { 00:20:03.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.084 "dma_device_type": 2 00:20:03.084 } 00:20:03.084 ], 00:20:03.084 "driver_specific": { 00:20:03.084 "passthru": { 00:20:03.084 "name": "pt4", 00:20:03.084 "base_bdev_name": "malloc4" 00:20:03.084 } 00:20:03.084 } 00:20:03.084 }' 00:20:03.084 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:03.084 02:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:03.084 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:03.342 [2024-05-15 02:20:51.298774] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
b9c6917a-1261-11ef-99fd-bfc7c66e2865 '!=' b9c6917a-1261-11ef-99fd-bfc7c66e2865 ']' 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60935 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 60935 ']' 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 60935 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 60935 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:20:03.342 killing process with pid 60935 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60935' 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 60935 00:20:03.342 [2024-05-15 02:20:51.328385] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.342 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 60935 00:20:03.342 [2024-05-15 02:20:51.328449] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.342 [2024-05-15 02:20:51.328485] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.342 [2024-05-15 02:20:51.328497] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aaedc80 name raid_bdev1, state offline 00:20:03.343 [2024-05-15 02:20:51.347918] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.602 02:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:03.602 00:20:03.602 real 0m14.336s 00:20:03.602 user 0m25.682s 00:20:03.602 sys 0m2.184s 00:20:03.602 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:03.602 02:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.602 ************************************ 00:20:03.602 END TEST raid_superblock_test 00:20:03.602 ************************************ 00:20:03.602 02:20:51 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:20:03.602 02:20:51 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:20:03.602 02:20:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:03.602 02:20:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:03.602 02:20:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.602 ************************************ 00:20:03.602 START TEST raid_state_function_test 00:20:03.602 ************************************ 00:20:03.602 02:20:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=61334 00:20:03.602 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 61334' 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 
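For orientation, the app start-up this stretch of the trace performs reduces roughly to the short shell sketch below. The binary path, RPC socket and flags are the ones printed in the log; the polling loop is only a stand-in for the suite's waitforlisten helper, which is not reproduced here.

    # Launch the bare bdev service used by raid_state_function_test and wait for its RPC socket.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    "$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!
    # Stand-in for waitforlisten (assumption): poll until rpc.py gets an answer on the socket.
    until "$SPDK"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
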
00:20:03.603 Process raid pid: 61334 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 61334 /var/tmp/spdk-raid.sock 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 61334 ']' 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:03.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.603 02:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.603 [2024-05-15 02:20:51.546317] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:03.603 [2024-05-15 02:20:51.546583] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:20:04.170 EAL: TSC is not safe to use in SMP mode 00:20:04.170 EAL: TSC is not invariant 00:20:04.170 [2024-05-15 02:20:52.037708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.170 [2024-05-15 02:20:52.144918] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:04.170 [2024-05-15 02:20:52.147669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.170 [2024-05-15 02:20:52.148779] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.170 [2024-05-15 02:20:52.148806] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.738 02:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.738 02:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:20:04.738 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:05.014 [2024-05-15 02:20:52.839498] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:05.014 [2024-05-15 02:20:52.839572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:05.014 [2024-05-15 02:20:52.839583] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:05.014 [2024-05-15 02:20:52.839596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:05.014 [2024-05-15 02:20:52.839601] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:05.014 [2024-05-15 02:20:52.839613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:05.014 [2024-05-15 02:20:52.839619] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:05.014 [2024-05-15 02:20:52.839635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.014 02:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.272 02:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.272 "name": "Existed_Raid", 00:20:05.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.272 "strip_size_kb": 0, 00:20:05.272 "state": "configuring", 00:20:05.272 "raid_level": "raid1", 00:20:05.272 "superblock": false, 00:20:05.272 "num_base_bdevs": 4, 00:20:05.272 "num_base_bdevs_discovered": 0, 00:20:05.272 "num_base_bdevs_operational": 4, 00:20:05.272 "base_bdevs_list": [ 00:20:05.272 { 00:20:05.272 "name": "BaseBdev1", 00:20:05.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.272 "is_configured": false, 00:20:05.272 "data_offset": 0, 00:20:05.272 "data_size": 0 00:20:05.272 }, 00:20:05.272 { 00:20:05.272 "name": "BaseBdev2", 00:20:05.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.272 "is_configured": false, 00:20:05.272 "data_offset": 0, 00:20:05.272 "data_size": 0 00:20:05.272 }, 00:20:05.272 { 00:20:05.272 "name": "BaseBdev3", 00:20:05.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.272 "is_configured": false, 00:20:05.272 "data_offset": 0, 00:20:05.272 "data_size": 0 00:20:05.272 }, 00:20:05.272 { 00:20:05.272 "name": "BaseBdev4", 00:20:05.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.272 "is_configured": false, 00:20:05.272 "data_offset": 0, 00:20:05.272 "data_size": 0 00:20:05.272 } 00:20:05.272 ] 00:20:05.272 }' 00:20:05.272 02:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.272 02:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.529 02:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:05.787 [2024-05-15 02:20:53.687535] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:05.787 [2024-05-15 02:20:53.687573] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceeb500 name Existed_Raid, state configuring 00:20:05.787 02:20:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:06.045 [2024-05-15 02:20:53.939578] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:06.045 [2024-05-15 02:20:53.939667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:06.045 [2024-05-15 02:20:53.939676] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:06.045 [2024-05-15 02:20:53.939696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:06.045 [2024-05-15 02:20:53.939706] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:06.045 [2024-05-15 02:20:53.939720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:06.045 [2024-05-15 02:20:53.939727] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:06.045 [2024-05-15 02:20:53.939750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:06.045 02:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:06.303 [2024-05-15 02:20:54.272613] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.303 BaseBdev1 00:20:06.303 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:20:06.303 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:06.303 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:06.303 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:06.303 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:06.303 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:06.303 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.560 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:06.818 [ 00:20:06.818 { 00:20:06.818 "name": "BaseBdev1", 00:20:06.818 "aliases": [ 00:20:06.818 "c1b43873-1261-11ef-99fd-bfc7c66e2865" 00:20:06.818 ], 00:20:06.818 "product_name": "Malloc disk", 00:20:06.818 "block_size": 512, 00:20:06.818 "num_blocks": 65536, 00:20:06.818 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:06.818 "assigned_rate_limits": { 00:20:06.818 "rw_ios_per_sec": 0, 00:20:06.818 "rw_mbytes_per_sec": 0, 00:20:06.818 "r_mbytes_per_sec": 0, 00:20:06.818 "w_mbytes_per_sec": 0 00:20:06.818 }, 00:20:06.818 "claimed": true, 00:20:06.818 "claim_type": "exclusive_write", 00:20:06.818 "zoned": false, 00:20:06.818 "supported_io_types": { 00:20:06.818 "read": true, 00:20:06.818 "write": true, 00:20:06.818 "unmap": true, 00:20:06.818 "write_zeroes": true, 00:20:06.818 "flush": true, 00:20:06.818 "reset": true, 00:20:06.818 
"compare": false, 00:20:06.818 "compare_and_write": false, 00:20:06.818 "abort": true, 00:20:06.819 "nvme_admin": false, 00:20:06.819 "nvme_io": false 00:20:06.819 }, 00:20:06.819 "memory_domains": [ 00:20:06.819 { 00:20:06.819 "dma_device_id": "system", 00:20:06.819 "dma_device_type": 1 00:20:06.819 }, 00:20:06.819 { 00:20:06.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.819 "dma_device_type": 2 00:20:06.819 } 00:20:06.819 ], 00:20:06.819 "driver_specific": {} 00:20:06.819 } 00:20:06.819 ] 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.819 02:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.076 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.076 "name": "Existed_Raid", 00:20:07.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.076 "strip_size_kb": 0, 00:20:07.076 "state": "configuring", 00:20:07.076 "raid_level": "raid1", 00:20:07.076 "superblock": false, 00:20:07.076 "num_base_bdevs": 4, 00:20:07.076 "num_base_bdevs_discovered": 1, 00:20:07.076 "num_base_bdevs_operational": 4, 00:20:07.076 "base_bdevs_list": [ 00:20:07.076 { 00:20:07.076 "name": "BaseBdev1", 00:20:07.076 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:07.076 "is_configured": true, 00:20:07.076 "data_offset": 0, 00:20:07.076 "data_size": 65536 00:20:07.076 }, 00:20:07.076 { 00:20:07.076 "name": "BaseBdev2", 00:20:07.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.076 "is_configured": false, 00:20:07.076 "data_offset": 0, 00:20:07.076 "data_size": 0 00:20:07.076 }, 00:20:07.076 { 00:20:07.076 "name": "BaseBdev3", 00:20:07.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.076 "is_configured": false, 00:20:07.076 "data_offset": 0, 00:20:07.076 "data_size": 0 00:20:07.076 }, 00:20:07.076 { 00:20:07.076 "name": "BaseBdev4", 00:20:07.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.076 "is_configured": false, 00:20:07.076 "data_offset": 0, 00:20:07.076 "data_size": 0 00:20:07.076 } 00:20:07.076 ] 00:20:07.076 }' 00:20:07.076 02:20:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.076 02:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.334 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:07.899 [2024-05-15 02:20:55.655650] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.899 [2024-05-15 02:20:55.655694] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceeb500 name Existed_Raid, state configuring 00:20:07.899 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:08.157 [2024-05-15 02:20:55.971695] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.157 [2024-05-15 02:20:55.972441] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.157 [2024-05-15 02:20:55.972497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.157 [2024-05-15 02:20:55.972502] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:08.157 [2024-05-15 02:20:55.972511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:08.157 [2024-05-15 02:20:55.972515] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:08.157 [2024-05-15 02:20:55.972522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.157 02:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.416 02:20:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.416 "name": "Existed_Raid", 00:20:08.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.416 "strip_size_kb": 0, 00:20:08.416 "state": "configuring", 00:20:08.416 "raid_level": "raid1", 00:20:08.416 "superblock": false, 00:20:08.416 "num_base_bdevs": 4, 00:20:08.416 "num_base_bdevs_discovered": 1, 00:20:08.416 "num_base_bdevs_operational": 4, 00:20:08.416 "base_bdevs_list": [ 00:20:08.416 { 00:20:08.416 "name": "BaseBdev1", 00:20:08.416 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:08.416 "is_configured": true, 00:20:08.416 "data_offset": 0, 00:20:08.416 "data_size": 65536 00:20:08.416 }, 00:20:08.416 { 00:20:08.416 "name": "BaseBdev2", 00:20:08.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.416 "is_configured": false, 00:20:08.416 "data_offset": 0, 00:20:08.416 "data_size": 0 00:20:08.416 }, 00:20:08.416 { 00:20:08.416 "name": "BaseBdev3", 00:20:08.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.416 "is_configured": false, 00:20:08.416 "data_offset": 0, 00:20:08.416 "data_size": 0 00:20:08.416 }, 00:20:08.416 { 00:20:08.416 "name": "BaseBdev4", 00:20:08.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.416 "is_configured": false, 00:20:08.416 "data_offset": 0, 00:20:08.416 "data_size": 0 00:20:08.416 } 00:20:08.416 ] 00:20:08.416 }' 00:20:08.416 02:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.416 02:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.674 02:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:08.932 [2024-05-15 02:20:56.927882] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.932 BaseBdev2 00:20:08.932 02:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:20:08.932 02:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:08.932 02:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:08.932 02:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:08.932 02:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:08.932 02:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:08.932 02:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:09.499 [ 00:20:09.499 { 00:20:09.499 "name": "BaseBdev2", 00:20:09.499 "aliases": [ 00:20:09.499 "c349840a-1261-11ef-99fd-bfc7c66e2865" 00:20:09.499 ], 00:20:09.499 "product_name": "Malloc disk", 00:20:09.499 "block_size": 512, 00:20:09.499 "num_blocks": 65536, 00:20:09.499 "uuid": "c349840a-1261-11ef-99fd-bfc7c66e2865", 00:20:09.499 "assigned_rate_limits": { 00:20:09.499 "rw_ios_per_sec": 0, 00:20:09.499 "rw_mbytes_per_sec": 0, 00:20:09.499 "r_mbytes_per_sec": 0, 00:20:09.499 "w_mbytes_per_sec": 0 00:20:09.499 }, 00:20:09.499 "claimed": true, 00:20:09.499 
"claim_type": "exclusive_write", 00:20:09.499 "zoned": false, 00:20:09.499 "supported_io_types": { 00:20:09.499 "read": true, 00:20:09.499 "write": true, 00:20:09.499 "unmap": true, 00:20:09.499 "write_zeroes": true, 00:20:09.499 "flush": true, 00:20:09.499 "reset": true, 00:20:09.499 "compare": false, 00:20:09.499 "compare_and_write": false, 00:20:09.499 "abort": true, 00:20:09.499 "nvme_admin": false, 00:20:09.499 "nvme_io": false 00:20:09.499 }, 00:20:09.499 "memory_domains": [ 00:20:09.499 { 00:20:09.499 "dma_device_id": "system", 00:20:09.499 "dma_device_type": 1 00:20:09.499 }, 00:20:09.499 { 00:20:09.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.499 "dma_device_type": 2 00:20:09.499 } 00:20:09.499 ], 00:20:09.499 "driver_specific": {} 00:20:09.499 } 00:20:09.499 ] 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.499 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.500 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:09.500 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.500 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.500 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.500 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.500 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.500 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.066 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.066 "name": "Existed_Raid", 00:20:10.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.066 "strip_size_kb": 0, 00:20:10.066 "state": "configuring", 00:20:10.066 "raid_level": "raid1", 00:20:10.066 "superblock": false, 00:20:10.066 "num_base_bdevs": 4, 00:20:10.066 "num_base_bdevs_discovered": 2, 00:20:10.066 "num_base_bdevs_operational": 4, 00:20:10.066 "base_bdevs_list": [ 00:20:10.066 { 00:20:10.066 "name": "BaseBdev1", 00:20:10.066 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:10.066 "is_configured": true, 00:20:10.066 "data_offset": 0, 00:20:10.066 "data_size": 65536 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "name": "BaseBdev2", 00:20:10.066 "uuid": "c349840a-1261-11ef-99fd-bfc7c66e2865", 00:20:10.066 "is_configured": true, 00:20:10.066 "data_offset": 0, 00:20:10.066 "data_size": 65536 00:20:10.066 }, 00:20:10.066 { 
00:20:10.066 "name": "BaseBdev3", 00:20:10.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.066 "is_configured": false, 00:20:10.066 "data_offset": 0, 00:20:10.066 "data_size": 0 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "name": "BaseBdev4", 00:20:10.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.066 "is_configured": false, 00:20:10.066 "data_offset": 0, 00:20:10.066 "data_size": 0 00:20:10.066 } 00:20:10.066 ] 00:20:10.066 }' 00:20:10.066 02:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.066 02:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.324 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:10.325 [2024-05-15 02:20:58.319959] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.325 BaseBdev3 00:20:10.325 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:20:10.325 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:10.325 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:10.325 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:10.325 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:10.325 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:10.325 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:10.891 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:11.149 [ 00:20:11.149 { 00:20:11.149 "name": "BaseBdev3", 00:20:11.149 "aliases": [ 00:20:11.149 "c41dee9c-1261-11ef-99fd-bfc7c66e2865" 00:20:11.149 ], 00:20:11.149 "product_name": "Malloc disk", 00:20:11.149 "block_size": 512, 00:20:11.149 "num_blocks": 65536, 00:20:11.149 "uuid": "c41dee9c-1261-11ef-99fd-bfc7c66e2865", 00:20:11.149 "assigned_rate_limits": { 00:20:11.149 "rw_ios_per_sec": 0, 00:20:11.149 "rw_mbytes_per_sec": 0, 00:20:11.149 "r_mbytes_per_sec": 0, 00:20:11.149 "w_mbytes_per_sec": 0 00:20:11.149 }, 00:20:11.149 "claimed": true, 00:20:11.149 "claim_type": "exclusive_write", 00:20:11.149 "zoned": false, 00:20:11.149 "supported_io_types": { 00:20:11.149 "read": true, 00:20:11.149 "write": true, 00:20:11.149 "unmap": true, 00:20:11.149 "write_zeroes": true, 00:20:11.149 "flush": true, 00:20:11.149 "reset": true, 00:20:11.149 "compare": false, 00:20:11.149 "compare_and_write": false, 00:20:11.149 "abort": true, 00:20:11.149 "nvme_admin": false, 00:20:11.149 "nvme_io": false 00:20:11.149 }, 00:20:11.149 "memory_domains": [ 00:20:11.149 { 00:20:11.149 "dma_device_id": "system", 00:20:11.149 "dma_device_type": 1 00:20:11.149 }, 00:20:11.149 { 00:20:11.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.149 "dma_device_type": 2 00:20:11.149 } 00:20:11.149 ], 00:20:11.150 "driver_specific": {} 00:20:11.150 } 00:20:11.150 ] 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:11.150 
02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.150 02:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.409 02:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.409 "name": "Existed_Raid", 00:20:11.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.409 "strip_size_kb": 0, 00:20:11.409 "state": "configuring", 00:20:11.409 "raid_level": "raid1", 00:20:11.409 "superblock": false, 00:20:11.409 "num_base_bdevs": 4, 00:20:11.409 "num_base_bdevs_discovered": 3, 00:20:11.409 "num_base_bdevs_operational": 4, 00:20:11.409 "base_bdevs_list": [ 00:20:11.409 { 00:20:11.409 "name": "BaseBdev1", 00:20:11.409 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:11.409 "is_configured": true, 00:20:11.409 "data_offset": 0, 00:20:11.409 "data_size": 65536 00:20:11.409 }, 00:20:11.409 { 00:20:11.409 "name": "BaseBdev2", 00:20:11.409 "uuid": "c349840a-1261-11ef-99fd-bfc7c66e2865", 00:20:11.409 "is_configured": true, 00:20:11.409 "data_offset": 0, 00:20:11.409 "data_size": 65536 00:20:11.409 }, 00:20:11.409 { 00:20:11.409 "name": "BaseBdev3", 00:20:11.409 "uuid": "c41dee9c-1261-11ef-99fd-bfc7c66e2865", 00:20:11.409 "is_configured": true, 00:20:11.409 "data_offset": 0, 00:20:11.409 "data_size": 65536 00:20:11.409 }, 00:20:11.409 { 00:20:11.409 "name": "BaseBdev4", 00:20:11.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.409 "is_configured": false, 00:20:11.409 "data_offset": 0, 00:20:11.409 "data_size": 0 00:20:11.409 } 00:20:11.409 ] 00:20:11.409 }' 00:20:11.409 02:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.409 02:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.668 02:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:11.926 [2024-05-15 02:20:59.932021] 
bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:11.926 [2024-05-15 02:20:59.932052] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ceeba00 00:20:11.926 [2024-05-15 02:20:59.932057] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:11.926 [2024-05-15 02:20:59.932088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cf4eec0 00:20:11.926 [2024-05-15 02:20:59.932176] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ceeba00 00:20:11.926 [2024-05-15 02:20:59.932180] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ceeba00 00:20:11.926 [2024-05-15 02:20:59.932210] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.926 BaseBdev4 00:20:12.184 02:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:20:12.184 02:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:20:12.184 02:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:12.184 02:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:12.184 02:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:12.184 02:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:12.184 02:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:12.448 [ 00:20:12.448 { 00:20:12.448 "name": "BaseBdev4", 00:20:12.448 "aliases": [ 00:20:12.448 "c513ea63-1261-11ef-99fd-bfc7c66e2865" 00:20:12.448 ], 00:20:12.448 "product_name": "Malloc disk", 00:20:12.448 "block_size": 512, 00:20:12.448 "num_blocks": 65536, 00:20:12.448 "uuid": "c513ea63-1261-11ef-99fd-bfc7c66e2865", 00:20:12.448 "assigned_rate_limits": { 00:20:12.448 "rw_ios_per_sec": 0, 00:20:12.448 "rw_mbytes_per_sec": 0, 00:20:12.448 "r_mbytes_per_sec": 0, 00:20:12.448 "w_mbytes_per_sec": 0 00:20:12.448 }, 00:20:12.448 "claimed": true, 00:20:12.448 "claim_type": "exclusive_write", 00:20:12.448 "zoned": false, 00:20:12.448 "supported_io_types": { 00:20:12.448 "read": true, 00:20:12.448 "write": true, 00:20:12.448 "unmap": true, 00:20:12.448 "write_zeroes": true, 00:20:12.448 "flush": true, 00:20:12.448 "reset": true, 00:20:12.448 "compare": false, 00:20:12.448 "compare_and_write": false, 00:20:12.448 "abort": true, 00:20:12.448 "nvme_admin": false, 00:20:12.448 "nvme_io": false 00:20:12.448 }, 00:20:12.448 "memory_domains": [ 00:20:12.448 { 00:20:12.448 "dma_device_id": "system", 00:20:12.448 "dma_device_type": 1 00:20:12.448 }, 00:20:12.448 { 00:20:12.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.448 "dma_device_type": 2 00:20:12.448 } 00:20:12.448 ], 00:20:12.448 "driver_specific": {} 00:20:12.448 } 00:20:12.448 ] 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:12.448 02:21:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.448 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.706 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.706 "name": "Existed_Raid", 00:20:12.706 "uuid": "c513f090-1261-11ef-99fd-bfc7c66e2865", 00:20:12.706 "strip_size_kb": 0, 00:20:12.706 "state": "online", 00:20:12.706 "raid_level": "raid1", 00:20:12.706 "superblock": false, 00:20:12.706 "num_base_bdevs": 4, 00:20:12.706 "num_base_bdevs_discovered": 4, 00:20:12.706 "num_base_bdevs_operational": 4, 00:20:12.706 "base_bdevs_list": [ 00:20:12.706 { 00:20:12.706 "name": "BaseBdev1", 00:20:12.706 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:12.706 "is_configured": true, 00:20:12.706 "data_offset": 0, 00:20:12.706 "data_size": 65536 00:20:12.706 }, 00:20:12.706 { 00:20:12.706 "name": "BaseBdev2", 00:20:12.706 "uuid": "c349840a-1261-11ef-99fd-bfc7c66e2865", 00:20:12.706 "is_configured": true, 00:20:12.706 "data_offset": 0, 00:20:12.706 "data_size": 65536 00:20:12.706 }, 00:20:12.706 { 00:20:12.706 "name": "BaseBdev3", 00:20:12.706 "uuid": "c41dee9c-1261-11ef-99fd-bfc7c66e2865", 00:20:12.706 "is_configured": true, 00:20:12.706 "data_offset": 0, 00:20:12.706 "data_size": 65536 00:20:12.706 }, 00:20:12.706 { 00:20:12.706 "name": "BaseBdev4", 00:20:12.706 "uuid": "c513ea63-1261-11ef-99fd-bfc7c66e2865", 00:20:12.706 "is_configured": true, 00:20:12.706 "data_offset": 0, 00:20:12.706 "data_size": 65536 00:20:12.706 } 00:20:12.706 ] 00:20:12.706 }' 00:20:12.706 02:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.706 02:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.271 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:13.271 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:13.271 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:13.271 
02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:13.271 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:13.271 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:13.271 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:13.271 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:13.271 [2024-05-15 02:21:01.288076] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.529 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:13.529 "name": "Existed_Raid", 00:20:13.529 "aliases": [ 00:20:13.529 "c513f090-1261-11ef-99fd-bfc7c66e2865" 00:20:13.529 ], 00:20:13.529 "product_name": "Raid Volume", 00:20:13.529 "block_size": 512, 00:20:13.529 "num_blocks": 65536, 00:20:13.529 "uuid": "c513f090-1261-11ef-99fd-bfc7c66e2865", 00:20:13.529 "assigned_rate_limits": { 00:20:13.529 "rw_ios_per_sec": 0, 00:20:13.529 "rw_mbytes_per_sec": 0, 00:20:13.529 "r_mbytes_per_sec": 0, 00:20:13.529 "w_mbytes_per_sec": 0 00:20:13.529 }, 00:20:13.529 "claimed": false, 00:20:13.529 "zoned": false, 00:20:13.529 "supported_io_types": { 00:20:13.529 "read": true, 00:20:13.529 "write": true, 00:20:13.529 "unmap": false, 00:20:13.529 "write_zeroes": true, 00:20:13.529 "flush": false, 00:20:13.529 "reset": true, 00:20:13.529 "compare": false, 00:20:13.529 "compare_and_write": false, 00:20:13.529 "abort": false, 00:20:13.529 "nvme_admin": false, 00:20:13.529 "nvme_io": false 00:20:13.529 }, 00:20:13.529 "memory_domains": [ 00:20:13.529 { 00:20:13.529 "dma_device_id": "system", 00:20:13.529 "dma_device_type": 1 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.529 "dma_device_type": 2 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "system", 00:20:13.529 "dma_device_type": 1 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.529 "dma_device_type": 2 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "system", 00:20:13.529 "dma_device_type": 1 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.529 "dma_device_type": 2 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "system", 00:20:13.529 "dma_device_type": 1 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.529 "dma_device_type": 2 00:20:13.529 } 00:20:13.529 ], 00:20:13.529 "driver_specific": { 00:20:13.529 "raid": { 00:20:13.529 "uuid": "c513f090-1261-11ef-99fd-bfc7c66e2865", 00:20:13.529 "strip_size_kb": 0, 00:20:13.529 "state": "online", 00:20:13.529 "raid_level": "raid1", 00:20:13.529 "superblock": false, 00:20:13.529 "num_base_bdevs": 4, 00:20:13.529 "num_base_bdevs_discovered": 4, 00:20:13.529 "num_base_bdevs_operational": 4, 00:20:13.529 "base_bdevs_list": [ 00:20:13.529 { 00:20:13.529 "name": "BaseBdev1", 00:20:13.529 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": "BaseBdev2", 00:20:13.529 "uuid": "c349840a-1261-11ef-99fd-bfc7c66e2865", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 
"data_size": 65536 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": "BaseBdev3", 00:20:13.529 "uuid": "c41dee9c-1261-11ef-99fd-bfc7c66e2865", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": "BaseBdev4", 00:20:13.529 "uuid": "c513ea63-1261-11ef-99fd-bfc7c66e2865", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 } 00:20:13.529 } 00:20:13.529 }' 00:20:13.529 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:13.529 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:13.529 BaseBdev2 00:20:13.529 BaseBdev3 00:20:13.529 BaseBdev4' 00:20:13.529 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:13.529 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:13.529 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:13.786 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:13.786 "name": "BaseBdev1", 00:20:13.786 "aliases": [ 00:20:13.786 "c1b43873-1261-11ef-99fd-bfc7c66e2865" 00:20:13.786 ], 00:20:13.786 "product_name": "Malloc disk", 00:20:13.786 "block_size": 512, 00:20:13.786 "num_blocks": 65536, 00:20:13.787 "uuid": "c1b43873-1261-11ef-99fd-bfc7c66e2865", 00:20:13.787 "assigned_rate_limits": { 00:20:13.787 "rw_ios_per_sec": 0, 00:20:13.787 "rw_mbytes_per_sec": 0, 00:20:13.787 "r_mbytes_per_sec": 0, 00:20:13.787 "w_mbytes_per_sec": 0 00:20:13.787 }, 00:20:13.787 "claimed": true, 00:20:13.787 "claim_type": "exclusive_write", 00:20:13.787 "zoned": false, 00:20:13.787 "supported_io_types": { 00:20:13.787 "read": true, 00:20:13.787 "write": true, 00:20:13.787 "unmap": true, 00:20:13.787 "write_zeroes": true, 00:20:13.787 "flush": true, 00:20:13.787 "reset": true, 00:20:13.787 "compare": false, 00:20:13.787 "compare_and_write": false, 00:20:13.787 "abort": true, 00:20:13.787 "nvme_admin": false, 00:20:13.787 "nvme_io": false 00:20:13.787 }, 00:20:13.787 "memory_domains": [ 00:20:13.787 { 00:20:13.787 "dma_device_id": "system", 00:20:13.787 "dma_device_type": 1 00:20:13.787 }, 00:20:13.787 { 00:20:13.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.787 "dma_device_type": 2 00:20:13.787 } 00:20:13.787 ], 00:20:13.787 "driver_specific": {} 00:20:13.787 }' 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:13.787 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:14.044 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:14.044 "name": "BaseBdev2", 00:20:14.044 "aliases": [ 00:20:14.044 "c349840a-1261-11ef-99fd-bfc7c66e2865" 00:20:14.044 ], 00:20:14.044 "product_name": "Malloc disk", 00:20:14.044 "block_size": 512, 00:20:14.044 "num_blocks": 65536, 00:20:14.044 "uuid": "c349840a-1261-11ef-99fd-bfc7c66e2865", 00:20:14.044 "assigned_rate_limits": { 00:20:14.044 "rw_ios_per_sec": 0, 00:20:14.044 "rw_mbytes_per_sec": 0, 00:20:14.044 "r_mbytes_per_sec": 0, 00:20:14.044 "w_mbytes_per_sec": 0 00:20:14.044 }, 00:20:14.044 "claimed": true, 00:20:14.044 "claim_type": "exclusive_write", 00:20:14.044 "zoned": false, 00:20:14.044 "supported_io_types": { 00:20:14.044 "read": true, 00:20:14.044 "write": true, 00:20:14.044 "unmap": true, 00:20:14.044 "write_zeroes": true, 00:20:14.044 "flush": true, 00:20:14.044 "reset": true, 00:20:14.044 "compare": false, 00:20:14.044 "compare_and_write": false, 00:20:14.044 "abort": true, 00:20:14.044 "nvme_admin": false, 00:20:14.044 "nvme_io": false 00:20:14.044 }, 00:20:14.044 "memory_domains": [ 00:20:14.044 { 00:20:14.044 "dma_device_id": "system", 00:20:14.044 "dma_device_type": 1 00:20:14.044 }, 00:20:14.044 { 00:20:14.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.044 "dma_device_type": 2 00:20:14.044 } 00:20:14.044 ], 00:20:14.044 "driver_specific": {} 00:20:14.044 }' 00:20:14.044 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:14.044 02:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:14.044 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:14.609 "name": "BaseBdev3", 00:20:14.609 "aliases": [ 00:20:14.609 "c41dee9c-1261-11ef-99fd-bfc7c66e2865" 00:20:14.609 ], 00:20:14.609 "product_name": "Malloc disk", 00:20:14.609 "block_size": 512, 00:20:14.609 "num_blocks": 65536, 00:20:14.609 "uuid": "c41dee9c-1261-11ef-99fd-bfc7c66e2865", 00:20:14.609 "assigned_rate_limits": { 00:20:14.609 "rw_ios_per_sec": 0, 00:20:14.609 "rw_mbytes_per_sec": 0, 00:20:14.609 "r_mbytes_per_sec": 0, 00:20:14.609 "w_mbytes_per_sec": 0 00:20:14.609 }, 00:20:14.609 "claimed": true, 00:20:14.609 "claim_type": "exclusive_write", 00:20:14.609 "zoned": false, 00:20:14.609 "supported_io_types": { 00:20:14.609 "read": true, 00:20:14.609 "write": true, 00:20:14.609 "unmap": true, 00:20:14.609 "write_zeroes": true, 00:20:14.609 "flush": true, 00:20:14.609 "reset": true, 00:20:14.609 "compare": false, 00:20:14.609 "compare_and_write": false, 00:20:14.609 "abort": true, 00:20:14.609 "nvme_admin": false, 00:20:14.609 "nvme_io": false 00:20:14.609 }, 00:20:14.609 "memory_domains": [ 00:20:14.609 { 00:20:14.609 "dma_device_id": "system", 00:20:14.609 "dma_device_type": 1 00:20:14.609 }, 00:20:14.609 { 00:20:14.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.609 "dma_device_type": 2 00:20:14.609 } 00:20:14.609 ], 00:20:14.609 "driver_specific": {} 00:20:14.609 }' 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:14.609 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:14.883 "name": "BaseBdev4", 00:20:14.883 "aliases": [ 00:20:14.883 
"c513ea63-1261-11ef-99fd-bfc7c66e2865" 00:20:14.883 ], 00:20:14.883 "product_name": "Malloc disk", 00:20:14.883 "block_size": 512, 00:20:14.883 "num_blocks": 65536, 00:20:14.883 "uuid": "c513ea63-1261-11ef-99fd-bfc7c66e2865", 00:20:14.883 "assigned_rate_limits": { 00:20:14.883 "rw_ios_per_sec": 0, 00:20:14.883 "rw_mbytes_per_sec": 0, 00:20:14.883 "r_mbytes_per_sec": 0, 00:20:14.883 "w_mbytes_per_sec": 0 00:20:14.883 }, 00:20:14.883 "claimed": true, 00:20:14.883 "claim_type": "exclusive_write", 00:20:14.883 "zoned": false, 00:20:14.883 "supported_io_types": { 00:20:14.883 "read": true, 00:20:14.883 "write": true, 00:20:14.883 "unmap": true, 00:20:14.883 "write_zeroes": true, 00:20:14.883 "flush": true, 00:20:14.883 "reset": true, 00:20:14.883 "compare": false, 00:20:14.883 "compare_and_write": false, 00:20:14.883 "abort": true, 00:20:14.883 "nvme_admin": false, 00:20:14.883 "nvme_io": false 00:20:14.883 }, 00:20:14.883 "memory_domains": [ 00:20:14.883 { 00:20:14.883 "dma_device_id": "system", 00:20:14.883 "dma_device_type": 1 00:20:14.883 }, 00:20:14.883 { 00:20:14.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.883 "dma_device_type": 2 00:20:14.883 } 00:20:14.883 ], 00:20:14.883 "driver_specific": {} 00:20:14.883 }' 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:14.883 02:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:15.183 [2024-05-15 02:21:03.140553] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.183 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.440 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.440 "name": "Existed_Raid", 00:20:15.440 "uuid": "c513f090-1261-11ef-99fd-bfc7c66e2865", 00:20:15.440 "strip_size_kb": 0, 00:20:15.440 "state": "online", 00:20:15.440 "raid_level": "raid1", 00:20:15.440 "superblock": false, 00:20:15.440 "num_base_bdevs": 4, 00:20:15.440 "num_base_bdevs_discovered": 3, 00:20:15.440 "num_base_bdevs_operational": 3, 00:20:15.440 "base_bdevs_list": [ 00:20:15.440 { 00:20:15.440 "name": null, 00:20:15.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.440 "is_configured": false, 00:20:15.440 "data_offset": 0, 00:20:15.440 "data_size": 65536 00:20:15.440 }, 00:20:15.440 { 00:20:15.440 "name": "BaseBdev2", 00:20:15.440 "uuid": "c349840a-1261-11ef-99fd-bfc7c66e2865", 00:20:15.440 "is_configured": true, 00:20:15.440 "data_offset": 0, 00:20:15.440 "data_size": 65536 00:20:15.440 }, 00:20:15.440 { 00:20:15.440 "name": "BaseBdev3", 00:20:15.440 "uuid": "c41dee9c-1261-11ef-99fd-bfc7c66e2865", 00:20:15.440 "is_configured": true, 00:20:15.440 "data_offset": 0, 00:20:15.440 "data_size": 65536 00:20:15.440 }, 00:20:15.440 { 00:20:15.440 "name": "BaseBdev4", 00:20:15.440 "uuid": "c513ea63-1261-11ef-99fd-bfc7c66e2865", 00:20:15.440 "is_configured": true, 00:20:15.440 "data_offset": 0, 00:20:15.440 "data_size": 65536 00:20:15.440 } 00:20:15.440 ] 00:20:15.440 }' 00:20:15.440 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.440 02:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.006 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:16.006 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:16.007 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.007 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:16.007 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:16.007 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:20:16.007 02:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:16.264 [2024-05-15 02:21:04.221480] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:16.264 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:16.264 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:16.264 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.264 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:16.523 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:16.523 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:16.523 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:16.782 [2024-05-15 02:21:04.742358] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:16.782 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:16.782 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:16.782 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.782 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:17.043 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:17.043 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:17.043 02:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:17.302 [2024-05-15 02:21:05.263306] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:17.303 [2024-05-15 02:21:05.263351] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.303 [2024-05-15 02:21:05.268328] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.303 [2024-05-15 02:21:05.268350] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.303 [2024-05-15 02:21:05.268360] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceeba00 name Existed_Raid, state offline 00:20:17.303 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:17.303 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:17.303 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.303 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:20:17.899 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
raid_bdev= 00:20:17.899 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:20:17.899 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:20:17.899 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:20:17.899 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:17.899 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:17.899 BaseBdev2 00:20:18.157 02:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:20:18.157 02:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:18.157 02:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:18.157 02:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:18.157 02:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:18.157 02:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:18.157 02:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:18.413 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:18.671 [ 00:20:18.671 { 00:20:18.671 "name": "BaseBdev2", 00:20:18.671 "aliases": [ 00:20:18.671 "c8a1682a-1261-11ef-99fd-bfc7c66e2865" 00:20:18.671 ], 00:20:18.671 "product_name": "Malloc disk", 00:20:18.671 "block_size": 512, 00:20:18.671 "num_blocks": 65536, 00:20:18.671 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:18.671 "assigned_rate_limits": { 00:20:18.671 "rw_ios_per_sec": 0, 00:20:18.671 "rw_mbytes_per_sec": 0, 00:20:18.671 "r_mbytes_per_sec": 0, 00:20:18.671 "w_mbytes_per_sec": 0 00:20:18.671 }, 00:20:18.671 "claimed": false, 00:20:18.671 "zoned": false, 00:20:18.671 "supported_io_types": { 00:20:18.671 "read": true, 00:20:18.671 "write": true, 00:20:18.671 "unmap": true, 00:20:18.671 "write_zeroes": true, 00:20:18.671 "flush": true, 00:20:18.671 "reset": true, 00:20:18.671 "compare": false, 00:20:18.671 "compare_and_write": false, 00:20:18.671 "abort": true, 00:20:18.671 "nvme_admin": false, 00:20:18.671 "nvme_io": false 00:20:18.671 }, 00:20:18.671 "memory_domains": [ 00:20:18.671 { 00:20:18.671 "dma_device_id": "system", 00:20:18.671 "dma_device_type": 1 00:20:18.671 }, 00:20:18.671 { 00:20:18.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.671 "dma_device_type": 2 00:20:18.671 } 00:20:18.671 ], 00:20:18.671 "driver_specific": {} 00:20:18.671 } 00:20:18.671 ] 00:20:18.671 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:18.671 02:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:18.671 02:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:18.671 02:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
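The create-and-wait pattern traced here (bdev_malloc_create, bdev_wait_for_examine, then bdev_get_bdevs with a timeout) condenses to a few RPC calls. A rough stand-alone sketch using the same rpc.py path and socket; BaseBdev3 is just the example name from this run:

```bash
#!/usr/bin/env bash
# Rough equivalent of the bdev_malloc_create + waitforbdev sequence traced
# above (an illustration, not the test script itself).
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
name=BaseBdev3   # 32 MB malloc bdev with 512-byte blocks, as in the trace

$rpc bdev_malloc_create 32 512 -b "$name"
$rpc bdev_wait_for_examine

# -t is a timeout in milliseconds; the call fails if the bdev never appears.
$rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null && echo "$name is ready"
```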
00:20:18.929 BaseBdev3 00:20:18.929 02:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:20:18.929 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:18.929 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:18.929 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:18.929 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:18.929 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:18.929 02:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:19.188 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:19.446 [ 00:20:19.446 { 00:20:19.446 "name": "BaseBdev3", 00:20:19.446 "aliases": [ 00:20:19.446 "c92985d8-1261-11ef-99fd-bfc7c66e2865" 00:20:19.446 ], 00:20:19.446 "product_name": "Malloc disk", 00:20:19.446 "block_size": 512, 00:20:19.446 "num_blocks": 65536, 00:20:19.446 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:19.446 "assigned_rate_limits": { 00:20:19.446 "rw_ios_per_sec": 0, 00:20:19.446 "rw_mbytes_per_sec": 0, 00:20:19.446 "r_mbytes_per_sec": 0, 00:20:19.446 "w_mbytes_per_sec": 0 00:20:19.446 }, 00:20:19.446 "claimed": false, 00:20:19.446 "zoned": false, 00:20:19.446 "supported_io_types": { 00:20:19.446 "read": true, 00:20:19.446 "write": true, 00:20:19.446 "unmap": true, 00:20:19.446 "write_zeroes": true, 00:20:19.446 "flush": true, 00:20:19.446 "reset": true, 00:20:19.446 "compare": false, 00:20:19.446 "compare_and_write": false, 00:20:19.446 "abort": true, 00:20:19.446 "nvme_admin": false, 00:20:19.446 "nvme_io": false 00:20:19.446 }, 00:20:19.446 "memory_domains": [ 00:20:19.446 { 00:20:19.446 "dma_device_id": "system", 00:20:19.446 "dma_device_type": 1 00:20:19.446 }, 00:20:19.446 { 00:20:19.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.446 "dma_device_type": 2 00:20:19.446 } 00:20:19.446 ], 00:20:19.446 "driver_specific": {} 00:20:19.446 } 00:20:19.446 ] 00:20:19.446 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:19.446 02:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:19.446 02:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:19.446 02:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:19.762 BaseBdev4 00:20:19.762 02:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:20:19.762 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:20:19.762 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:19.762 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:19.762 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:19.762 02:21:07 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:19.762 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:20.021 02:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:20.279 [ 00:20:20.279 { 00:20:20.279 "name": "BaseBdev4", 00:20:20.279 "aliases": [ 00:20:20.279 "c9b37939-1261-11ef-99fd-bfc7c66e2865" 00:20:20.279 ], 00:20:20.279 "product_name": "Malloc disk", 00:20:20.279 "block_size": 512, 00:20:20.279 "num_blocks": 65536, 00:20:20.279 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:20.279 "assigned_rate_limits": { 00:20:20.279 "rw_ios_per_sec": 0, 00:20:20.279 "rw_mbytes_per_sec": 0, 00:20:20.279 "r_mbytes_per_sec": 0, 00:20:20.279 "w_mbytes_per_sec": 0 00:20:20.279 }, 00:20:20.279 "claimed": false, 00:20:20.279 "zoned": false, 00:20:20.279 "supported_io_types": { 00:20:20.279 "read": true, 00:20:20.279 "write": true, 00:20:20.279 "unmap": true, 00:20:20.280 "write_zeroes": true, 00:20:20.280 "flush": true, 00:20:20.280 "reset": true, 00:20:20.280 "compare": false, 00:20:20.280 "compare_and_write": false, 00:20:20.280 "abort": true, 00:20:20.280 "nvme_admin": false, 00:20:20.280 "nvme_io": false 00:20:20.280 }, 00:20:20.280 "memory_domains": [ 00:20:20.280 { 00:20:20.280 "dma_device_id": "system", 00:20:20.280 "dma_device_type": 1 00:20:20.280 }, 00:20:20.280 { 00:20:20.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.280 "dma_device_type": 2 00:20:20.280 } 00:20:20.280 ], 00:20:20.280 "driver_specific": {} 00:20:20.280 } 00:20:20.280 ] 00:20:20.280 02:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:20.280 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:20.280 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:20.280 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:20.539 [2024-05-15 02:21:08.448506] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:20.539 [2024-05-15 02:21:08.448573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:20.539 [2024-05-15 02:21:08.448584] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.539 [2024-05-15 02:21:08.449059] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:20.539 [2024-05-15 02:21:08.449079] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # 
local strip_size=0 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.539 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.797 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.797 "name": "Existed_Raid", 00:20:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.797 "strip_size_kb": 0, 00:20:20.797 "state": "configuring", 00:20:20.797 "raid_level": "raid1", 00:20:20.797 "superblock": false, 00:20:20.797 "num_base_bdevs": 4, 00:20:20.797 "num_base_bdevs_discovered": 3, 00:20:20.797 "num_base_bdevs_operational": 4, 00:20:20.797 "base_bdevs_list": [ 00:20:20.797 { 00:20:20.797 "name": "BaseBdev1", 00:20:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.797 "is_configured": false, 00:20:20.797 "data_offset": 0, 00:20:20.797 "data_size": 0 00:20:20.797 }, 00:20:20.797 { 00:20:20.797 "name": "BaseBdev2", 00:20:20.797 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:20.797 "is_configured": true, 00:20:20.797 "data_offset": 0, 00:20:20.797 "data_size": 65536 00:20:20.797 }, 00:20:20.797 { 00:20:20.797 "name": "BaseBdev3", 00:20:20.797 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:20.797 "is_configured": true, 00:20:20.797 "data_offset": 0, 00:20:20.797 "data_size": 65536 00:20:20.797 }, 00:20:20.797 { 00:20:20.797 "name": "BaseBdev4", 00:20:20.797 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:20.797 "is_configured": true, 00:20:20.797 "data_offset": 0, 00:20:20.797 "data_size": 65536 00:20:20.797 } 00:20:20.797 ] 00:20:20.797 }' 00:20:20.797 02:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.797 02:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:21.365 [2024-05-15 02:21:09.336550] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
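Because BaseBdev1 does not exist yet when Existed_Raid is created, the volume stays in the "configuring" state with three of four members discovered, as the JSON above shows. A hand-run sketch of the same create-then-inspect sequence (illustrative only; names and paths are taken from this run):

```bash
#!/usr/bin/env bash
# Illustrative sketch of the create-then-inspect sequence seen above.
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# BaseBdev1 is intentionally absent here, so the raid cannot go online yet.
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' \
     -n Existed_Raid

# The array reports "configuring" until the missing member shows up.
$rpc bdev_raid_get_bdevs all \
  | jq '.[] | select(.name == "Existed_Raid")
        | {state, num_base_bdevs_discovered,
           missing: [.base_bdevs_list[] | select(.is_configured == false).name]}'
```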
00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.365 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.937 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:21.937 "name": "Existed_Raid", 00:20:21.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.937 "strip_size_kb": 0, 00:20:21.937 "state": "configuring", 00:20:21.937 "raid_level": "raid1", 00:20:21.937 "superblock": false, 00:20:21.937 "num_base_bdevs": 4, 00:20:21.937 "num_base_bdevs_discovered": 2, 00:20:21.937 "num_base_bdevs_operational": 4, 00:20:21.937 "base_bdevs_list": [ 00:20:21.937 { 00:20:21.937 "name": "BaseBdev1", 00:20:21.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.937 "is_configured": false, 00:20:21.937 "data_offset": 0, 00:20:21.937 "data_size": 0 00:20:21.937 }, 00:20:21.937 { 00:20:21.937 "name": null, 00:20:21.937 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:21.937 "is_configured": false, 00:20:21.937 "data_offset": 0, 00:20:21.937 "data_size": 65536 00:20:21.937 }, 00:20:21.937 { 00:20:21.937 "name": "BaseBdev3", 00:20:21.937 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:21.937 "is_configured": true, 00:20:21.937 "data_offset": 0, 00:20:21.937 "data_size": 65536 00:20:21.937 }, 00:20:21.937 { 00:20:21.937 "name": "BaseBdev4", 00:20:21.937 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:21.937 "is_configured": true, 00:20:21.937 "data_offset": 0, 00:20:21.937 "data_size": 65536 00:20:21.937 } 00:20:21.937 ] 00:20:21.937 }' 00:20:21.937 02:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:21.937 02:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.195 02:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.195 02:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:22.452 02:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:20:22.452 02:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:22.711 [2024-05-15 02:21:10.480749] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.711 BaseBdev1 00:20:22.711 02:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:20:22.711 02:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:22.711 02:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:22.711 02:21:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:22.711 02:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:22.711 02:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:22.711 02:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:22.969 02:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:23.227 [ 00:20:23.227 { 00:20:23.227 "name": "BaseBdev1", 00:20:23.227 "aliases": [ 00:20:23.227 "cb5d8505-1261-11ef-99fd-bfc7c66e2865" 00:20:23.227 ], 00:20:23.227 "product_name": "Malloc disk", 00:20:23.227 "block_size": 512, 00:20:23.227 "num_blocks": 65536, 00:20:23.227 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:23.227 "assigned_rate_limits": { 00:20:23.227 "rw_ios_per_sec": 0, 00:20:23.227 "rw_mbytes_per_sec": 0, 00:20:23.227 "r_mbytes_per_sec": 0, 00:20:23.227 "w_mbytes_per_sec": 0 00:20:23.227 }, 00:20:23.227 "claimed": true, 00:20:23.227 "claim_type": "exclusive_write", 00:20:23.227 "zoned": false, 00:20:23.227 "supported_io_types": { 00:20:23.227 "read": true, 00:20:23.227 "write": true, 00:20:23.227 "unmap": true, 00:20:23.227 "write_zeroes": true, 00:20:23.227 "flush": true, 00:20:23.227 "reset": true, 00:20:23.227 "compare": false, 00:20:23.227 "compare_and_write": false, 00:20:23.227 "abort": true, 00:20:23.227 "nvme_admin": false, 00:20:23.227 "nvme_io": false 00:20:23.227 }, 00:20:23.227 "memory_domains": [ 00:20:23.227 { 00:20:23.227 "dma_device_id": "system", 00:20:23.227 "dma_device_type": 1 00:20:23.227 }, 00:20:23.227 { 00:20:23.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.227 "dma_device_type": 2 00:20:23.227 } 00:20:23.227 ], 00:20:23.227 "driver_specific": {} 00:20:23.227 } 00:20:23.227 ] 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.227 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.227 02:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.486 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.486 "name": "Existed_Raid", 00:20:23.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.486 "strip_size_kb": 0, 00:20:23.486 "state": "configuring", 00:20:23.486 "raid_level": "raid1", 00:20:23.486 "superblock": false, 00:20:23.486 "num_base_bdevs": 4, 00:20:23.486 "num_base_bdevs_discovered": 3, 00:20:23.486 "num_base_bdevs_operational": 4, 00:20:23.486 "base_bdevs_list": [ 00:20:23.486 { 00:20:23.486 "name": "BaseBdev1", 00:20:23.486 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:23.486 "is_configured": true, 00:20:23.486 "data_offset": 0, 00:20:23.486 "data_size": 65536 00:20:23.486 }, 00:20:23.486 { 00:20:23.486 "name": null, 00:20:23.486 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:23.486 "is_configured": false, 00:20:23.486 "data_offset": 0, 00:20:23.486 "data_size": 65536 00:20:23.486 }, 00:20:23.486 { 00:20:23.486 "name": "BaseBdev3", 00:20:23.486 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:23.486 "is_configured": true, 00:20:23.486 "data_offset": 0, 00:20:23.486 "data_size": 65536 00:20:23.486 }, 00:20:23.486 { 00:20:23.486 "name": "BaseBdev4", 00:20:23.486 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:23.486 "is_configured": true, 00:20:23.486 "data_offset": 0, 00:20:23.486 "data_size": 65536 00:20:23.486 } 00:20:23.486 ] 00:20:23.486 }' 00:20:23.486 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.486 02:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.745 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.745 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:24.003 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:24.003 02:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:24.261 [2024-05-15 02:21:12.100690] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.261 02:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.261 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.519 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.519 "name": "Existed_Raid", 00:20:24.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.519 "strip_size_kb": 0, 00:20:24.519 "state": "configuring", 00:20:24.519 "raid_level": "raid1", 00:20:24.519 "superblock": false, 00:20:24.519 "num_base_bdevs": 4, 00:20:24.519 "num_base_bdevs_discovered": 2, 00:20:24.519 "num_base_bdevs_operational": 4, 00:20:24.519 "base_bdevs_list": [ 00:20:24.519 { 00:20:24.519 "name": "BaseBdev1", 00:20:24.519 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:24.519 "is_configured": true, 00:20:24.519 "data_offset": 0, 00:20:24.519 "data_size": 65536 00:20:24.519 }, 00:20:24.519 { 00:20:24.519 "name": null, 00:20:24.519 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:24.519 "is_configured": false, 00:20:24.519 "data_offset": 0, 00:20:24.519 "data_size": 65536 00:20:24.519 }, 00:20:24.519 { 00:20:24.519 "name": null, 00:20:24.519 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:24.519 "is_configured": false, 00:20:24.519 "data_offset": 0, 00:20:24.519 "data_size": 65536 00:20:24.519 }, 00:20:24.519 { 00:20:24.519 "name": "BaseBdev4", 00:20:24.519 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:24.519 "is_configured": true, 00:20:24.519 "data_offset": 0, 00:20:24.519 "data_size": 65536 00:20:24.519 } 00:20:24.519 ] 00:20:24.519 }' 00:20:24.519 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.519 02:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.086 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.086 02:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:25.344 [2024-05-15 02:21:13.340764] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:25.344 02:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.344 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.607 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.607 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.872 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.872 "name": "Existed_Raid", 00:20:25.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.872 "strip_size_kb": 0, 00:20:25.872 "state": "configuring", 00:20:25.872 "raid_level": "raid1", 00:20:25.872 "superblock": false, 00:20:25.872 "num_base_bdevs": 4, 00:20:25.872 "num_base_bdevs_discovered": 3, 00:20:25.872 "num_base_bdevs_operational": 4, 00:20:25.872 "base_bdevs_list": [ 00:20:25.872 { 00:20:25.872 "name": "BaseBdev1", 00:20:25.872 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:25.872 "is_configured": true, 00:20:25.872 "data_offset": 0, 00:20:25.872 "data_size": 65536 00:20:25.872 }, 00:20:25.872 { 00:20:25.872 "name": null, 00:20:25.872 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:25.872 "is_configured": false, 00:20:25.872 "data_offset": 0, 00:20:25.872 "data_size": 65536 00:20:25.872 }, 00:20:25.872 { 00:20:25.872 "name": "BaseBdev3", 00:20:25.872 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:25.872 "is_configured": true, 00:20:25.872 "data_offset": 0, 00:20:25.872 "data_size": 65536 00:20:25.872 }, 00:20:25.872 { 00:20:25.872 "name": "BaseBdev4", 00:20:25.872 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:25.872 "is_configured": true, 00:20:25.872 "data_offset": 0, 00:20:25.872 "data_size": 65536 00:20:25.872 } 00:20:25.872 ] 00:20:25.872 }' 00:20:25.872 02:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.872 02:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.135 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.135 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:26.393 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:20:26.393 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:26.651 [2024-05-15 02:21:14.496826] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid1 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.651 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.910 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.910 "name": "Existed_Raid", 00:20:26.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.910 "strip_size_kb": 0, 00:20:26.910 "state": "configuring", 00:20:26.910 "raid_level": "raid1", 00:20:26.910 "superblock": false, 00:20:26.910 "num_base_bdevs": 4, 00:20:26.910 "num_base_bdevs_discovered": 2, 00:20:26.910 "num_base_bdevs_operational": 4, 00:20:26.910 "base_bdevs_list": [ 00:20:26.910 { 00:20:26.910 "name": null, 00:20:26.910 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:26.910 "is_configured": false, 00:20:26.910 "data_offset": 0, 00:20:26.910 "data_size": 65536 00:20:26.910 }, 00:20:26.910 { 00:20:26.910 "name": null, 00:20:26.910 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:26.910 "is_configured": false, 00:20:26.910 "data_offset": 0, 00:20:26.910 "data_size": 65536 00:20:26.910 }, 00:20:26.910 { 00:20:26.910 "name": "BaseBdev3", 00:20:26.910 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:26.910 "is_configured": true, 00:20:26.910 "data_offset": 0, 00:20:26.910 "data_size": 65536 00:20:26.910 }, 00:20:26.910 { 00:20:26.910 "name": "BaseBdev4", 00:20:26.910 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:26.910 "is_configured": true, 00:20:26.910 "data_offset": 0, 00:20:26.910 "data_size": 65536 00:20:26.910 } 00:20:26.910 ] 00:20:26.910 }' 00:20:26.910 02:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.910 02:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.167 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.168 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:27.733 [2024-05-15 02:21:15.717752] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:27.733 02:21:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.733 02:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.300 02:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.300 "name": "Existed_Raid", 00:20:28.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.300 "strip_size_kb": 0, 00:20:28.300 "state": "configuring", 00:20:28.300 "raid_level": "raid1", 00:20:28.300 "superblock": false, 00:20:28.300 "num_base_bdevs": 4, 00:20:28.300 "num_base_bdevs_discovered": 3, 00:20:28.300 "num_base_bdevs_operational": 4, 00:20:28.300 "base_bdevs_list": [ 00:20:28.300 { 00:20:28.300 "name": null, 00:20:28.300 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:28.300 "is_configured": false, 00:20:28.300 "data_offset": 0, 00:20:28.300 "data_size": 65536 00:20:28.300 }, 00:20:28.300 { 00:20:28.300 "name": "BaseBdev2", 00:20:28.300 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:28.300 "is_configured": true, 00:20:28.300 "data_offset": 0, 00:20:28.300 "data_size": 65536 00:20:28.300 }, 00:20:28.300 { 00:20:28.300 "name": "BaseBdev3", 00:20:28.300 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:28.300 "is_configured": true, 00:20:28.300 "data_offset": 0, 00:20:28.300 "data_size": 65536 00:20:28.300 }, 00:20:28.300 { 00:20:28.300 "name": "BaseBdev4", 00:20:28.300 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:28.300 "is_configured": true, 00:20:28.300 "data_offset": 0, 00:20:28.300 "data_size": 65536 00:20:28.300 } 00:20:28.300 ] 00:20:28.300 }' 00:20:28.300 02:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.300 02:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.559 02:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.559 02:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:28.818 02:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:20:28.818 02:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:28.818 02:21:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.818 02:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u cb5d8505-1261-11ef-99fd-bfc7c66e2865 00:20:29.076 [2024-05-15 02:21:17.013898] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:29.076 [2024-05-15 02:21:17.013928] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ceebf00 00:20:29.076 [2024-05-15 02:21:17.013932] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:29.076 [2024-05-15 02:21:17.013969] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cf4ee20 00:20:29.076 [2024-05-15 02:21:17.014031] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ceebf00 00:20:29.076 [2024-05-15 02:21:17.014034] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ceebf00 00:20:29.076 [2024-05-15 02:21:17.014075] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.076 NewBaseBdev 00:20:29.076 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:20:29.076 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:29.076 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:29.076 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:29.076 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:29.076 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:29.076 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.334 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:29.592 [ 00:20:29.592 { 00:20:29.592 "name": "NewBaseBdev", 00:20:29.592 "aliases": [ 00:20:29.592 "cb5d8505-1261-11ef-99fd-bfc7c66e2865" 00:20:29.592 ], 00:20:29.592 "product_name": "Malloc disk", 00:20:29.592 "block_size": 512, 00:20:29.592 "num_blocks": 65536, 00:20:29.592 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:29.592 "assigned_rate_limits": { 00:20:29.592 "rw_ios_per_sec": 0, 00:20:29.592 "rw_mbytes_per_sec": 0, 00:20:29.592 "r_mbytes_per_sec": 0, 00:20:29.592 "w_mbytes_per_sec": 0 00:20:29.592 }, 00:20:29.592 "claimed": true, 00:20:29.592 "claim_type": "exclusive_write", 00:20:29.592 "zoned": false, 00:20:29.592 "supported_io_types": { 00:20:29.592 "read": true, 00:20:29.592 "write": true, 00:20:29.592 "unmap": true, 00:20:29.592 "write_zeroes": true, 00:20:29.592 "flush": true, 00:20:29.592 "reset": true, 00:20:29.592 "compare": false, 00:20:29.592 "compare_and_write": false, 00:20:29.592 "abort": true, 00:20:29.592 "nvme_admin": false, 00:20:29.592 "nvme_io": false 00:20:29.592 }, 00:20:29.592 "memory_domains": [ 00:20:29.592 { 00:20:29.592 "dma_device_id": "system", 00:20:29.592 "dma_device_type": 1 00:20:29.592 }, 00:20:29.592 { 00:20:29.592 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.592 "dma_device_type": 2 00:20:29.592 } 00:20:29.592 ], 00:20:29.592 "driver_specific": {} 00:20:29.592 } 00:20:29.592 ] 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.592 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.916 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.916 "name": "Existed_Raid", 00:20:29.916 "uuid": "cf426cf4-1261-11ef-99fd-bfc7c66e2865", 00:20:29.916 "strip_size_kb": 0, 00:20:29.916 "state": "online", 00:20:29.916 "raid_level": "raid1", 00:20:29.916 "superblock": false, 00:20:29.916 "num_base_bdevs": 4, 00:20:29.916 "num_base_bdevs_discovered": 4, 00:20:29.916 "num_base_bdevs_operational": 4, 00:20:29.916 "base_bdevs_list": [ 00:20:29.916 { 00:20:29.916 "name": "NewBaseBdev", 00:20:29.916 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:29.916 "is_configured": true, 00:20:29.916 "data_offset": 0, 00:20:29.916 "data_size": 65536 00:20:29.916 }, 00:20:29.916 { 00:20:29.916 "name": "BaseBdev2", 00:20:29.916 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:29.916 "is_configured": true, 00:20:29.916 "data_offset": 0, 00:20:29.916 "data_size": 65536 00:20:29.916 }, 00:20:29.916 { 00:20:29.916 "name": "BaseBdev3", 00:20:29.916 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:29.916 "is_configured": true, 00:20:29.916 "data_offset": 0, 00:20:29.916 "data_size": 65536 00:20:29.916 }, 00:20:29.916 { 00:20:29.916 "name": "BaseBdev4", 00:20:29.916 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:29.916 "is_configured": true, 00:20:29.916 "data_offset": 0, 00:20:29.916 "data_size": 65536 00:20:29.916 } 00:20:29.916 ] 00:20:29.916 }' 00:20:29.916 02:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.916 02:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.175 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:20:30.175 02:21:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:30.175 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:30.175 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:30.175 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:30.175 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:30.175 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:30.175 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:30.433 [2024-05-15 02:21:18.273908] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.433 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:30.433 "name": "Existed_Raid", 00:20:30.433 "aliases": [ 00:20:30.433 "cf426cf4-1261-11ef-99fd-bfc7c66e2865" 00:20:30.433 ], 00:20:30.433 "product_name": "Raid Volume", 00:20:30.433 "block_size": 512, 00:20:30.433 "num_blocks": 65536, 00:20:30.433 "uuid": "cf426cf4-1261-11ef-99fd-bfc7c66e2865", 00:20:30.433 "assigned_rate_limits": { 00:20:30.433 "rw_ios_per_sec": 0, 00:20:30.433 "rw_mbytes_per_sec": 0, 00:20:30.433 "r_mbytes_per_sec": 0, 00:20:30.433 "w_mbytes_per_sec": 0 00:20:30.433 }, 00:20:30.433 "claimed": false, 00:20:30.433 "zoned": false, 00:20:30.433 "supported_io_types": { 00:20:30.433 "read": true, 00:20:30.433 "write": true, 00:20:30.433 "unmap": false, 00:20:30.433 "write_zeroes": true, 00:20:30.433 "flush": false, 00:20:30.433 "reset": true, 00:20:30.433 "compare": false, 00:20:30.433 "compare_and_write": false, 00:20:30.433 "abort": false, 00:20:30.433 "nvme_admin": false, 00:20:30.433 "nvme_io": false 00:20:30.433 }, 00:20:30.433 "memory_domains": [ 00:20:30.433 { 00:20:30.433 "dma_device_id": "system", 00:20:30.433 "dma_device_type": 1 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.433 "dma_device_type": 2 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "dma_device_id": "system", 00:20:30.433 "dma_device_type": 1 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.433 "dma_device_type": 2 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "dma_device_id": "system", 00:20:30.433 "dma_device_type": 1 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.433 "dma_device_type": 2 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "dma_device_id": "system", 00:20:30.433 "dma_device_type": 1 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.433 "dma_device_type": 2 00:20:30.433 } 00:20:30.433 ], 00:20:30.433 "driver_specific": { 00:20:30.433 "raid": { 00:20:30.433 "uuid": "cf426cf4-1261-11ef-99fd-bfc7c66e2865", 00:20:30.433 "strip_size_kb": 0, 00:20:30.433 "state": "online", 00:20:30.433 "raid_level": "raid1", 00:20:30.433 "superblock": false, 00:20:30.433 "num_base_bdevs": 4, 00:20:30.433 "num_base_bdevs_discovered": 4, 00:20:30.433 "num_base_bdevs_operational": 4, 00:20:30.433 "base_bdevs_list": [ 00:20:30.433 { 00:20:30.433 "name": "NewBaseBdev", 00:20:30.433 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:30.433 "is_configured": true, 00:20:30.433 "data_offset": 0, 00:20:30.433 "data_size": 
65536 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "name": "BaseBdev2", 00:20:30.433 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:30.433 "is_configured": true, 00:20:30.433 "data_offset": 0, 00:20:30.433 "data_size": 65536 00:20:30.433 }, 00:20:30.433 { 00:20:30.433 "name": "BaseBdev3", 00:20:30.433 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:30.433 "is_configured": true, 00:20:30.433 "data_offset": 0, 00:20:30.434 "data_size": 65536 00:20:30.434 }, 00:20:30.434 { 00:20:30.434 "name": "BaseBdev4", 00:20:30.434 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:30.434 "is_configured": true, 00:20:30.434 "data_offset": 0, 00:20:30.434 "data_size": 65536 00:20:30.434 } 00:20:30.434 ] 00:20:30.434 } 00:20:30.434 } 00:20:30.434 }' 00:20:30.434 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:30.434 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:20:30.434 BaseBdev2 00:20:30.434 BaseBdev3 00:20:30.434 BaseBdev4' 00:20:30.434 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:30.434 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:30.434 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:30.691 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:30.691 "name": "NewBaseBdev", 00:20:30.691 "aliases": [ 00:20:30.691 "cb5d8505-1261-11ef-99fd-bfc7c66e2865" 00:20:30.691 ], 00:20:30.691 "product_name": "Malloc disk", 00:20:30.691 "block_size": 512, 00:20:30.691 "num_blocks": 65536, 00:20:30.691 "uuid": "cb5d8505-1261-11ef-99fd-bfc7c66e2865", 00:20:30.691 "assigned_rate_limits": { 00:20:30.691 "rw_ios_per_sec": 0, 00:20:30.691 "rw_mbytes_per_sec": 0, 00:20:30.691 "r_mbytes_per_sec": 0, 00:20:30.691 "w_mbytes_per_sec": 0 00:20:30.691 }, 00:20:30.692 "claimed": true, 00:20:30.692 "claim_type": "exclusive_write", 00:20:30.692 "zoned": false, 00:20:30.692 "supported_io_types": { 00:20:30.692 "read": true, 00:20:30.692 "write": true, 00:20:30.692 "unmap": true, 00:20:30.692 "write_zeroes": true, 00:20:30.692 "flush": true, 00:20:30.692 "reset": true, 00:20:30.692 "compare": false, 00:20:30.692 "compare_and_write": false, 00:20:30.692 "abort": true, 00:20:30.692 "nvme_admin": false, 00:20:30.692 "nvme_io": false 00:20:30.692 }, 00:20:30.692 "memory_domains": [ 00:20:30.692 { 00:20:30.692 "dma_device_id": "system", 00:20:30.692 "dma_device_type": 1 00:20:30.692 }, 00:20:30.692 { 00:20:30.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.692 "dma_device_type": 2 00:20:30.692 } 00:20:30.692 ], 00:20:30.692 "driver_specific": {} 00:20:30.692 }' 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
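The property walk traced here dumps the raid volume once and then probes each configured member with the same jq filters (.block_size, .md_size, .md_interleave, .dif_type). A condensed, illustrative equivalent of those checks:

```bash
#!/usr/bin/env bash
# Rough outline of the property checks traced above (illustration only).
set -e   # abort on the first failed check
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')

# Inspect every configured member of the raid individually.
for name in $(jq -r '.driver_specific.raid.base_bdevs_list[]
                     | select(.is_configured == true).name' <<< "$raid_info"); do
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]
    [[ $(jq .md_size       <<< "$info") == null ]]
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
done
```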
00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:30.692 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:30.950 "name": "BaseBdev2", 00:20:30.950 "aliases": [ 00:20:30.950 "c8a1682a-1261-11ef-99fd-bfc7c66e2865" 00:20:30.950 ], 00:20:30.950 "product_name": "Malloc disk", 00:20:30.950 "block_size": 512, 00:20:30.950 "num_blocks": 65536, 00:20:30.950 "uuid": "c8a1682a-1261-11ef-99fd-bfc7c66e2865", 00:20:30.950 "assigned_rate_limits": { 00:20:30.950 "rw_ios_per_sec": 0, 00:20:30.950 "rw_mbytes_per_sec": 0, 00:20:30.950 "r_mbytes_per_sec": 0, 00:20:30.950 "w_mbytes_per_sec": 0 00:20:30.950 }, 00:20:30.950 "claimed": true, 00:20:30.950 "claim_type": "exclusive_write", 00:20:30.950 "zoned": false, 00:20:30.950 "supported_io_types": { 00:20:30.950 "read": true, 00:20:30.950 "write": true, 00:20:30.950 "unmap": true, 00:20:30.950 "write_zeroes": true, 00:20:30.950 "flush": true, 00:20:30.950 "reset": true, 00:20:30.950 "compare": false, 00:20:30.950 "compare_and_write": false, 00:20:30.950 "abort": true, 00:20:30.950 "nvme_admin": false, 00:20:30.950 "nvme_io": false 00:20:30.950 }, 00:20:30.950 "memory_domains": [ 00:20:30.950 { 00:20:30.950 "dma_device_id": "system", 00:20:30.950 "dma_device_type": 1 00:20:30.950 }, 00:20:30.950 { 00:20:30.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.950 "dma_device_type": 2 00:20:30.950 } 00:20:30.950 ], 00:20:30.950 "driver_specific": {} 00:20:30.950 }' 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:30.950 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:31.208 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:31.208 02:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:31.466 "name": "BaseBdev3", 00:20:31.466 "aliases": [ 00:20:31.466 "c92985d8-1261-11ef-99fd-bfc7c66e2865" 00:20:31.466 ], 00:20:31.466 "product_name": "Malloc disk", 00:20:31.466 "block_size": 512, 00:20:31.466 "num_blocks": 65536, 00:20:31.466 "uuid": "c92985d8-1261-11ef-99fd-bfc7c66e2865", 00:20:31.466 "assigned_rate_limits": { 00:20:31.466 "rw_ios_per_sec": 0, 00:20:31.466 "rw_mbytes_per_sec": 0, 00:20:31.466 "r_mbytes_per_sec": 0, 00:20:31.466 "w_mbytes_per_sec": 0 00:20:31.466 }, 00:20:31.466 "claimed": true, 00:20:31.466 "claim_type": "exclusive_write", 00:20:31.466 "zoned": false, 00:20:31.466 "supported_io_types": { 00:20:31.466 "read": true, 00:20:31.466 "write": true, 00:20:31.466 "unmap": true, 00:20:31.466 "write_zeroes": true, 00:20:31.466 "flush": true, 00:20:31.466 "reset": true, 00:20:31.466 "compare": false, 00:20:31.466 "compare_and_write": false, 00:20:31.466 "abort": true, 00:20:31.466 "nvme_admin": false, 00:20:31.466 "nvme_io": false 00:20:31.466 }, 00:20:31.466 "memory_domains": [ 00:20:31.466 { 00:20:31.466 "dma_device_id": "system", 00:20:31.466 "dma_device_type": 1 00:20:31.466 }, 00:20:31.466 { 00:20:31.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.466 "dma_device_type": 2 00:20:31.466 } 00:20:31.466 ], 00:20:31.466 "driver_specific": {} 00:20:31.466 }' 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:31.466 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev4 00:20:31.723 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:31.723 "name": "BaseBdev4", 00:20:31.723 "aliases": [ 00:20:31.723 "c9b37939-1261-11ef-99fd-bfc7c66e2865" 00:20:31.723 ], 00:20:31.723 "product_name": "Malloc disk", 00:20:31.723 "block_size": 512, 00:20:31.723 "num_blocks": 65536, 00:20:31.723 "uuid": "c9b37939-1261-11ef-99fd-bfc7c66e2865", 00:20:31.723 "assigned_rate_limits": { 00:20:31.723 "rw_ios_per_sec": 0, 00:20:31.723 "rw_mbytes_per_sec": 0, 00:20:31.723 "r_mbytes_per_sec": 0, 00:20:31.723 "w_mbytes_per_sec": 0 00:20:31.723 }, 00:20:31.723 "claimed": true, 00:20:31.723 "claim_type": "exclusive_write", 00:20:31.723 "zoned": false, 00:20:31.723 "supported_io_types": { 00:20:31.723 "read": true, 00:20:31.723 "write": true, 00:20:31.723 "unmap": true, 00:20:31.723 "write_zeroes": true, 00:20:31.723 "flush": true, 00:20:31.723 "reset": true, 00:20:31.723 "compare": false, 00:20:31.723 "compare_and_write": false, 00:20:31.723 "abort": true, 00:20:31.723 "nvme_admin": false, 00:20:31.723 "nvme_io": false 00:20:31.723 }, 00:20:31.723 "memory_domains": [ 00:20:31.723 { 00:20:31.723 "dma_device_id": "system", 00:20:31.724 "dma_device_type": 1 00:20:31.724 }, 00:20:31.724 { 00:20:31.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.724 "dma_device_type": 2 00:20:31.724 } 00:20:31.724 ], 00:20:31.724 "driver_specific": {} 00:20:31.724 }' 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:31.724 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:31.982 [2024-05-15 02:21:19.973966] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:31.982 [2024-05-15 02:21:19.973995] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.982 [2024-05-15 02:21:19.974017] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.982 [2024-05-15 02:21:19.974097] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.982 [2024-05-15 02:21:19.974102] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ceebf00 name Existed_Raid, state offline 
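The per-bdev checks traced at bdev_raid.sh@201-209 above boil down to: fetch the raid bdev once, pull the names of its configured base bdevs with jq, then query each base bdev and compare a handful of properties. A minimal sketch of that loop, reconstructed from the trace rather than copied from the bdev_raid.sh source (the rpc variable name is just shorthand for the rpc.py invocation and socket seen in the log):

rpc='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
raid_bdev_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_names=$(echo "$raid_bdev_info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
for name in $base_bdev_names; do
	base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
	# every base bdev of the raid volume is a plain 512-byte-block malloc disk with no metadata or DIF
	[[ $(echo "$base_bdev_info" | jq .block_size) == 512 ]]
	[[ $(echo "$base_bdev_info" | jq .md_size) == null ]]
	[[ $(echo "$base_bdev_info" | jq .md_interleave) == null ]]
	[[ $(echo "$base_bdev_info" | jq .dif_type) == null ]]
done

Once these checks pass, the test tears the volume down with bdev_raid_delete Existed_Raid, which is what produces the raid_bdev_deconfigure/raid_bdev_cleanup debug lines ending in "state offline" above.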
00:20:31.982 02:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 61334 00:20:31.982 02:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 61334 ']' 00:20:31.982 02:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 61334 00:20:31.982 02:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:20:31.982 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:20:32.241 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 61334 00:20:32.241 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:20:32.241 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:20:32.241 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:20:32.241 killing process with pid 61334 00:20:32.241 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61334' 00:20:32.241 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 61334 00:20:32.241 [2024-05-15 02:21:20.009109] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.241 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 61334 00:20:32.242 [2024-05-15 02:21:20.028397] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:20:32.242 00:20:32.242 real 0m28.644s 00:20:32.242 user 0m52.838s 00:20:32.242 sys 0m3.660s 00:20:32.242 ************************************ 00:20:32.242 END TEST raid_state_function_test 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.242 ************************************ 00:20:32.242 02:21:20 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:20:32.242 02:21:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:32.242 02:21:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:32.242 02:21:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.242 ************************************ 00:20:32.242 START TEST raid_state_function_test_sb 00:20:32.242 ************************************ 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=62157 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 62157' 00:20:32.242 Process raid pid: 62157 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 62157 /var/tmp/spdk-raid.sock 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 62157 ']' 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:32.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
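Above, run_test starts raid_state_function_test_sb, which is the same raid_state_function_test body invoked with raid1, 4 base bdevs and superblock=true, so superblock_create_arg becomes -s and the later bdev_raid_create call carries it. A dedicated bdev_svc app (pid 62157 here) is launched with the bdev_raid debug log flag and the test blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait step, reconstructed from the trace (the real waitforlisten helper in autotest_common.sh may differ in detail):

sock=/var/tmp/spdk-raid.sock
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
while ! $rpc rpc_get_methods > /dev/null 2>&1; do   # poll until the app serves RPCs on the socket
	sleep 0.1
done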
00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:32.242 02:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.242 [2024-05-15 02:21:20.242025] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:32.242 [2024-05-15 02:21:20.242357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:20:33.177 EAL: TSC is not safe to use in SMP mode 00:20:33.177 EAL: TSC is not invariant 00:20:33.177 [2024-05-15 02:21:21.039751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.177 [2024-05-15 02:21:21.137966] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:33.177 [2024-05-15 02:21:21.140609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.177 [2024-05-15 02:21:21.141579] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.177 [2024-05-15 02:21:21.141595] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.436 02:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:33.436 02:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:20:33.436 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:33.695 [2024-05-15 02:21:21.513828] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:33.695 [2024-05-15 02:21:21.513895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:33.695 [2024-05-15 02:21:21.513900] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:33.695 [2024-05-15 02:21:21.513909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:33.695 [2024-05-15 02:21:21.513912] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:33.695 [2024-05-15 02:21:21.513920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:33.695 [2024-05-15 02:21:21.513923] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:33.695 [2024-05-15 02:21:21.513929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.695 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.954 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.954 "name": "Existed_Raid", 00:20:33.954 "uuid": "d1f10d9d-1261-11ef-99fd-bfc7c66e2865", 00:20:33.954 "strip_size_kb": 0, 00:20:33.954 "state": "configuring", 00:20:33.954 "raid_level": "raid1", 00:20:33.954 "superblock": true, 00:20:33.954 "num_base_bdevs": 4, 00:20:33.954 "num_base_bdevs_discovered": 0, 00:20:33.954 "num_base_bdevs_operational": 4, 00:20:33.954 "base_bdevs_list": [ 00:20:33.954 { 00:20:33.954 "name": "BaseBdev1", 00:20:33.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.954 "is_configured": false, 00:20:33.954 "data_offset": 0, 00:20:33.954 "data_size": 0 00:20:33.954 }, 00:20:33.954 { 00:20:33.954 "name": "BaseBdev2", 00:20:33.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.954 "is_configured": false, 00:20:33.954 "data_offset": 0, 00:20:33.954 "data_size": 0 00:20:33.954 }, 00:20:33.954 { 00:20:33.954 "name": "BaseBdev3", 00:20:33.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.954 "is_configured": false, 00:20:33.954 "data_offset": 0, 00:20:33.954 "data_size": 0 00:20:33.954 }, 00:20:33.954 { 00:20:33.954 "name": "BaseBdev4", 00:20:33.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.954 "is_configured": false, 00:20:33.954 "data_offset": 0, 00:20:33.954 "data_size": 0 00:20:33.954 } 00:20:33.954 ] 00:20:33.954 }' 00:20:33.954 02:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.954 02:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.213 02:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:34.471 [2024-05-15 02:21:22.409851] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.471 [2024-05-15 02:21:22.409881] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828506500 name Existed_Raid, state configuring 00:20:34.471 02:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:34.790 [2024-05-15 02:21:22.709890] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:34.790 [2024-05-15 02:21:22.709945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:34.790 [2024-05-15 02:21:22.709949] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:20:34.790 [2024-05-15 02:21:22.709957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.790 [2024-05-15 02:21:22.709960] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:34.790 [2024-05-15 02:21:22.709967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:34.790 [2024-05-15 02:21:22.709987] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:34.790 [2024-05-15 02:21:22.709994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:34.790 02:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:35.049 [2024-05-15 02:21:23.034875] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.049 BaseBdev1 00:20:35.049 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:20:35.049 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:35.049 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:35.049 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:35.049 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:35.049 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:35.049 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:35.614 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:35.871 [ 00:20:35.871 { 00:20:35.871 "name": "BaseBdev1", 00:20:35.871 "aliases": [ 00:20:35.871 "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865" 00:20:35.871 ], 00:20:35.871 "product_name": "Malloc disk", 00:20:35.871 "block_size": 512, 00:20:35.871 "num_blocks": 65536, 00:20:35.871 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:35.871 "assigned_rate_limits": { 00:20:35.871 "rw_ios_per_sec": 0, 00:20:35.871 "rw_mbytes_per_sec": 0, 00:20:35.871 "r_mbytes_per_sec": 0, 00:20:35.871 "w_mbytes_per_sec": 0 00:20:35.871 }, 00:20:35.871 "claimed": true, 00:20:35.871 "claim_type": "exclusive_write", 00:20:35.871 "zoned": false, 00:20:35.871 "supported_io_types": { 00:20:35.871 "read": true, 00:20:35.871 "write": true, 00:20:35.871 "unmap": true, 00:20:35.871 "write_zeroes": true, 00:20:35.871 "flush": true, 00:20:35.871 "reset": true, 00:20:35.871 "compare": false, 00:20:35.871 "compare_and_write": false, 00:20:35.871 "abort": true, 00:20:35.871 "nvme_admin": false, 00:20:35.871 "nvme_io": false 00:20:35.871 }, 00:20:35.871 "memory_domains": [ 00:20:35.871 { 00:20:35.871 "dma_device_id": "system", 00:20:35.871 "dma_device_type": 1 00:20:35.871 }, 00:20:35.871 { 00:20:35.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.871 "dma_device_type": 2 00:20:35.871 } 00:20:35.871 ], 00:20:35.871 "driver_specific": {} 00:20:35.871 } 00:20:35.871 ] 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # return 0 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.871 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.872 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.129 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.129 "name": "Existed_Raid", 00:20:36.129 "uuid": "d2a78ecf-1261-11ef-99fd-bfc7c66e2865", 00:20:36.129 "strip_size_kb": 0, 00:20:36.129 "state": "configuring", 00:20:36.129 "raid_level": "raid1", 00:20:36.129 "superblock": true, 00:20:36.129 "num_base_bdevs": 4, 00:20:36.129 "num_base_bdevs_discovered": 1, 00:20:36.129 "num_base_bdevs_operational": 4, 00:20:36.129 "base_bdevs_list": [ 00:20:36.129 { 00:20:36.129 "name": "BaseBdev1", 00:20:36.129 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:36.129 "is_configured": true, 00:20:36.129 "data_offset": 2048, 00:20:36.129 "data_size": 63488 00:20:36.129 }, 00:20:36.129 { 00:20:36.129 "name": "BaseBdev2", 00:20:36.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.129 "is_configured": false, 00:20:36.129 "data_offset": 0, 00:20:36.129 "data_size": 0 00:20:36.129 }, 00:20:36.129 { 00:20:36.129 "name": "BaseBdev3", 00:20:36.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.129 "is_configured": false, 00:20:36.129 "data_offset": 0, 00:20:36.129 "data_size": 0 00:20:36.129 }, 00:20:36.129 { 00:20:36.129 "name": "BaseBdev4", 00:20:36.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.129 "is_configured": false, 00:20:36.129 "data_offset": 0, 00:20:36.129 "data_size": 0 00:20:36.129 } 00:20:36.129 ] 00:20:36.129 }' 00:20:36.129 02:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.129 02:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:36.644 [2024-05-15 02:21:24.613983] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.644 [2024-05-15 02:21:24.614020] 
bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828506500 name Existed_Raid, state configuring 00:20:36.644 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:36.950 [2024-05-15 02:21:24.882012] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.950 [2024-05-15 02:21:24.882714] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.950 [2024-05-15 02:21:24.882760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.950 [2024-05-15 02:21:24.882764] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.950 [2024-05-15 02:21:24.882773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.950 [2024-05-15 02:21:24.882776] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:36.950 [2024-05-15 02:21:24.882783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.950 02:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.208 02:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.208 "name": "Existed_Raid", 00:20:37.208 "uuid": "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865", 00:20:37.208 "strip_size_kb": 0, 00:20:37.208 "state": "configuring", 00:20:37.208 "raid_level": "raid1", 00:20:37.208 "superblock": true, 00:20:37.208 "num_base_bdevs": 4, 00:20:37.208 "num_base_bdevs_discovered": 1, 00:20:37.208 "num_base_bdevs_operational": 4, 00:20:37.208 "base_bdevs_list": [ 00:20:37.208 { 00:20:37.208 "name": 
"BaseBdev1", 00:20:37.208 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:37.208 "is_configured": true, 00:20:37.208 "data_offset": 2048, 00:20:37.208 "data_size": 63488 00:20:37.208 }, 00:20:37.208 { 00:20:37.208 "name": "BaseBdev2", 00:20:37.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.208 "is_configured": false, 00:20:37.208 "data_offset": 0, 00:20:37.208 "data_size": 0 00:20:37.208 }, 00:20:37.208 { 00:20:37.208 "name": "BaseBdev3", 00:20:37.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.208 "is_configured": false, 00:20:37.208 "data_offset": 0, 00:20:37.208 "data_size": 0 00:20:37.208 }, 00:20:37.208 { 00:20:37.208 "name": "BaseBdev4", 00:20:37.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.208 "is_configured": false, 00:20:37.208 "data_offset": 0, 00:20:37.208 "data_size": 0 00:20:37.208 } 00:20:37.208 ] 00:20:37.208 }' 00:20:37.208 02:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.208 02:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.774 02:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:37.774 [2024-05-15 02:21:25.782184] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.774 BaseBdev2 00:20:38.031 02:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:20:38.031 02:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:38.031 02:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:38.031 02:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:38.031 02:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:38.031 02:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:38.031 02:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.031 02:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:38.596 [ 00:20:38.596 { 00:20:38.596 "name": "BaseBdev2", 00:20:38.596 "aliases": [ 00:20:38.596 "d47c55d5-1261-11ef-99fd-bfc7c66e2865" 00:20:38.596 ], 00:20:38.596 "product_name": "Malloc disk", 00:20:38.596 "block_size": 512, 00:20:38.596 "num_blocks": 65536, 00:20:38.596 "uuid": "d47c55d5-1261-11ef-99fd-bfc7c66e2865", 00:20:38.596 "assigned_rate_limits": { 00:20:38.596 "rw_ios_per_sec": 0, 00:20:38.596 "rw_mbytes_per_sec": 0, 00:20:38.596 "r_mbytes_per_sec": 0, 00:20:38.596 "w_mbytes_per_sec": 0 00:20:38.596 }, 00:20:38.596 "claimed": true, 00:20:38.596 "claim_type": "exclusive_write", 00:20:38.596 "zoned": false, 00:20:38.596 "supported_io_types": { 00:20:38.596 "read": true, 00:20:38.596 "write": true, 00:20:38.596 "unmap": true, 00:20:38.596 "write_zeroes": true, 00:20:38.596 "flush": true, 00:20:38.596 "reset": true, 00:20:38.596 "compare": false, 00:20:38.596 "compare_and_write": false, 00:20:38.596 "abort": true, 00:20:38.596 "nvme_admin": false, 00:20:38.596 "nvme_io": false 
00:20:38.596 }, 00:20:38.596 "memory_domains": [ 00:20:38.596 { 00:20:38.596 "dma_device_id": "system", 00:20:38.596 "dma_device_type": 1 00:20:38.596 }, 00:20:38.596 { 00:20:38.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.596 "dma_device_type": 2 00:20:38.596 } 00:20:38.596 ], 00:20:38.596 "driver_specific": {} 00:20:38.596 } 00:20:38.596 ] 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.596 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.853 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:38.853 "name": "Existed_Raid", 00:20:38.853 "uuid": "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865", 00:20:38.853 "strip_size_kb": 0, 00:20:38.853 "state": "configuring", 00:20:38.853 "raid_level": "raid1", 00:20:38.853 "superblock": true, 00:20:38.853 "num_base_bdevs": 4, 00:20:38.853 "num_base_bdevs_discovered": 2, 00:20:38.853 "num_base_bdevs_operational": 4, 00:20:38.853 "base_bdevs_list": [ 00:20:38.853 { 00:20:38.853 "name": "BaseBdev1", 00:20:38.853 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:38.853 "is_configured": true, 00:20:38.853 "data_offset": 2048, 00:20:38.853 "data_size": 63488 00:20:38.853 }, 00:20:38.853 { 00:20:38.853 "name": "BaseBdev2", 00:20:38.853 "uuid": "d47c55d5-1261-11ef-99fd-bfc7c66e2865", 00:20:38.853 "is_configured": true, 00:20:38.853 "data_offset": 2048, 00:20:38.853 "data_size": 63488 00:20:38.853 }, 00:20:38.853 { 00:20:38.854 "name": "BaseBdev3", 00:20:38.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.854 "is_configured": false, 00:20:38.854 "data_offset": 0, 00:20:38.854 "data_size": 0 00:20:38.854 }, 00:20:38.854 { 00:20:38.854 "name": "BaseBdev4", 00:20:38.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.854 "is_configured": false, 00:20:38.854 "data_offset": 0, 
00:20:38.854 "data_size": 0 00:20:38.854 } 00:20:38.854 ] 00:20:38.854 }' 00:20:38.854 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:38.854 02:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.126 02:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:39.391 [2024-05-15 02:21:27.226277] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:39.391 BaseBdev3 00:20:39.391 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:20:39.391 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:39.391 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:39.391 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:39.391 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:39.391 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:39.391 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.649 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:39.907 [ 00:20:39.907 { 00:20:39.907 "name": "BaseBdev3", 00:20:39.907 "aliases": [ 00:20:39.907 "d558b059-1261-11ef-99fd-bfc7c66e2865" 00:20:39.907 ], 00:20:39.907 "product_name": "Malloc disk", 00:20:39.907 "block_size": 512, 00:20:39.907 "num_blocks": 65536, 00:20:39.907 "uuid": "d558b059-1261-11ef-99fd-bfc7c66e2865", 00:20:39.907 "assigned_rate_limits": { 00:20:39.907 "rw_ios_per_sec": 0, 00:20:39.907 "rw_mbytes_per_sec": 0, 00:20:39.907 "r_mbytes_per_sec": 0, 00:20:39.907 "w_mbytes_per_sec": 0 00:20:39.907 }, 00:20:39.907 "claimed": true, 00:20:39.907 "claim_type": "exclusive_write", 00:20:39.907 "zoned": false, 00:20:39.907 "supported_io_types": { 00:20:39.907 "read": true, 00:20:39.907 "write": true, 00:20:39.907 "unmap": true, 00:20:39.907 "write_zeroes": true, 00:20:39.907 "flush": true, 00:20:39.907 "reset": true, 00:20:39.907 "compare": false, 00:20:39.907 "compare_and_write": false, 00:20:39.907 "abort": true, 00:20:39.907 "nvme_admin": false, 00:20:39.907 "nvme_io": false 00:20:39.907 }, 00:20:39.907 "memory_domains": [ 00:20:39.907 { 00:20:39.907 "dma_device_id": "system", 00:20:39.907 "dma_device_type": 1 00:20:39.907 }, 00:20:39.907 { 00:20:39.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.907 "dma_device_type": 2 00:20:39.907 } 00:20:39.907 ], 00:20:39.907 "driver_specific": {} 00:20:39.907 } 00:20:39.907 ] 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.907 02:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.165 02:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.165 "name": "Existed_Raid", 00:20:40.165 "uuid": "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865", 00:20:40.165 "strip_size_kb": 0, 00:20:40.165 "state": "configuring", 00:20:40.165 "raid_level": "raid1", 00:20:40.165 "superblock": true, 00:20:40.165 "num_base_bdevs": 4, 00:20:40.165 "num_base_bdevs_discovered": 3, 00:20:40.165 "num_base_bdevs_operational": 4, 00:20:40.165 "base_bdevs_list": [ 00:20:40.165 { 00:20:40.165 "name": "BaseBdev1", 00:20:40.165 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:40.165 "is_configured": true, 00:20:40.165 "data_offset": 2048, 00:20:40.165 "data_size": 63488 00:20:40.165 }, 00:20:40.165 { 00:20:40.165 "name": "BaseBdev2", 00:20:40.165 "uuid": "d47c55d5-1261-11ef-99fd-bfc7c66e2865", 00:20:40.165 "is_configured": true, 00:20:40.165 "data_offset": 2048, 00:20:40.165 "data_size": 63488 00:20:40.165 }, 00:20:40.165 { 00:20:40.165 "name": "BaseBdev3", 00:20:40.165 "uuid": "d558b059-1261-11ef-99fd-bfc7c66e2865", 00:20:40.165 "is_configured": true, 00:20:40.165 "data_offset": 2048, 00:20:40.165 "data_size": 63488 00:20:40.165 }, 00:20:40.165 { 00:20:40.165 "name": "BaseBdev4", 00:20:40.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.165 "is_configured": false, 00:20:40.165 "data_offset": 0, 00:20:40.165 "data_size": 0 00:20:40.165 } 00:20:40.165 ] 00:20:40.165 }' 00:20:40.165 02:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.165 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.423 02:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:40.681 [2024-05-15 02:21:28.626339] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:40.681 [2024-05-15 02:21:28.626420] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x828506a00 00:20:40.681 [2024-05-15 02:21:28.626426] bdev_raid.c:1713:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:20:40.681 [2024-05-15 02:21:28.626445] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x828569ec0 00:20:40.681 [2024-05-15 02:21:28.626489] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x828506a00 00:20:40.681 [2024-05-15 02:21:28.626493] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x828506a00 00:20:40.681 [2024-05-15 02:21:28.626511] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.681 BaseBdev4 00:20:40.681 02:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:20:40.681 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:20:40.681 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:40.681 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:40.681 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:40.681 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:40.681 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:40.940 02:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:41.506 [ 00:20:41.506 { 00:20:41.506 "name": "BaseBdev4", 00:20:41.506 "aliases": [ 00:20:41.506 "d62e5247-1261-11ef-99fd-bfc7c66e2865" 00:20:41.506 ], 00:20:41.506 "product_name": "Malloc disk", 00:20:41.506 "block_size": 512, 00:20:41.506 "num_blocks": 65536, 00:20:41.506 "uuid": "d62e5247-1261-11ef-99fd-bfc7c66e2865", 00:20:41.506 "assigned_rate_limits": { 00:20:41.506 "rw_ios_per_sec": 0, 00:20:41.506 "rw_mbytes_per_sec": 0, 00:20:41.506 "r_mbytes_per_sec": 0, 00:20:41.506 "w_mbytes_per_sec": 0 00:20:41.506 }, 00:20:41.506 "claimed": true, 00:20:41.506 "claim_type": "exclusive_write", 00:20:41.506 "zoned": false, 00:20:41.506 "supported_io_types": { 00:20:41.506 "read": true, 00:20:41.506 "write": true, 00:20:41.506 "unmap": true, 00:20:41.506 "write_zeroes": true, 00:20:41.506 "flush": true, 00:20:41.506 "reset": true, 00:20:41.506 "compare": false, 00:20:41.506 "compare_and_write": false, 00:20:41.506 "abort": true, 00:20:41.506 "nvme_admin": false, 00:20:41.506 "nvme_io": false 00:20:41.506 }, 00:20:41.506 "memory_domains": [ 00:20:41.506 { 00:20:41.506 "dma_device_id": "system", 00:20:41.506 "dma_device_type": 1 00:20:41.506 }, 00:20:41.506 { 00:20:41.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.506 "dma_device_type": 2 00:20:41.506 } 00:20:41.506 ], 00:20:41.506 "driver_specific": {} 00:20:41.506 } 00:20:41.506 ] 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.506 "name": "Existed_Raid", 00:20:41.506 "uuid": "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865", 00:20:41.506 "strip_size_kb": 0, 00:20:41.506 "state": "online", 00:20:41.506 "raid_level": "raid1", 00:20:41.506 "superblock": true, 00:20:41.506 "num_base_bdevs": 4, 00:20:41.506 "num_base_bdevs_discovered": 4, 00:20:41.506 "num_base_bdevs_operational": 4, 00:20:41.506 "base_bdevs_list": [ 00:20:41.506 { 00:20:41.506 "name": "BaseBdev1", 00:20:41.506 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:41.506 "is_configured": true, 00:20:41.506 "data_offset": 2048, 00:20:41.506 "data_size": 63488 00:20:41.506 }, 00:20:41.506 { 00:20:41.506 "name": "BaseBdev2", 00:20:41.506 "uuid": "d47c55d5-1261-11ef-99fd-bfc7c66e2865", 00:20:41.506 "is_configured": true, 00:20:41.506 "data_offset": 2048, 00:20:41.506 "data_size": 63488 00:20:41.506 }, 00:20:41.506 { 00:20:41.506 "name": "BaseBdev3", 00:20:41.506 "uuid": "d558b059-1261-11ef-99fd-bfc7c66e2865", 00:20:41.506 "is_configured": true, 00:20:41.506 "data_offset": 2048, 00:20:41.506 "data_size": 63488 00:20:41.506 }, 00:20:41.506 { 00:20:41.506 "name": "BaseBdev4", 00:20:41.506 "uuid": "d62e5247-1261-11ef-99fd-bfc7c66e2865", 00:20:41.506 "is_configured": true, 00:20:41.506 "data_offset": 2048, 00:20:41.506 "data_size": 63488 00:20:41.506 } 00:20:41.506 ] 00:20:41.506 }' 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.506 02:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.074 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:42.074 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:42.074 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:42.074 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:42.074 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:42.074 
02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:20:42.074 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:42.074 02:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:42.332 [2024-05-15 02:21:30.162366] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.332 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:42.333 "name": "Existed_Raid", 00:20:42.333 "aliases": [ 00:20:42.333 "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865" 00:20:42.333 ], 00:20:42.333 "product_name": "Raid Volume", 00:20:42.333 "block_size": 512, 00:20:42.333 "num_blocks": 63488, 00:20:42.333 "uuid": "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865", 00:20:42.333 "assigned_rate_limits": { 00:20:42.333 "rw_ios_per_sec": 0, 00:20:42.333 "rw_mbytes_per_sec": 0, 00:20:42.333 "r_mbytes_per_sec": 0, 00:20:42.333 "w_mbytes_per_sec": 0 00:20:42.333 }, 00:20:42.333 "claimed": false, 00:20:42.333 "zoned": false, 00:20:42.333 "supported_io_types": { 00:20:42.333 "read": true, 00:20:42.333 "write": true, 00:20:42.333 "unmap": false, 00:20:42.333 "write_zeroes": true, 00:20:42.333 "flush": false, 00:20:42.333 "reset": true, 00:20:42.333 "compare": false, 00:20:42.333 "compare_and_write": false, 00:20:42.333 "abort": false, 00:20:42.333 "nvme_admin": false, 00:20:42.333 "nvme_io": false 00:20:42.333 }, 00:20:42.333 "memory_domains": [ 00:20:42.333 { 00:20:42.333 "dma_device_id": "system", 00:20:42.333 "dma_device_type": 1 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.333 "dma_device_type": 2 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "dma_device_id": "system", 00:20:42.333 "dma_device_type": 1 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.333 "dma_device_type": 2 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "dma_device_id": "system", 00:20:42.333 "dma_device_type": 1 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.333 "dma_device_type": 2 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "dma_device_id": "system", 00:20:42.333 "dma_device_type": 1 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.333 "dma_device_type": 2 00:20:42.333 } 00:20:42.333 ], 00:20:42.333 "driver_specific": { 00:20:42.333 "raid": { 00:20:42.333 "uuid": "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865", 00:20:42.333 "strip_size_kb": 0, 00:20:42.333 "state": "online", 00:20:42.333 "raid_level": "raid1", 00:20:42.333 "superblock": true, 00:20:42.333 "num_base_bdevs": 4, 00:20:42.333 "num_base_bdevs_discovered": 4, 00:20:42.333 "num_base_bdevs_operational": 4, 00:20:42.333 "base_bdevs_list": [ 00:20:42.333 { 00:20:42.333 "name": "BaseBdev1", 00:20:42.333 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:42.333 "is_configured": true, 00:20:42.333 "data_offset": 2048, 00:20:42.333 "data_size": 63488 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "name": "BaseBdev2", 00:20:42.333 "uuid": "d47c55d5-1261-11ef-99fd-bfc7c66e2865", 00:20:42.333 "is_configured": true, 00:20:42.333 "data_offset": 2048, 00:20:42.333 "data_size": 63488 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "name": "BaseBdev3", 00:20:42.333 "uuid": "d558b059-1261-11ef-99fd-bfc7c66e2865", 00:20:42.333 "is_configured": true, 00:20:42.333 "data_offset": 
2048, 00:20:42.333 "data_size": 63488 00:20:42.333 }, 00:20:42.333 { 00:20:42.333 "name": "BaseBdev4", 00:20:42.333 "uuid": "d62e5247-1261-11ef-99fd-bfc7c66e2865", 00:20:42.333 "is_configured": true, 00:20:42.333 "data_offset": 2048, 00:20:42.333 "data_size": 63488 00:20:42.333 } 00:20:42.333 ] 00:20:42.333 } 00:20:42.333 } 00:20:42.333 }' 00:20:42.333 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:42.333 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:42.333 BaseBdev2 00:20:42.333 BaseBdev3 00:20:42.333 BaseBdev4' 00:20:42.333 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:42.333 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:42.333 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:42.591 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:42.591 "name": "BaseBdev1", 00:20:42.591 "aliases": [ 00:20:42.591 "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865" 00:20:42.591 ], 00:20:42.591 "product_name": "Malloc disk", 00:20:42.591 "block_size": 512, 00:20:42.591 "num_blocks": 65536, 00:20:42.591 "uuid": "d2d8ff6e-1261-11ef-99fd-bfc7c66e2865", 00:20:42.591 "assigned_rate_limits": { 00:20:42.591 "rw_ios_per_sec": 0, 00:20:42.591 "rw_mbytes_per_sec": 0, 00:20:42.591 "r_mbytes_per_sec": 0, 00:20:42.591 "w_mbytes_per_sec": 0 00:20:42.591 }, 00:20:42.591 "claimed": true, 00:20:42.591 "claim_type": "exclusive_write", 00:20:42.592 "zoned": false, 00:20:42.592 "supported_io_types": { 00:20:42.592 "read": true, 00:20:42.592 "write": true, 00:20:42.592 "unmap": true, 00:20:42.592 "write_zeroes": true, 00:20:42.592 "flush": true, 00:20:42.592 "reset": true, 00:20:42.592 "compare": false, 00:20:42.592 "compare_and_write": false, 00:20:42.592 "abort": true, 00:20:42.592 "nvme_admin": false, 00:20:42.592 "nvme_io": false 00:20:42.592 }, 00:20:42.592 "memory_domains": [ 00:20:42.592 { 00:20:42.592 "dma_device_id": "system", 00:20:42.592 "dma_device_type": 1 00:20:42.592 }, 00:20:42.592 { 00:20:42.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.592 "dma_device_type": 2 00:20:42.592 } 00:20:42.592 ], 00:20:42.592 "driver_specific": {} 00:20:42.592 }' 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.592 02:21:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:42.850 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:42.850 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:42.850 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:42.850 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:42.850 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:43.108 "name": "BaseBdev2", 00:20:43.108 "aliases": [ 00:20:43.108 "d47c55d5-1261-11ef-99fd-bfc7c66e2865" 00:20:43.108 ], 00:20:43.108 "product_name": "Malloc disk", 00:20:43.108 "block_size": 512, 00:20:43.108 "num_blocks": 65536, 00:20:43.108 "uuid": "d47c55d5-1261-11ef-99fd-bfc7c66e2865", 00:20:43.108 "assigned_rate_limits": { 00:20:43.108 "rw_ios_per_sec": 0, 00:20:43.108 "rw_mbytes_per_sec": 0, 00:20:43.108 "r_mbytes_per_sec": 0, 00:20:43.108 "w_mbytes_per_sec": 0 00:20:43.108 }, 00:20:43.108 "claimed": true, 00:20:43.108 "claim_type": "exclusive_write", 00:20:43.108 "zoned": false, 00:20:43.108 "supported_io_types": { 00:20:43.108 "read": true, 00:20:43.108 "write": true, 00:20:43.108 "unmap": true, 00:20:43.108 "write_zeroes": true, 00:20:43.108 "flush": true, 00:20:43.108 "reset": true, 00:20:43.108 "compare": false, 00:20:43.108 "compare_and_write": false, 00:20:43.108 "abort": true, 00:20:43.108 "nvme_admin": false, 00:20:43.108 "nvme_io": false 00:20:43.108 }, 00:20:43.108 "memory_domains": [ 00:20:43.108 { 00:20:43.108 "dma_device_id": "system", 00:20:43.108 "dma_device_type": 1 00:20:43.108 }, 00:20:43.108 { 00:20:43.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.108 "dma_device_type": 2 00:20:43.108 } 00:20:43.108 ], 00:20:43.108 "driver_specific": {} 00:20:43.108 }' 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.108 02:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:43.108 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:43.366 "name": "BaseBdev3", 00:20:43.366 "aliases": [ 00:20:43.366 "d558b059-1261-11ef-99fd-bfc7c66e2865" 00:20:43.366 ], 00:20:43.366 "product_name": "Malloc disk", 00:20:43.366 "block_size": 512, 00:20:43.366 "num_blocks": 65536, 00:20:43.366 "uuid": "d558b059-1261-11ef-99fd-bfc7c66e2865", 00:20:43.366 "assigned_rate_limits": { 00:20:43.366 "rw_ios_per_sec": 0, 00:20:43.366 "rw_mbytes_per_sec": 0, 00:20:43.366 "r_mbytes_per_sec": 0, 00:20:43.366 "w_mbytes_per_sec": 0 00:20:43.366 }, 00:20:43.366 "claimed": true, 00:20:43.366 "claim_type": "exclusive_write", 00:20:43.366 "zoned": false, 00:20:43.366 "supported_io_types": { 00:20:43.366 "read": true, 00:20:43.366 "write": true, 00:20:43.366 "unmap": true, 00:20:43.366 "write_zeroes": true, 00:20:43.366 "flush": true, 00:20:43.366 "reset": true, 00:20:43.366 "compare": false, 00:20:43.366 "compare_and_write": false, 00:20:43.366 "abort": true, 00:20:43.366 "nvme_admin": false, 00:20:43.366 "nvme_io": false 00:20:43.366 }, 00:20:43.366 "memory_domains": [ 00:20:43.366 { 00:20:43.366 "dma_device_id": "system", 00:20:43.366 "dma_device_type": 1 00:20:43.366 }, 00:20:43.366 { 00:20:43.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.366 "dma_device_type": 2 00:20:43.366 } 00:20:43.366 ], 00:20:43.366 "driver_specific": {} 00:20:43.366 }' 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:43.366 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:43.933 "name": "BaseBdev4", 00:20:43.933 "aliases": [ 00:20:43.933 "d62e5247-1261-11ef-99fd-bfc7c66e2865" 
00:20:43.933 ], 00:20:43.933 "product_name": "Malloc disk", 00:20:43.933 "block_size": 512, 00:20:43.933 "num_blocks": 65536, 00:20:43.933 "uuid": "d62e5247-1261-11ef-99fd-bfc7c66e2865", 00:20:43.933 "assigned_rate_limits": { 00:20:43.933 "rw_ios_per_sec": 0, 00:20:43.933 "rw_mbytes_per_sec": 0, 00:20:43.933 "r_mbytes_per_sec": 0, 00:20:43.933 "w_mbytes_per_sec": 0 00:20:43.933 }, 00:20:43.933 "claimed": true, 00:20:43.933 "claim_type": "exclusive_write", 00:20:43.933 "zoned": false, 00:20:43.933 "supported_io_types": { 00:20:43.933 "read": true, 00:20:43.933 "write": true, 00:20:43.933 "unmap": true, 00:20:43.933 "write_zeroes": true, 00:20:43.933 "flush": true, 00:20:43.933 "reset": true, 00:20:43.933 "compare": false, 00:20:43.933 "compare_and_write": false, 00:20:43.933 "abort": true, 00:20:43.933 "nvme_admin": false, 00:20:43.933 "nvme_io": false 00:20:43.933 }, 00:20:43.933 "memory_domains": [ 00:20:43.933 { 00:20:43.933 "dma_device_id": "system", 00:20:43.933 "dma_device_type": 1 00:20:43.933 }, 00:20:43.933 { 00:20:43.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.933 "dma_device_type": 2 00:20:43.933 } 00:20:43.933 ], 00:20:43.933 "driver_specific": {} 00:20:43.933 }' 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:43.933 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:44.192 [2024-05-15 02:21:32.002474] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.192 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.449 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.449 "name": "Existed_Raid", 00:20:44.449 "uuid": "d3f2ff1a-1261-11ef-99fd-bfc7c66e2865", 00:20:44.449 "strip_size_kb": 0, 00:20:44.449 "state": "online", 00:20:44.449 "raid_level": "raid1", 00:20:44.449 "superblock": true, 00:20:44.450 "num_base_bdevs": 4, 00:20:44.450 "num_base_bdevs_discovered": 3, 00:20:44.450 "num_base_bdevs_operational": 3, 00:20:44.450 "base_bdevs_list": [ 00:20:44.450 { 00:20:44.450 "name": null, 00:20:44.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.450 "is_configured": false, 00:20:44.450 "data_offset": 2048, 00:20:44.450 "data_size": 63488 00:20:44.450 }, 00:20:44.450 { 00:20:44.450 "name": "BaseBdev2", 00:20:44.450 "uuid": "d47c55d5-1261-11ef-99fd-bfc7c66e2865", 00:20:44.450 "is_configured": true, 00:20:44.450 "data_offset": 2048, 00:20:44.450 "data_size": 63488 00:20:44.450 }, 00:20:44.450 { 00:20:44.450 "name": "BaseBdev3", 00:20:44.450 "uuid": "d558b059-1261-11ef-99fd-bfc7c66e2865", 00:20:44.450 "is_configured": true, 00:20:44.450 "data_offset": 2048, 00:20:44.450 "data_size": 63488 00:20:44.450 }, 00:20:44.450 { 00:20:44.450 "name": "BaseBdev4", 00:20:44.450 "uuid": "d62e5247-1261-11ef-99fd-bfc7c66e2865", 00:20:44.450 "is_configured": true, 00:20:44.450 "data_offset": 2048, 00:20:44.450 "data_size": 63488 00:20:44.450 } 00:20:44.450 ] 00:20:44.450 }' 00:20:44.450 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.450 02:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:44.730 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:44.730 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.730 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:44.987 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:44.987 
02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:44.987 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:45.245 [2024-05-15 02:21:33.127365] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:45.245 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:45.245 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:45.245 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.245 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:45.542 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:45.542 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:45.542 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:45.800 [2024-05-15 02:21:33.620202] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:45.800 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:45.800 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:45.800 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.800 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:46.057 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:46.057 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.057 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:46.057 [2024-05-15 02:21:34.076994] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:46.057 [2024-05-15 02:21:34.077039] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.314 [2024-05-15 02:21:34.081854] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.314 [2024-05-15 02:21:34.081874] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.314 [2024-05-15 02:21:34.081879] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828506a00 name Existed_Raid, state offline 00:20:46.314 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:46.314 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.314 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:20:46.314 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.574 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:20:46.574 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:20:46.574 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:20:46.574 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:20:46.574 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:46.574 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:46.833 BaseBdev2 00:20:46.833 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:20:46.833 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:46.833 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:46.833 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:46.833 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:46.833 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:46.833 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.090 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:47.348 [ 00:20:47.348 { 00:20:47.348 "name": "BaseBdev2", 00:20:47.348 "aliases": [ 00:20:47.348 "d9c6b21b-1261-11ef-99fd-bfc7c66e2865" 00:20:47.348 ], 00:20:47.348 "product_name": "Malloc disk", 00:20:47.348 "block_size": 512, 00:20:47.348 "num_blocks": 65536, 00:20:47.348 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:47.348 "assigned_rate_limits": { 00:20:47.348 "rw_ios_per_sec": 0, 00:20:47.348 "rw_mbytes_per_sec": 0, 00:20:47.348 "r_mbytes_per_sec": 0, 00:20:47.348 "w_mbytes_per_sec": 0 00:20:47.348 }, 00:20:47.348 "claimed": false, 00:20:47.348 "zoned": false, 00:20:47.348 "supported_io_types": { 00:20:47.348 "read": true, 00:20:47.348 "write": true, 00:20:47.348 "unmap": true, 00:20:47.348 "write_zeroes": true, 00:20:47.348 "flush": true, 00:20:47.348 "reset": true, 00:20:47.348 "compare": false, 00:20:47.348 "compare_and_write": false, 00:20:47.348 "abort": true, 00:20:47.348 "nvme_admin": false, 00:20:47.348 "nvme_io": false 00:20:47.348 }, 00:20:47.348 "memory_domains": [ 00:20:47.348 { 00:20:47.348 "dma_device_id": "system", 00:20:47.348 "dma_device_type": 1 00:20:47.348 }, 00:20:47.348 { 00:20:47.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.348 "dma_device_type": 2 00:20:47.348 } 00:20:47.348 ], 00:20:47.348 "driver_specific": {} 00:20:47.348 } 00:20:47.348 ] 00:20:47.348 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:47.348 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:47.348 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- 
# (( i < num_base_bdevs )) 00:20:47.348 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:47.607 BaseBdev3 00:20:47.607 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:20:47.607 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:47.607 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:47.607 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:47.607 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:47.607 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:47.607 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.864 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:48.127 [ 00:20:48.127 { 00:20:48.127 "name": "BaseBdev3", 00:20:48.127 "aliases": [ 00:20:48.127 "da3aab75-1261-11ef-99fd-bfc7c66e2865" 00:20:48.127 ], 00:20:48.128 "product_name": "Malloc disk", 00:20:48.128 "block_size": 512, 00:20:48.128 "num_blocks": 65536, 00:20:48.128 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:48.128 "assigned_rate_limits": { 00:20:48.128 "rw_ios_per_sec": 0, 00:20:48.128 "rw_mbytes_per_sec": 0, 00:20:48.128 "r_mbytes_per_sec": 0, 00:20:48.128 "w_mbytes_per_sec": 0 00:20:48.128 }, 00:20:48.128 "claimed": false, 00:20:48.128 "zoned": false, 00:20:48.128 "supported_io_types": { 00:20:48.128 "read": true, 00:20:48.128 "write": true, 00:20:48.128 "unmap": true, 00:20:48.128 "write_zeroes": true, 00:20:48.128 "flush": true, 00:20:48.128 "reset": true, 00:20:48.128 "compare": false, 00:20:48.128 "compare_and_write": false, 00:20:48.128 "abort": true, 00:20:48.128 "nvme_admin": false, 00:20:48.128 "nvme_io": false 00:20:48.128 }, 00:20:48.128 "memory_domains": [ 00:20:48.128 { 00:20:48.128 "dma_device_id": "system", 00:20:48.128 "dma_device_type": 1 00:20:48.128 }, 00:20:48.128 { 00:20:48.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.128 "dma_device_type": 2 00:20:48.128 } 00:20:48.128 ], 00:20:48.128 "driver_specific": {} 00:20:48.128 } 00:20:48.128 ] 00:20:48.128 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:48.128 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:48.128 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:48.128 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:48.128 BaseBdev4 00:20:48.385 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:20:48.385 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:20:48.385 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:20:48.385 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:48.385 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:48.385 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:48.385 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.642 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:48.900 [ 00:20:48.900 { 00:20:48.900 "name": "BaseBdev4", 00:20:48.900 "aliases": [ 00:20:48.900 "daa889ab-1261-11ef-99fd-bfc7c66e2865" 00:20:48.900 ], 00:20:48.900 "product_name": "Malloc disk", 00:20:48.900 "block_size": 512, 00:20:48.900 "num_blocks": 65536, 00:20:48.900 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:48.900 "assigned_rate_limits": { 00:20:48.900 "rw_ios_per_sec": 0, 00:20:48.900 "rw_mbytes_per_sec": 0, 00:20:48.900 "r_mbytes_per_sec": 0, 00:20:48.900 "w_mbytes_per_sec": 0 00:20:48.900 }, 00:20:48.900 "claimed": false, 00:20:48.900 "zoned": false, 00:20:48.900 "supported_io_types": { 00:20:48.900 "read": true, 00:20:48.900 "write": true, 00:20:48.900 "unmap": true, 00:20:48.900 "write_zeroes": true, 00:20:48.900 "flush": true, 00:20:48.900 "reset": true, 00:20:48.900 "compare": false, 00:20:48.900 "compare_and_write": false, 00:20:48.900 "abort": true, 00:20:48.900 "nvme_admin": false, 00:20:48.900 "nvme_io": false 00:20:48.900 }, 00:20:48.900 "memory_domains": [ 00:20:48.900 { 00:20:48.900 "dma_device_id": "system", 00:20:48.900 "dma_device_type": 1 00:20:48.900 }, 00:20:48.900 { 00:20:48.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.900 "dma_device_type": 2 00:20:48.900 } 00:20:48.900 ], 00:20:48.900 "driver_specific": {} 00:20:48.900 } 00:20:48.900 ] 00:20:48.900 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:48.900 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:48.900 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:48.900 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:49.159 [2024-05-15 02:21:37.026217] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:49.159 [2024-05-15 02:21:37.026281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:49.159 [2024-05-15 02:21:37.026292] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:49.159 [2024-05-15 02:21:37.026749] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:49.159 [2024-05-15 02:21:37.026778] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.159 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.416 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.416 "name": "Existed_Raid", 00:20:49.416 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:49.416 "strip_size_kb": 0, 00:20:49.416 "state": "configuring", 00:20:49.416 "raid_level": "raid1", 00:20:49.416 "superblock": true, 00:20:49.416 "num_base_bdevs": 4, 00:20:49.416 "num_base_bdevs_discovered": 3, 00:20:49.416 "num_base_bdevs_operational": 4, 00:20:49.416 "base_bdevs_list": [ 00:20:49.416 { 00:20:49.416 "name": "BaseBdev1", 00:20:49.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.416 "is_configured": false, 00:20:49.416 "data_offset": 0, 00:20:49.416 "data_size": 0 00:20:49.416 }, 00:20:49.416 { 00:20:49.416 "name": "BaseBdev2", 00:20:49.416 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:49.416 "is_configured": true, 00:20:49.416 "data_offset": 2048, 00:20:49.416 "data_size": 63488 00:20:49.416 }, 00:20:49.416 { 00:20:49.416 "name": "BaseBdev3", 00:20:49.416 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:49.416 "is_configured": true, 00:20:49.416 "data_offset": 2048, 00:20:49.416 "data_size": 63488 00:20:49.416 }, 00:20:49.416 { 00:20:49.416 "name": "BaseBdev4", 00:20:49.416 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:49.416 "is_configured": true, 00:20:49.416 "data_offset": 2048, 00:20:49.416 "data_size": 63488 00:20:49.416 } 00:20:49.416 ] 00:20:49.416 }' 00:20:49.416 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.416 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.675 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:49.932 [2024-05-15 02:21:37.826248] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 
-- # local expected_state=configuring 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.932 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.190 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.190 "name": "Existed_Raid", 00:20:50.190 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:50.190 "strip_size_kb": 0, 00:20:50.190 "state": "configuring", 00:20:50.190 "raid_level": "raid1", 00:20:50.190 "superblock": true, 00:20:50.190 "num_base_bdevs": 4, 00:20:50.190 "num_base_bdevs_discovered": 2, 00:20:50.190 "num_base_bdevs_operational": 4, 00:20:50.190 "base_bdevs_list": [ 00:20:50.190 { 00:20:50.190 "name": "BaseBdev1", 00:20:50.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.190 "is_configured": false, 00:20:50.190 "data_offset": 0, 00:20:50.190 "data_size": 0 00:20:50.190 }, 00:20:50.190 { 00:20:50.190 "name": null, 00:20:50.190 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:50.190 "is_configured": false, 00:20:50.190 "data_offset": 2048, 00:20:50.190 "data_size": 63488 00:20:50.190 }, 00:20:50.190 { 00:20:50.190 "name": "BaseBdev3", 00:20:50.190 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:50.190 "is_configured": true, 00:20:50.190 "data_offset": 2048, 00:20:50.190 "data_size": 63488 00:20:50.190 }, 00:20:50.190 { 00:20:50.190 "name": "BaseBdev4", 00:20:50.190 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:50.190 "is_configured": true, 00:20:50.190 "data_offset": 2048, 00:20:50.190 "data_size": 63488 00:20:50.190 } 00:20:50.190 ] 00:20:50.190 }' 00:20:50.190 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.190 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.448 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.448 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:50.706 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:20:50.706 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:50.964 [2024-05-15 02:21:38.894415] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:20:50.964 BaseBdev1 00:20:50.964 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:20:50.965 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:50.965 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:50.965 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:50.965 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:50.965 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:50.965 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:51.223 02:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:51.481 [ 00:20:51.481 { 00:20:51.481 "name": "BaseBdev1", 00:20:51.481 "aliases": [ 00:20:51.481 "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865" 00:20:51.481 ], 00:20:51.481 "product_name": "Malloc disk", 00:20:51.481 "block_size": 512, 00:20:51.481 "num_blocks": 65536, 00:20:51.481 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:51.481 "assigned_rate_limits": { 00:20:51.481 "rw_ios_per_sec": 0, 00:20:51.481 "rw_mbytes_per_sec": 0, 00:20:51.481 "r_mbytes_per_sec": 0, 00:20:51.481 "w_mbytes_per_sec": 0 00:20:51.481 }, 00:20:51.481 "claimed": true, 00:20:51.481 "claim_type": "exclusive_write", 00:20:51.481 "zoned": false, 00:20:51.481 "supported_io_types": { 00:20:51.481 "read": true, 00:20:51.481 "write": true, 00:20:51.481 "unmap": true, 00:20:51.481 "write_zeroes": true, 00:20:51.481 "flush": true, 00:20:51.481 "reset": true, 00:20:51.481 "compare": false, 00:20:51.481 "compare_and_write": false, 00:20:51.481 "abort": true, 00:20:51.481 "nvme_admin": false, 00:20:51.481 "nvme_io": false 00:20:51.481 }, 00:20:51.481 "memory_domains": [ 00:20:51.481 { 00:20:51.481 "dma_device_id": "system", 00:20:51.481 "dma_device_type": 1 00:20:51.481 }, 00:20:51.481 { 00:20:51.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.481 "dma_device_type": 2 00:20:51.481 } 00:20:51.481 ], 00:20:51.481 "driver_specific": {} 00:20:51.481 } 00:20:51.481 ] 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.481 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.740 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.740 "name": "Existed_Raid", 00:20:51.740 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:51.740 "strip_size_kb": 0, 00:20:51.740 "state": "configuring", 00:20:51.740 "raid_level": "raid1", 00:20:51.740 "superblock": true, 00:20:51.740 "num_base_bdevs": 4, 00:20:51.740 "num_base_bdevs_discovered": 3, 00:20:51.740 "num_base_bdevs_operational": 4, 00:20:51.740 "base_bdevs_list": [ 00:20:51.740 { 00:20:51.740 "name": "BaseBdev1", 00:20:51.740 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:51.740 "is_configured": true, 00:20:51.740 "data_offset": 2048, 00:20:51.740 "data_size": 63488 00:20:51.740 }, 00:20:51.740 { 00:20:51.740 "name": null, 00:20:51.740 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:51.740 "is_configured": false, 00:20:51.740 "data_offset": 2048, 00:20:51.740 "data_size": 63488 00:20:51.740 }, 00:20:51.740 { 00:20:51.740 "name": "BaseBdev3", 00:20:51.740 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:51.740 "is_configured": true, 00:20:51.740 "data_offset": 2048, 00:20:51.740 "data_size": 63488 00:20:51.740 }, 00:20:51.740 { 00:20:51.740 "name": "BaseBdev4", 00:20:51.740 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:51.740 "is_configured": true, 00:20:51.740 "data_offset": 2048, 00:20:51.740 "data_size": 63488 00:20:51.740 } 00:20:51.740 ] 00:20:51.740 }' 00:20:51.740 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.740 02:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.999 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.999 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:52.257 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:52.257 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:52.515 [2024-05-15 02:21:40.450391] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.515 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.773 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.773 "name": "Existed_Raid", 00:20:52.773 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:52.773 "strip_size_kb": 0, 00:20:52.773 "state": "configuring", 00:20:52.773 "raid_level": "raid1", 00:20:52.773 "superblock": true, 00:20:52.773 "num_base_bdevs": 4, 00:20:52.773 "num_base_bdevs_discovered": 2, 00:20:52.773 "num_base_bdevs_operational": 4, 00:20:52.773 "base_bdevs_list": [ 00:20:52.773 { 00:20:52.773 "name": "BaseBdev1", 00:20:52.773 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:52.773 "is_configured": true, 00:20:52.773 "data_offset": 2048, 00:20:52.773 "data_size": 63488 00:20:52.773 }, 00:20:52.773 { 00:20:52.773 "name": null, 00:20:52.773 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:52.773 "is_configured": false, 00:20:52.773 "data_offset": 2048, 00:20:52.773 "data_size": 63488 00:20:52.773 }, 00:20:52.773 { 00:20:52.773 "name": null, 00:20:52.773 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:52.773 "is_configured": false, 00:20:52.773 "data_offset": 2048, 00:20:52.773 "data_size": 63488 00:20:52.773 }, 00:20:52.773 { 00:20:52.773 "name": "BaseBdev4", 00:20:52.773 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:52.773 "is_configured": true, 00:20:52.773 "data_offset": 2048, 00:20:52.773 "data_size": 63488 00:20:52.773 } 00:20:52.773 ] 00:20:52.773 }' 00:20:52.773 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.773 02:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.339 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.339 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:53.339 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:20:53.339 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:53.597 [2024-05-15 02:21:41.530491] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.597 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.856 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:53.856 "name": "Existed_Raid", 00:20:53.856 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:53.856 "strip_size_kb": 0, 00:20:53.856 "state": "configuring", 00:20:53.856 "raid_level": "raid1", 00:20:53.856 "superblock": true, 00:20:53.856 "num_base_bdevs": 4, 00:20:53.856 "num_base_bdevs_discovered": 3, 00:20:53.856 "num_base_bdevs_operational": 4, 00:20:53.856 "base_bdevs_list": [ 00:20:53.856 { 00:20:53.856 "name": "BaseBdev1", 00:20:53.856 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:53.856 "is_configured": true, 00:20:53.856 "data_offset": 2048, 00:20:53.856 "data_size": 63488 00:20:53.856 }, 00:20:53.856 { 00:20:53.856 "name": null, 00:20:53.856 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:53.856 "is_configured": false, 00:20:53.856 "data_offset": 2048, 00:20:53.856 "data_size": 63488 00:20:53.856 }, 00:20:53.856 { 00:20:53.856 "name": "BaseBdev3", 00:20:53.856 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:53.856 "is_configured": true, 00:20:53.856 "data_offset": 2048, 00:20:53.856 "data_size": 63488 00:20:53.856 }, 00:20:53.856 { 00:20:53.856 "name": "BaseBdev4", 00:20:53.856 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:53.856 "is_configured": true, 00:20:53.856 "data_offset": 2048, 00:20:53.856 "data_size": 63488 00:20:53.856 } 00:20:53.856 ] 00:20:53.856 }' 00:20:53.856 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:53.856 02:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.426 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.426 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:54.426 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:20:54.426 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:20:54.684 [2024-05-15 02:21:42.566594] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.684 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.942 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.942 "name": "Existed_Raid", 00:20:54.942 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:54.942 "strip_size_kb": 0, 00:20:54.942 "state": "configuring", 00:20:54.942 "raid_level": "raid1", 00:20:54.942 "superblock": true, 00:20:54.942 "num_base_bdevs": 4, 00:20:54.942 "num_base_bdevs_discovered": 2, 00:20:54.942 "num_base_bdevs_operational": 4, 00:20:54.942 "base_bdevs_list": [ 00:20:54.942 { 00:20:54.942 "name": null, 00:20:54.942 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:54.942 "is_configured": false, 00:20:54.942 "data_offset": 2048, 00:20:54.942 "data_size": 63488 00:20:54.942 }, 00:20:54.942 { 00:20:54.942 "name": null, 00:20:54.942 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:54.942 "is_configured": false, 00:20:54.942 "data_offset": 2048, 00:20:54.942 "data_size": 63488 00:20:54.942 }, 00:20:54.942 { 00:20:54.942 "name": "BaseBdev3", 00:20:54.942 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:54.942 "is_configured": true, 00:20:54.942 "data_offset": 2048, 00:20:54.942 "data_size": 63488 00:20:54.942 }, 00:20:54.942 { 00:20:54.942 "name": "BaseBdev4", 00:20:54.942 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:54.942 "is_configured": true, 00:20:54.942 "data_offset": 2048, 00:20:54.942 "data_size": 63488 00:20:54.942 } 00:20:54.942 ] 00:20:54.942 }' 00:20:54.942 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.942 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.200 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.200 02:21:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:55.767 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:20:55.767 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:55.767 [2024-05-15 02:21:43.768131] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.035 02:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.294 02:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.294 "name": "Existed_Raid", 00:20:56.294 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:56.294 "strip_size_kb": 0, 00:20:56.294 "state": "configuring", 00:20:56.294 "raid_level": "raid1", 00:20:56.294 "superblock": true, 00:20:56.294 "num_base_bdevs": 4, 00:20:56.294 "num_base_bdevs_discovered": 3, 00:20:56.294 "num_base_bdevs_operational": 4, 00:20:56.294 "base_bdevs_list": [ 00:20:56.294 { 00:20:56.294 "name": null, 00:20:56.294 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:56.294 "is_configured": false, 00:20:56.294 "data_offset": 2048, 00:20:56.294 "data_size": 63488 00:20:56.294 }, 00:20:56.294 { 00:20:56.294 "name": "BaseBdev2", 00:20:56.294 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:56.294 "is_configured": true, 00:20:56.294 "data_offset": 2048, 00:20:56.294 "data_size": 63488 00:20:56.294 }, 00:20:56.294 { 00:20:56.294 "name": "BaseBdev3", 00:20:56.294 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:56.294 "is_configured": true, 00:20:56.294 "data_offset": 2048, 00:20:56.294 "data_size": 63488 00:20:56.294 }, 00:20:56.294 { 00:20:56.294 "name": "BaseBdev4", 00:20:56.294 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:56.294 "is_configured": true, 00:20:56.294 "data_offset": 2048, 00:20:56.294 "data_size": 63488 00:20:56.294 } 00:20:56.294 ] 00:20:56.294 }' 00:20:56.294 02:21:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.294 02:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.552 02:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.552 02:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:56.810 02:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:20:56.811 02:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.811 02:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:57.068 02:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u dc4d1b0b-1261-11ef-99fd-bfc7c66e2865 00:20:57.326 [2024-05-15 02:21:45.160306] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:57.326 [2024-05-15 02:21:45.160364] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x828506f00 00:20:57.327 [2024-05-15 02:21:45.160369] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:57.327 [2024-05-15 02:21:45.160389] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x828569e20 00:20:57.327 [2024-05-15 02:21:45.160427] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x828506f00 00:20:57.327 [2024-05-15 02:21:45.160430] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x828506f00 00:20:57.327 [2024-05-15 02:21:45.160449] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.327 NewBaseBdev 00:20:57.327 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:20:57.327 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:57.327 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:57.327 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:57.327 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:57.327 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:57.327 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.585 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:57.843 [ 00:20:57.843 { 00:20:57.843 "name": "NewBaseBdev", 00:20:57.843 "aliases": [ 00:20:57.843 "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865" 00:20:57.843 ], 00:20:57.843 "product_name": "Malloc disk", 00:20:57.843 "block_size": 512, 00:20:57.843 "num_blocks": 65536, 00:20:57.843 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:57.843 "assigned_rate_limits": { 00:20:57.843 
"rw_ios_per_sec": 0, 00:20:57.843 "rw_mbytes_per_sec": 0, 00:20:57.843 "r_mbytes_per_sec": 0, 00:20:57.843 "w_mbytes_per_sec": 0 00:20:57.843 }, 00:20:57.843 "claimed": true, 00:20:57.843 "claim_type": "exclusive_write", 00:20:57.843 "zoned": false, 00:20:57.843 "supported_io_types": { 00:20:57.843 "read": true, 00:20:57.843 "write": true, 00:20:57.843 "unmap": true, 00:20:57.843 "write_zeroes": true, 00:20:57.843 "flush": true, 00:20:57.843 "reset": true, 00:20:57.843 "compare": false, 00:20:57.843 "compare_and_write": false, 00:20:57.843 "abort": true, 00:20:57.843 "nvme_admin": false, 00:20:57.843 "nvme_io": false 00:20:57.843 }, 00:20:57.843 "memory_domains": [ 00:20:57.843 { 00:20:57.843 "dma_device_id": "system", 00:20:57.843 "dma_device_type": 1 00:20:57.843 }, 00:20:57.843 { 00:20:57.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.843 "dma_device_type": 2 00:20:57.843 } 00:20:57.843 ], 00:20:57.843 "driver_specific": {} 00:20:57.843 } 00:20:57.843 ] 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.843 02:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.157 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.157 "name": "Existed_Raid", 00:20:58.157 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:58.157 "strip_size_kb": 0, 00:20:58.157 "state": "online", 00:20:58.157 "raid_level": "raid1", 00:20:58.157 "superblock": true, 00:20:58.157 "num_base_bdevs": 4, 00:20:58.157 "num_base_bdevs_discovered": 4, 00:20:58.157 "num_base_bdevs_operational": 4, 00:20:58.157 "base_bdevs_list": [ 00:20:58.157 { 00:20:58.157 "name": "NewBaseBdev", 00:20:58.157 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:58.157 "is_configured": true, 00:20:58.157 "data_offset": 2048, 00:20:58.157 "data_size": 63488 00:20:58.157 }, 00:20:58.157 { 00:20:58.157 "name": "BaseBdev2", 00:20:58.157 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:58.157 "is_configured": true, 00:20:58.157 "data_offset": 2048, 00:20:58.157 "data_size": 63488 00:20:58.157 }, 
00:20:58.157 { 00:20:58.157 "name": "BaseBdev3", 00:20:58.157 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:58.157 "is_configured": true, 00:20:58.157 "data_offset": 2048, 00:20:58.157 "data_size": 63488 00:20:58.157 }, 00:20:58.157 { 00:20:58.157 "name": "BaseBdev4", 00:20:58.157 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:58.157 "is_configured": true, 00:20:58.157 "data_offset": 2048, 00:20:58.157 "data_size": 63488 00:20:58.157 } 00:20:58.157 ] 00:20:58.157 }' 00:20:58.157 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.157 02:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:58.415 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:58.674 [2024-05-15 02:21:46.524295] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:58.674 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:58.674 "name": "Existed_Raid", 00:20:58.674 "aliases": [ 00:20:58.674 "db300e50-1261-11ef-99fd-bfc7c66e2865" 00:20:58.674 ], 00:20:58.674 "product_name": "Raid Volume", 00:20:58.674 "block_size": 512, 00:20:58.674 "num_blocks": 63488, 00:20:58.674 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:58.674 "assigned_rate_limits": { 00:20:58.674 "rw_ios_per_sec": 0, 00:20:58.674 "rw_mbytes_per_sec": 0, 00:20:58.674 "r_mbytes_per_sec": 0, 00:20:58.674 "w_mbytes_per_sec": 0 00:20:58.674 }, 00:20:58.674 "claimed": false, 00:20:58.674 "zoned": false, 00:20:58.674 "supported_io_types": { 00:20:58.674 "read": true, 00:20:58.674 "write": true, 00:20:58.674 "unmap": false, 00:20:58.674 "write_zeroes": true, 00:20:58.674 "flush": false, 00:20:58.674 "reset": true, 00:20:58.674 "compare": false, 00:20:58.674 "compare_and_write": false, 00:20:58.674 "abort": false, 00:20:58.674 "nvme_admin": false, 00:20:58.674 "nvme_io": false 00:20:58.674 }, 00:20:58.674 "memory_domains": [ 00:20:58.674 { 00:20:58.674 "dma_device_id": "system", 00:20:58.674 "dma_device_type": 1 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.674 "dma_device_type": 2 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "dma_device_id": "system", 00:20:58.674 "dma_device_type": 1 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.674 "dma_device_type": 2 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "dma_device_id": "system", 00:20:58.674 "dma_device_type": 1 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.674 "dma_device_type": 2 
00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "dma_device_id": "system", 00:20:58.674 "dma_device_type": 1 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.674 "dma_device_type": 2 00:20:58.674 } 00:20:58.674 ], 00:20:58.674 "driver_specific": { 00:20:58.674 "raid": { 00:20:58.674 "uuid": "db300e50-1261-11ef-99fd-bfc7c66e2865", 00:20:58.674 "strip_size_kb": 0, 00:20:58.674 "state": "online", 00:20:58.674 "raid_level": "raid1", 00:20:58.674 "superblock": true, 00:20:58.674 "num_base_bdevs": 4, 00:20:58.674 "num_base_bdevs_discovered": 4, 00:20:58.674 "num_base_bdevs_operational": 4, 00:20:58.674 "base_bdevs_list": [ 00:20:58.674 { 00:20:58.674 "name": "NewBaseBdev", 00:20:58.674 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:58.674 "is_configured": true, 00:20:58.674 "data_offset": 2048, 00:20:58.674 "data_size": 63488 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "name": "BaseBdev2", 00:20:58.674 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:58.674 "is_configured": true, 00:20:58.674 "data_offset": 2048, 00:20:58.674 "data_size": 63488 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "name": "BaseBdev3", 00:20:58.674 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:58.674 "is_configured": true, 00:20:58.674 "data_offset": 2048, 00:20:58.674 "data_size": 63488 00:20:58.674 }, 00:20:58.674 { 00:20:58.674 "name": "BaseBdev4", 00:20:58.674 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:20:58.674 "is_configured": true, 00:20:58.674 "data_offset": 2048, 00:20:58.674 "data_size": 63488 00:20:58.674 } 00:20:58.674 ] 00:20:58.674 } 00:20:58.674 } 00:20:58.674 }' 00:20:58.674 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:58.674 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:20:58.674 BaseBdev2 00:20:58.674 BaseBdev3 00:20:58.674 BaseBdev4' 00:20:58.674 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:58.674 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:58.674 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:58.932 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:58.932 "name": "NewBaseBdev", 00:20:58.932 "aliases": [ 00:20:58.932 "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865" 00:20:58.932 ], 00:20:58.933 "product_name": "Malloc disk", 00:20:58.933 "block_size": 512, 00:20:58.933 "num_blocks": 65536, 00:20:58.933 "uuid": "dc4d1b0b-1261-11ef-99fd-bfc7c66e2865", 00:20:58.933 "assigned_rate_limits": { 00:20:58.933 "rw_ios_per_sec": 0, 00:20:58.933 "rw_mbytes_per_sec": 0, 00:20:58.933 "r_mbytes_per_sec": 0, 00:20:58.933 "w_mbytes_per_sec": 0 00:20:58.933 }, 00:20:58.933 "claimed": true, 00:20:58.933 "claim_type": "exclusive_write", 00:20:58.933 "zoned": false, 00:20:58.933 "supported_io_types": { 00:20:58.933 "read": true, 00:20:58.933 "write": true, 00:20:58.933 "unmap": true, 00:20:58.933 "write_zeroes": true, 00:20:58.933 "flush": true, 00:20:58.933 "reset": true, 00:20:58.933 "compare": false, 00:20:58.933 "compare_and_write": false, 00:20:58.933 "abort": true, 00:20:58.933 "nvme_admin": false, 00:20:58.933 "nvme_io": false 00:20:58.933 }, 00:20:58.933 "memory_domains": [ 
00:20:58.933 { 00:20:58.933 "dma_device_id": "system", 00:20:58.933 "dma_device_type": 1 00:20:58.933 }, 00:20:58.933 { 00:20:58.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.933 "dma_device_type": 2 00:20:58.933 } 00:20:58.933 ], 00:20:58.933 "driver_specific": {} 00:20:58.933 }' 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:58.933 02:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:59.191 "name": "BaseBdev2", 00:20:59.191 "aliases": [ 00:20:59.191 "d9c6b21b-1261-11ef-99fd-bfc7c66e2865" 00:20:59.191 ], 00:20:59.191 "product_name": "Malloc disk", 00:20:59.191 "block_size": 512, 00:20:59.191 "num_blocks": 65536, 00:20:59.191 "uuid": "d9c6b21b-1261-11ef-99fd-bfc7c66e2865", 00:20:59.191 "assigned_rate_limits": { 00:20:59.191 "rw_ios_per_sec": 0, 00:20:59.191 "rw_mbytes_per_sec": 0, 00:20:59.191 "r_mbytes_per_sec": 0, 00:20:59.191 "w_mbytes_per_sec": 0 00:20:59.191 }, 00:20:59.191 "claimed": true, 00:20:59.191 "claim_type": "exclusive_write", 00:20:59.191 "zoned": false, 00:20:59.191 "supported_io_types": { 00:20:59.191 "read": true, 00:20:59.191 "write": true, 00:20:59.191 "unmap": true, 00:20:59.191 "write_zeroes": true, 00:20:59.191 "flush": true, 00:20:59.191 "reset": true, 00:20:59.191 "compare": false, 00:20:59.191 "compare_and_write": false, 00:20:59.191 "abort": true, 00:20:59.191 "nvme_admin": false, 00:20:59.191 "nvme_io": false 00:20:59.191 }, 00:20:59.191 "memory_domains": [ 00:20:59.191 { 00:20:59.191 "dma_device_id": "system", 00:20:59.191 "dma_device_type": 1 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.191 "dma_device_type": 2 00:20:59.191 } 00:20:59.191 ], 00:20:59.191 "driver_specific": {} 00:20:59.191 }' 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:59.191 02:21:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:59.191 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:59.758 "name": "BaseBdev3", 00:20:59.758 "aliases": [ 00:20:59.758 "da3aab75-1261-11ef-99fd-bfc7c66e2865" 00:20:59.758 ], 00:20:59.758 "product_name": "Malloc disk", 00:20:59.758 "block_size": 512, 00:20:59.758 "num_blocks": 65536, 00:20:59.758 "uuid": "da3aab75-1261-11ef-99fd-bfc7c66e2865", 00:20:59.758 "assigned_rate_limits": { 00:20:59.758 "rw_ios_per_sec": 0, 00:20:59.758 "rw_mbytes_per_sec": 0, 00:20:59.758 "r_mbytes_per_sec": 0, 00:20:59.758 "w_mbytes_per_sec": 0 00:20:59.758 }, 00:20:59.758 "claimed": true, 00:20:59.758 "claim_type": "exclusive_write", 00:20:59.758 "zoned": false, 00:20:59.758 "supported_io_types": { 00:20:59.758 "read": true, 00:20:59.758 "write": true, 00:20:59.758 "unmap": true, 00:20:59.758 "write_zeroes": true, 00:20:59.758 "flush": true, 00:20:59.758 "reset": true, 00:20:59.758 "compare": false, 00:20:59.758 "compare_and_write": false, 00:20:59.758 "abort": true, 00:20:59.758 "nvme_admin": false, 00:20:59.758 "nvme_io": false 00:20:59.758 }, 00:20:59.758 "memory_domains": [ 00:20:59.758 { 00:20:59.758 "dma_device_id": "system", 00:20:59.758 "dma_device_type": 1 00:20:59.758 }, 00:20:59.758 { 00:20:59.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.758 "dma_device_type": 2 00:20:59.758 } 00:20:59.758 ], 00:20:59.758 "driver_specific": {} 00:20:59.758 }' 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:59.758 02:21:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:59.758 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:00.016 "name": "BaseBdev4", 00:21:00.016 "aliases": [ 00:21:00.016 "daa889ab-1261-11ef-99fd-bfc7c66e2865" 00:21:00.016 ], 00:21:00.016 "product_name": "Malloc disk", 00:21:00.016 "block_size": 512, 00:21:00.016 "num_blocks": 65536, 00:21:00.016 "uuid": "daa889ab-1261-11ef-99fd-bfc7c66e2865", 00:21:00.016 "assigned_rate_limits": { 00:21:00.016 "rw_ios_per_sec": 0, 00:21:00.016 "rw_mbytes_per_sec": 0, 00:21:00.016 "r_mbytes_per_sec": 0, 00:21:00.016 "w_mbytes_per_sec": 0 00:21:00.016 }, 00:21:00.016 "claimed": true, 00:21:00.016 "claim_type": "exclusive_write", 00:21:00.016 "zoned": false, 00:21:00.016 "supported_io_types": { 00:21:00.016 "read": true, 00:21:00.016 "write": true, 00:21:00.016 "unmap": true, 00:21:00.016 "write_zeroes": true, 00:21:00.016 "flush": true, 00:21:00.016 "reset": true, 00:21:00.016 "compare": false, 00:21:00.016 "compare_and_write": false, 00:21:00.016 "abort": true, 00:21:00.016 "nvme_admin": false, 00:21:00.016 "nvme_io": false 00:21:00.016 }, 00:21:00.016 "memory_domains": [ 00:21:00.016 { 00:21:00.016 "dma_device_id": "system", 00:21:00.016 "dma_device_type": 1 00:21:00.016 }, 00:21:00.016 { 00:21:00.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.016 "dma_device_type": 2 00:21:00.016 } 00:21:00.016 ], 00:21:00.016 "driver_specific": {} 00:21:00.016 }' 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:00.016 02:21:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:00.016 02:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:00.274 [2024-05-15 02:21:48.148353] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.274 [2024-05-15 02:21:48.148382] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.274 [2024-05-15 02:21:48.148404] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.274 [2024-05-15 02:21:48.148488] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.274 [2024-05-15 02:21:48.148492] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828506f00 name Existed_Raid, state offline 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 62157 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 62157 ']' 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 62157 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 62157 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:00.274 killing process with pid 62157 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62157' 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 62157 00:21:00.274 [2024-05-15 02:21:48.191152] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:00.274 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 62157 00:21:00.274 [2024-05-15 02:21:48.210295] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.544 02:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:21:00.544 00:21:00.544 real 0m28.146s 00:21:00.544 user 0m51.503s 00:21:00.544 sys 0m4.010s 00:21:00.544 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:00.544 02:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.544 ************************************ 00:21:00.544 END TEST raid_state_function_test_sb 00:21:00.544 ************************************ 00:21:00.544 02:21:48 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:21:00.544 02:21:48 bdev_raid -- common/autotest_common.sh@1097 -- # 
'[' 4 -le 1 ']' 00:21:00.544 02:21:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:00.544 02:21:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.544 ************************************ 00:21:00.544 START TEST raid_superblock_test 00:21:00.544 ************************************ 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62979 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62979 /var/tmp/spdk-raid.sock 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 62979 ']' 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:00.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:00.544 02:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.544 [2024-05-15 02:21:48.416551] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:21:00.544 [2024-05-15 02:21:48.416811] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:01.191 EAL: TSC is not safe to use in SMP mode 00:21:01.191 EAL: TSC is not invariant 00:21:01.191 [2024-05-15 02:21:48.938502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.191 [2024-05-15 02:21:49.027200] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:01.191 [2024-05-15 02:21:49.029421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.191 [2024-05-15 02:21:49.030164] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.191 [2024-05-15 02:21:49.030176] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.450 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:01.709 malloc1 00:21:01.709 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:01.968 [2024-05-15 02:21:49.921865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:01.968 [2024-05-15 02:21:49.921939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.968 [2024-05-15 02:21:49.922562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5c780 00:21:01.968 [2024-05-15 02:21:49.922593] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.968 [2024-05-15 02:21:49.923368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.968 [2024-05-15 02:21:49.923399] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:01.968 pt1 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.968 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:02.226 malloc2 00:21:02.227 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.485 [2024-05-15 02:21:50.425889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.485 [2024-05-15 02:21:50.425952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.485 [2024-05-15 02:21:50.425980] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5cc80 00:21:02.485 [2024-05-15 02:21:50.425988] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.485 [2024-05-15 02:21:50.426540] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.485 [2024-05-15 02:21:50.426570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.485 pt2 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.485 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:02.743 malloc3 00:21:02.743 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:03.075 [2024-05-15 02:21:50.921917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:03.075 [2024-05-15 02:21:50.921984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.075 [2024-05-15 02:21:50.922012] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5d180 00:21:03.075 [2024-05-15 02:21:50.922021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.075 [2024-05-15 02:21:50.922548] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.075 [2024-05-15 02:21:50.922577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:03.075 pt3 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:03.075 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:03.333 malloc4 00:21:03.333 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:03.592 [2024-05-15 02:21:51.357930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:03.592 [2024-05-15 02:21:51.357990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.592 [2024-05-15 02:21:51.358032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5d680 00:21:03.592 [2024-05-15 02:21:51.358040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.592 [2024-05-15 02:21:51.358500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.592 [2024-05-15 02:21:51.358528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:03.592 pt4 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:03.592 [2024-05-15 02:21:51.585958] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:03.592 [2024-05-15 02:21:51.586418] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:03.592 [2024-05-15 02:21:51.586432] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:03.592 [2024-05-15 02:21:51.586442] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:03.592 [2024-05-15 02:21:51.586507] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cc5d900 00:21:03.592 [2024-05-15 02:21:51.586512] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:03.592 [2024-05-15 02:21:51.586543] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82ccbfe20 00:21:03.592 [2024-05-15 02:21:51.586600] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cc5d900 00:21:03.592 [2024-05-15 02:21:51.586604] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cc5d900 00:21:03.592 [2024-05-15 02:21:51.586625] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.592 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.850 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:03.850 "name": "raid_bdev1", 00:21:03.850 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:03.850 "strip_size_kb": 0, 00:21:03.850 "state": "online", 00:21:03.850 "raid_level": "raid1", 00:21:03.850 "superblock": true, 00:21:03.850 "num_base_bdevs": 4, 00:21:03.850 "num_base_bdevs_discovered": 4, 00:21:03.850 "num_base_bdevs_operational": 4, 00:21:03.850 "base_bdevs_list": [ 00:21:03.850 { 00:21:03.850 "name": "pt1", 00:21:03.850 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:03.850 "is_configured": true, 00:21:03.850 "data_offset": 2048, 00:21:03.850 "data_size": 63488 00:21:03.850 }, 00:21:03.850 { 00:21:03.850 "name": "pt2", 00:21:03.850 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:03.850 "is_configured": true, 00:21:03.850 "data_offset": 2048, 00:21:03.850 "data_size": 63488 00:21:03.850 }, 00:21:03.850 { 00:21:03.850 "name": "pt3", 00:21:03.850 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:03.850 "is_configured": true, 00:21:03.850 "data_offset": 2048, 00:21:03.850 "data_size": 63488 00:21:03.850 }, 00:21:03.850 { 00:21:03.850 "name": "pt4", 00:21:03.850 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:03.850 "is_configured": true, 00:21:03.850 "data_offset": 2048, 00:21:03.850 "data_size": 63488 00:21:03.850 } 00:21:03.850 ] 00:21:03.850 }' 00:21:03.850 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:03.850 02:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:04.421 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:04.421 [2024-05-15 02:21:52.430042] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.679 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:04.679 "name": "raid_bdev1", 00:21:04.679 "aliases": [ 00:21:04.679 "e3ddb21d-1261-11ef-99fd-bfc7c66e2865" 00:21:04.679 ], 00:21:04.679 "product_name": "Raid Volume", 00:21:04.679 "block_size": 512, 00:21:04.679 "num_blocks": 63488, 00:21:04.679 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:04.679 "assigned_rate_limits": { 00:21:04.679 "rw_ios_per_sec": 0, 00:21:04.679 "rw_mbytes_per_sec": 0, 00:21:04.679 "r_mbytes_per_sec": 0, 00:21:04.679 "w_mbytes_per_sec": 0 00:21:04.679 }, 00:21:04.679 "claimed": false, 00:21:04.679 "zoned": false, 00:21:04.679 "supported_io_types": { 00:21:04.679 "read": true, 00:21:04.679 "write": true, 00:21:04.679 "unmap": false, 00:21:04.679 "write_zeroes": true, 00:21:04.679 "flush": false, 00:21:04.679 "reset": true, 00:21:04.679 "compare": false, 00:21:04.679 "compare_and_write": false, 00:21:04.679 "abort": false, 00:21:04.679 "nvme_admin": false, 00:21:04.679 "nvme_io": false 00:21:04.679 }, 00:21:04.679 "memory_domains": [ 00:21:04.679 { 00:21:04.679 "dma_device_id": "system", 00:21:04.679 "dma_device_type": 1 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.679 "dma_device_type": 2 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "dma_device_id": "system", 00:21:04.679 "dma_device_type": 1 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.679 "dma_device_type": 2 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "dma_device_id": "system", 00:21:04.679 "dma_device_type": 1 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.679 "dma_device_type": 2 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "dma_device_id": "system", 00:21:04.679 "dma_device_type": 1 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.679 "dma_device_type": 2 00:21:04.679 } 00:21:04.679 ], 00:21:04.679 "driver_specific": { 00:21:04.679 "raid": { 00:21:04.679 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:04.679 "strip_size_kb": 0, 00:21:04.679 "state": "online", 00:21:04.679 "raid_level": "raid1", 00:21:04.679 "superblock": true, 00:21:04.679 "num_base_bdevs": 4, 00:21:04.679 "num_base_bdevs_discovered": 4, 00:21:04.679 "num_base_bdevs_operational": 4, 00:21:04.679 "base_bdevs_list": [ 00:21:04.679 { 00:21:04.679 "name": "pt1", 00:21:04.679 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:04.679 "is_configured": true, 00:21:04.679 "data_offset": 2048, 00:21:04.679 "data_size": 63488 
00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "name": "pt2", 00:21:04.679 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:04.679 "is_configured": true, 00:21:04.679 "data_offset": 2048, 00:21:04.679 "data_size": 63488 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "name": "pt3", 00:21:04.679 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:04.679 "is_configured": true, 00:21:04.679 "data_offset": 2048, 00:21:04.679 "data_size": 63488 00:21:04.679 }, 00:21:04.679 { 00:21:04.679 "name": "pt4", 00:21:04.679 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:04.679 "is_configured": true, 00:21:04.679 "data_offset": 2048, 00:21:04.679 "data_size": 63488 00:21:04.679 } 00:21:04.679 ] 00:21:04.679 } 00:21:04.679 } 00:21:04.679 }' 00:21:04.679 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:04.679 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:04.679 pt2 00:21:04.679 pt3 00:21:04.679 pt4' 00:21:04.679 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:04.679 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:04.679 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:04.939 "name": "pt1", 00:21:04.939 "aliases": [ 00:21:04.939 "9637fc98-292e-c55b-adfd-71dbca77ee03" 00:21:04.939 ], 00:21:04.939 "product_name": "passthru", 00:21:04.939 "block_size": 512, 00:21:04.939 "num_blocks": 65536, 00:21:04.939 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:04.939 "assigned_rate_limits": { 00:21:04.939 "rw_ios_per_sec": 0, 00:21:04.939 "rw_mbytes_per_sec": 0, 00:21:04.939 "r_mbytes_per_sec": 0, 00:21:04.939 "w_mbytes_per_sec": 0 00:21:04.939 }, 00:21:04.939 "claimed": true, 00:21:04.939 "claim_type": "exclusive_write", 00:21:04.939 "zoned": false, 00:21:04.939 "supported_io_types": { 00:21:04.939 "read": true, 00:21:04.939 "write": true, 00:21:04.939 "unmap": true, 00:21:04.939 "write_zeroes": true, 00:21:04.939 "flush": true, 00:21:04.939 "reset": true, 00:21:04.939 "compare": false, 00:21:04.939 "compare_and_write": false, 00:21:04.939 "abort": true, 00:21:04.939 "nvme_admin": false, 00:21:04.939 "nvme_io": false 00:21:04.939 }, 00:21:04.939 "memory_domains": [ 00:21:04.939 { 00:21:04.939 "dma_device_id": "system", 00:21:04.939 "dma_device_type": 1 00:21:04.939 }, 00:21:04.939 { 00:21:04.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.939 "dma_device_type": 2 00:21:04.939 } 00:21:04.939 ], 00:21:04.939 "driver_specific": { 00:21:04.939 "passthru": { 00:21:04.939 "name": "pt1", 00:21:04.939 "base_bdev_name": "malloc1" 00:21:04.939 } 00:21:04.939 } 00:21:04.939 }' 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:04.939 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:04.940 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:05.198 "name": "pt2", 00:21:05.198 "aliases": [ 00:21:05.198 "107a02d5-fcbc-a65c-8a46-15502b3c58d0" 00:21:05.198 ], 00:21:05.198 "product_name": "passthru", 00:21:05.198 "block_size": 512, 00:21:05.198 "num_blocks": 65536, 00:21:05.198 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:05.198 "assigned_rate_limits": { 00:21:05.198 "rw_ios_per_sec": 0, 00:21:05.198 "rw_mbytes_per_sec": 0, 00:21:05.198 "r_mbytes_per_sec": 0, 00:21:05.198 "w_mbytes_per_sec": 0 00:21:05.198 }, 00:21:05.198 "claimed": true, 00:21:05.198 "claim_type": "exclusive_write", 00:21:05.198 "zoned": false, 00:21:05.198 "supported_io_types": { 00:21:05.198 "read": true, 00:21:05.198 "write": true, 00:21:05.198 "unmap": true, 00:21:05.198 "write_zeroes": true, 00:21:05.198 "flush": true, 00:21:05.198 "reset": true, 00:21:05.198 "compare": false, 00:21:05.198 "compare_and_write": false, 00:21:05.198 "abort": true, 00:21:05.198 "nvme_admin": false, 00:21:05.198 "nvme_io": false 00:21:05.198 }, 00:21:05.198 "memory_domains": [ 00:21:05.198 { 00:21:05.198 "dma_device_id": "system", 00:21:05.198 "dma_device_type": 1 00:21:05.198 }, 00:21:05.198 { 00:21:05.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.198 "dma_device_type": 2 00:21:05.198 } 00:21:05.198 ], 00:21:05.198 "driver_specific": { 00:21:05.198 "passthru": { 00:21:05.198 "name": "pt2", 00:21:05.198 "base_bdev_name": "malloc2" 00:21:05.198 } 00:21:05.198 } 00:21:05.198 }' 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.198 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:05.199 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:05.199 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.199 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:05.199 02:21:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:05.199 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:05.199 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:05.199 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:05.199 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:05.457 "name": "pt3", 00:21:05.457 "aliases": [ 00:21:05.457 "9704ae70-b398-9851-a449-e4aa84aaaf0a" 00:21:05.457 ], 00:21:05.457 "product_name": "passthru", 00:21:05.457 "block_size": 512, 00:21:05.457 "num_blocks": 65536, 00:21:05.457 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:05.457 "assigned_rate_limits": { 00:21:05.457 "rw_ios_per_sec": 0, 00:21:05.457 "rw_mbytes_per_sec": 0, 00:21:05.457 "r_mbytes_per_sec": 0, 00:21:05.457 "w_mbytes_per_sec": 0 00:21:05.457 }, 00:21:05.457 "claimed": true, 00:21:05.457 "claim_type": "exclusive_write", 00:21:05.457 "zoned": false, 00:21:05.457 "supported_io_types": { 00:21:05.457 "read": true, 00:21:05.457 "write": true, 00:21:05.457 "unmap": true, 00:21:05.457 "write_zeroes": true, 00:21:05.457 "flush": true, 00:21:05.457 "reset": true, 00:21:05.457 "compare": false, 00:21:05.457 "compare_and_write": false, 00:21:05.457 "abort": true, 00:21:05.457 "nvme_admin": false, 00:21:05.457 "nvme_io": false 00:21:05.457 }, 00:21:05.457 "memory_domains": [ 00:21:05.457 { 00:21:05.457 "dma_device_id": "system", 00:21:05.457 "dma_device_type": 1 00:21:05.457 }, 00:21:05.457 { 00:21:05.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.457 "dma_device_type": 2 00:21:05.457 } 00:21:05.457 ], 00:21:05.457 "driver_specific": { 00:21:05.457 "passthru": { 00:21:05.457 "name": "pt3", 00:21:05.457 "base_bdev_name": "malloc3" 00:21:05.457 } 00:21:05.457 } 00:21:05.457 }' 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.457 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:05.458 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:05.458 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:05.458 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:05.458 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b pt4 00:21:05.458 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:05.728 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:05.728 "name": "pt4", 00:21:05.728 "aliases": [ 00:21:05.728 "e9188d8c-25d4-065e-983c-28c158c1d19c" 00:21:05.728 ], 00:21:05.728 "product_name": "passthru", 00:21:05.728 "block_size": 512, 00:21:05.728 "num_blocks": 65536, 00:21:05.728 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:05.728 "assigned_rate_limits": { 00:21:05.728 "rw_ios_per_sec": 0, 00:21:05.728 "rw_mbytes_per_sec": 0, 00:21:05.728 "r_mbytes_per_sec": 0, 00:21:05.728 "w_mbytes_per_sec": 0 00:21:05.728 }, 00:21:05.728 "claimed": true, 00:21:05.728 "claim_type": "exclusive_write", 00:21:05.728 "zoned": false, 00:21:05.728 "supported_io_types": { 00:21:05.728 "read": true, 00:21:05.728 "write": true, 00:21:05.728 "unmap": true, 00:21:05.728 "write_zeroes": true, 00:21:05.728 "flush": true, 00:21:05.728 "reset": true, 00:21:05.728 "compare": false, 00:21:05.728 "compare_and_write": false, 00:21:05.728 "abort": true, 00:21:05.728 "nvme_admin": false, 00:21:05.728 "nvme_io": false 00:21:05.728 }, 00:21:05.728 "memory_domains": [ 00:21:05.728 { 00:21:05.728 "dma_device_id": "system", 00:21:05.728 "dma_device_type": 1 00:21:05.728 }, 00:21:05.728 { 00:21:05.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.728 "dma_device_type": 2 00:21:05.728 } 00:21:05.728 ], 00:21:05.728 "driver_specific": { 00:21:05.728 "passthru": { 00:21:05.728 "name": "pt4", 00:21:05.728 "base_bdev_name": "malloc4" 00:21:05.728 } 00:21:05.728 } 00:21:05.728 }' 00:21:05.728 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:05.728 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:05.986 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:05.986 [2024-05-15 02:21:53.994124] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.246 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e3ddb21d-1261-11ef-99fd-bfc7c66e2865 00:21:06.246 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e3ddb21d-1261-11ef-99fd-bfc7c66e2865 ']' 00:21:06.246 02:21:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:06.246 [2024-05-15 02:21:54.242099] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.246 [2024-05-15 02:21:54.242126] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.246 [2024-05-15 02:21:54.242146] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.246 [2024-05-15 02:21:54.242165] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.246 [2024-05-15 02:21:54.242170] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc5d900 name raid_bdev1, state offline 00:21:06.246 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.246 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:06.813 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:06.813 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:06.813 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.813 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:07.072 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.072 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:07.388 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.388 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:07.388 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.388 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:07.954 02:21:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:07.954 02:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:08.520 [2024-05-15 02:21:56.258217] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:08.520 [2024-05-15 02:21:56.258701] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:08.520 [2024-05-15 02:21:56.258721] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:08.520 [2024-05-15 02:21:56.258728] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:08.520 [2024-05-15 02:21:56.258742] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:08.520 [2024-05-15 02:21:56.258779] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:08.520 [2024-05-15 02:21:56.258790] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:08.520 [2024-05-15 02:21:56.258799] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:08.520 [2024-05-15 02:21:56.258807] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.520 [2024-05-15 02:21:56.258811] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc5d680 name raid_bdev1, state configuring 00:21:08.520 request: 00:21:08.520 { 00:21:08.520 "name": "raid_bdev1", 00:21:08.520 "raid_level": "raid1", 00:21:08.520 "base_bdevs": [ 00:21:08.520 "malloc1", 00:21:08.520 "malloc2", 00:21:08.520 "malloc3", 00:21:08.520 "malloc4" 00:21:08.520 ], 00:21:08.520 "superblock": false, 00:21:08.520 "method": "bdev_raid_create", 00:21:08.520 "req_id": 1 00:21:08.520 } 00:21:08.520 Got JSON-RPC error response 00:21:08.520 response: 00:21:08.520 { 00:21:08.520 "code": -17, 00:21:08.520 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:08.520 } 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:08.520 02:21:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:08.520 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:09.088 [2024-05-15 02:21:56.818241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:09.088 [2024-05-15 02:21:56.818306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.088 [2024-05-15 02:21:56.818336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5d180 00:21:09.088 [2024-05-15 02:21:56.818344] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.088 [2024-05-15 02:21:56.818875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.088 [2024-05-15 02:21:56.818906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:09.088 [2024-05-15 02:21:56.818931] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:09.088 [2024-05-15 02:21:56.818942] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:09.088 pt1 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.088 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.088 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.088 "name": "raid_bdev1", 00:21:09.088 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:09.088 "strip_size_kb": 0, 00:21:09.088 "state": 
"configuring", 00:21:09.088 "raid_level": "raid1", 00:21:09.088 "superblock": true, 00:21:09.088 "num_base_bdevs": 4, 00:21:09.088 "num_base_bdevs_discovered": 1, 00:21:09.088 "num_base_bdevs_operational": 4, 00:21:09.088 "base_bdevs_list": [ 00:21:09.088 { 00:21:09.088 "name": "pt1", 00:21:09.088 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:09.088 "is_configured": true, 00:21:09.088 "data_offset": 2048, 00:21:09.088 "data_size": 63488 00:21:09.088 }, 00:21:09.088 { 00:21:09.088 "name": null, 00:21:09.088 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:09.088 "is_configured": false, 00:21:09.088 "data_offset": 2048, 00:21:09.088 "data_size": 63488 00:21:09.088 }, 00:21:09.088 { 00:21:09.088 "name": null, 00:21:09.088 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:09.088 "is_configured": false, 00:21:09.088 "data_offset": 2048, 00:21:09.088 "data_size": 63488 00:21:09.088 }, 00:21:09.088 { 00:21:09.088 "name": null, 00:21:09.088 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:09.088 "is_configured": false, 00:21:09.088 "data_offset": 2048, 00:21:09.088 "data_size": 63488 00:21:09.088 } 00:21:09.088 ] 00:21:09.088 }' 00:21:09.088 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.088 02:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.656 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:09.656 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.656 [2024-05-15 02:21:57.650299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.656 [2024-05-15 02:21:57.650372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.656 [2024-05-15 02:21:57.650400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5c780 00:21:09.656 [2024-05-15 02:21:57.650409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.656 [2024-05-15 02:21:57.650510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.656 [2024-05-15 02:21:57.650520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.656 [2024-05-15 02:21:57.650542] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:09.656 [2024-05-15 02:21:57.650550] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.656 pt2 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:09.999 [2024-05-15 02:21:57.906309] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.999 
02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.999 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.257 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.257 "name": "raid_bdev1", 00:21:10.257 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:10.257 "strip_size_kb": 0, 00:21:10.257 "state": "configuring", 00:21:10.257 "raid_level": "raid1", 00:21:10.257 "superblock": true, 00:21:10.257 "num_base_bdevs": 4, 00:21:10.257 "num_base_bdevs_discovered": 1, 00:21:10.257 "num_base_bdevs_operational": 4, 00:21:10.257 "base_bdevs_list": [ 00:21:10.257 { 00:21:10.257 "name": "pt1", 00:21:10.257 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:10.257 "is_configured": true, 00:21:10.257 "data_offset": 2048, 00:21:10.257 "data_size": 63488 00:21:10.257 }, 00:21:10.257 { 00:21:10.257 "name": null, 00:21:10.257 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:10.257 "is_configured": false, 00:21:10.257 "data_offset": 2048, 00:21:10.257 "data_size": 63488 00:21:10.257 }, 00:21:10.257 { 00:21:10.257 "name": null, 00:21:10.257 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:10.257 "is_configured": false, 00:21:10.257 "data_offset": 2048, 00:21:10.257 "data_size": 63488 00:21:10.257 }, 00:21:10.257 { 00:21:10.257 "name": null, 00:21:10.257 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:10.257 "is_configured": false, 00:21:10.257 "data_offset": 2048, 00:21:10.257 "data_size": 63488 00:21:10.257 } 00:21:10.257 ] 00:21:10.257 }' 00:21:10.257 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.257 02:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.515 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:10.515 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:10.515 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:10.774 [2024-05-15 02:21:58.766352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:10.774 [2024-05-15 02:21:58.766427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.774 [2024-05-15 02:21:58.766456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5c780 00:21:10.774 [2024-05-15 02:21:58.766465] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.774 [2024-05-15 02:21:58.766567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.774 [2024-05-15 02:21:58.766576] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:10.774 [2024-05-15 02:21:58.766599] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:10.774 [2024-05-15 02:21:58.766607] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:10.774 pt2 00:21:10.774 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:10.774 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:10.774 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:11.339 [2024-05-15 02:21:59.066381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:11.339 [2024-05-15 02:21:59.066449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.339 [2024-05-15 02:21:59.066477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5db80 00:21:11.339 [2024-05-15 02:21:59.066485] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.339 [2024-05-15 02:21:59.066590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.339 [2024-05-15 02:21:59.066599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:11.339 [2024-05-15 02:21:59.066621] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:11.339 [2024-05-15 02:21:59.066629] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:11.339 pt3 00:21:11.339 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:11.339 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:11.339 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:11.339 [2024-05-15 02:21:59.342385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:11.339 [2024-05-15 02:21:59.342449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.339 [2024-05-15 02:21:59.342495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5d900 00:21:11.339 [2024-05-15 02:21:59.342503] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.339 [2024-05-15 02:21:59.342606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.339 [2024-05-15 02:21:59.342615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:11.339 [2024-05-15 02:21:59.342636] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:11.339 [2024-05-15 02:21:59.342645] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:11.339 [2024-05-15 02:21:59.342673] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cc5cc80 00:21:11.339 [2024-05-15 02:21:59.342678] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:11.339 [2024-05-15 02:21:59.342700] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ccbfe20 00:21:11.339 [2024-05-15 
02:21:59.342745] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cc5cc80 00:21:11.339 [2024-05-15 02:21:59.342749] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cc5cc80 00:21:11.339 [2024-05-15 02:21:59.342766] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.339 pt4 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.598 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.856 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.856 "name": "raid_bdev1", 00:21:11.856 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:11.856 "strip_size_kb": 0, 00:21:11.856 "state": "online", 00:21:11.856 "raid_level": "raid1", 00:21:11.856 "superblock": true, 00:21:11.856 "num_base_bdevs": 4, 00:21:11.856 "num_base_bdevs_discovered": 4, 00:21:11.856 "num_base_bdevs_operational": 4, 00:21:11.856 "base_bdevs_list": [ 00:21:11.856 { 00:21:11.856 "name": "pt1", 00:21:11.856 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:11.856 "is_configured": true, 00:21:11.856 "data_offset": 2048, 00:21:11.856 "data_size": 63488 00:21:11.856 }, 00:21:11.856 { 00:21:11.856 "name": "pt2", 00:21:11.856 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:11.856 "is_configured": true, 00:21:11.856 "data_offset": 2048, 00:21:11.856 "data_size": 63488 00:21:11.856 }, 00:21:11.856 { 00:21:11.856 "name": "pt3", 00:21:11.856 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:11.856 "is_configured": true, 00:21:11.856 "data_offset": 2048, 00:21:11.856 "data_size": 63488 00:21:11.856 }, 00:21:11.856 { 00:21:11.856 "name": "pt4", 00:21:11.856 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:11.856 "is_configured": true, 00:21:11.856 "data_offset": 2048, 00:21:11.856 "data_size": 63488 00:21:11.856 } 00:21:11.856 ] 00:21:11.856 }' 00:21:11.856 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.856 02:21:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:12.114 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:12.371 [2024-05-15 02:22:00.354484] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.371 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:12.371 "name": "raid_bdev1", 00:21:12.371 "aliases": [ 00:21:12.371 "e3ddb21d-1261-11ef-99fd-bfc7c66e2865" 00:21:12.371 ], 00:21:12.371 "product_name": "Raid Volume", 00:21:12.371 "block_size": 512, 00:21:12.371 "num_blocks": 63488, 00:21:12.371 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:12.371 "assigned_rate_limits": { 00:21:12.371 "rw_ios_per_sec": 0, 00:21:12.371 "rw_mbytes_per_sec": 0, 00:21:12.371 "r_mbytes_per_sec": 0, 00:21:12.371 "w_mbytes_per_sec": 0 00:21:12.371 }, 00:21:12.371 "claimed": false, 00:21:12.371 "zoned": false, 00:21:12.371 "supported_io_types": { 00:21:12.371 "read": true, 00:21:12.371 "write": true, 00:21:12.371 "unmap": false, 00:21:12.371 "write_zeroes": true, 00:21:12.371 "flush": false, 00:21:12.371 "reset": true, 00:21:12.371 "compare": false, 00:21:12.371 "compare_and_write": false, 00:21:12.371 "abort": false, 00:21:12.371 "nvme_admin": false, 00:21:12.371 "nvme_io": false 00:21:12.371 }, 00:21:12.371 "memory_domains": [ 00:21:12.371 { 00:21:12.371 "dma_device_id": "system", 00:21:12.371 "dma_device_type": 1 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.371 "dma_device_type": 2 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "dma_device_id": "system", 00:21:12.371 "dma_device_type": 1 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.371 "dma_device_type": 2 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "dma_device_id": "system", 00:21:12.371 "dma_device_type": 1 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.371 "dma_device_type": 2 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "dma_device_id": "system", 00:21:12.371 "dma_device_type": 1 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.371 "dma_device_type": 2 00:21:12.371 } 00:21:12.371 ], 00:21:12.371 "driver_specific": { 00:21:12.371 "raid": { 00:21:12.371 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:12.371 "strip_size_kb": 0, 00:21:12.371 "state": "online", 00:21:12.371 "raid_level": "raid1", 00:21:12.371 "superblock": true, 00:21:12.371 "num_base_bdevs": 4, 00:21:12.371 "num_base_bdevs_discovered": 4, 00:21:12.371 "num_base_bdevs_operational": 4, 00:21:12.371 "base_bdevs_list": [ 00:21:12.371 { 
00:21:12.371 "name": "pt1", 00:21:12.371 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:12.371 "is_configured": true, 00:21:12.371 "data_offset": 2048, 00:21:12.371 "data_size": 63488 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "name": "pt2", 00:21:12.371 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:12.371 "is_configured": true, 00:21:12.371 "data_offset": 2048, 00:21:12.371 "data_size": 63488 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "name": "pt3", 00:21:12.371 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:12.371 "is_configured": true, 00:21:12.371 "data_offset": 2048, 00:21:12.371 "data_size": 63488 00:21:12.371 }, 00:21:12.371 { 00:21:12.371 "name": "pt4", 00:21:12.371 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:12.371 "is_configured": true, 00:21:12.371 "data_offset": 2048, 00:21:12.371 "data_size": 63488 00:21:12.371 } 00:21:12.371 ] 00:21:12.371 } 00:21:12.371 } 00:21:12.371 }' 00:21:12.371 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:12.371 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:12.371 pt2 00:21:12.371 pt3 00:21:12.371 pt4' 00:21:12.371 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:12.371 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:12.371 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:12.629 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:12.629 "name": "pt1", 00:21:12.629 "aliases": [ 00:21:12.629 "9637fc98-292e-c55b-adfd-71dbca77ee03" 00:21:12.629 ], 00:21:12.629 "product_name": "passthru", 00:21:12.629 "block_size": 512, 00:21:12.629 "num_blocks": 65536, 00:21:12.629 "uuid": "9637fc98-292e-c55b-adfd-71dbca77ee03", 00:21:12.629 "assigned_rate_limits": { 00:21:12.629 "rw_ios_per_sec": 0, 00:21:12.629 "rw_mbytes_per_sec": 0, 00:21:12.629 "r_mbytes_per_sec": 0, 00:21:12.629 "w_mbytes_per_sec": 0 00:21:12.629 }, 00:21:12.629 "claimed": true, 00:21:12.629 "claim_type": "exclusive_write", 00:21:12.629 "zoned": false, 00:21:12.629 "supported_io_types": { 00:21:12.629 "read": true, 00:21:12.629 "write": true, 00:21:12.629 "unmap": true, 00:21:12.629 "write_zeroes": true, 00:21:12.629 "flush": true, 00:21:12.629 "reset": true, 00:21:12.629 "compare": false, 00:21:12.629 "compare_and_write": false, 00:21:12.629 "abort": true, 00:21:12.629 "nvme_admin": false, 00:21:12.629 "nvme_io": false 00:21:12.629 }, 00:21:12.629 "memory_domains": [ 00:21:12.629 { 00:21:12.629 "dma_device_id": "system", 00:21:12.629 "dma_device_type": 1 00:21:12.629 }, 00:21:12.629 { 00:21:12.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.629 "dma_device_type": 2 00:21:12.629 } 00:21:12.629 ], 00:21:12.629 "driver_specific": { 00:21:12.629 "passthru": { 00:21:12.629 "name": "pt1", 00:21:12.629 "base_bdev_name": "malloc1" 00:21:12.629 } 00:21:12.629 } 00:21:12.629 }' 00:21:12.629 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:12.629 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:12.887 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:13.145 "name": "pt2", 00:21:13.145 "aliases": [ 00:21:13.145 "107a02d5-fcbc-a65c-8a46-15502b3c58d0" 00:21:13.145 ], 00:21:13.145 "product_name": "passthru", 00:21:13.145 "block_size": 512, 00:21:13.145 "num_blocks": 65536, 00:21:13.145 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:13.145 "assigned_rate_limits": { 00:21:13.145 "rw_ios_per_sec": 0, 00:21:13.145 "rw_mbytes_per_sec": 0, 00:21:13.145 "r_mbytes_per_sec": 0, 00:21:13.145 "w_mbytes_per_sec": 0 00:21:13.145 }, 00:21:13.145 "claimed": true, 00:21:13.145 "claim_type": "exclusive_write", 00:21:13.145 "zoned": false, 00:21:13.145 "supported_io_types": { 00:21:13.145 "read": true, 00:21:13.145 "write": true, 00:21:13.145 "unmap": true, 00:21:13.145 "write_zeroes": true, 00:21:13.145 "flush": true, 00:21:13.145 "reset": true, 00:21:13.145 "compare": false, 00:21:13.145 "compare_and_write": false, 00:21:13.145 "abort": true, 00:21:13.145 "nvme_admin": false, 00:21:13.145 "nvme_io": false 00:21:13.145 }, 00:21:13.145 "memory_domains": [ 00:21:13.145 { 00:21:13.145 "dma_device_id": "system", 00:21:13.145 "dma_device_type": 1 00:21:13.145 }, 00:21:13.145 { 00:21:13.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.145 "dma_device_type": 2 00:21:13.145 } 00:21:13.145 ], 00:21:13.145 "driver_specific": { 00:21:13.145 "passthru": { 00:21:13.145 "name": "pt2", 00:21:13.145 "base_bdev_name": "malloc2" 00:21:13.145 } 00:21:13.145 } 00:21:13.145 }' 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:13.145 02:22:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:13.145 02:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:13.403 "name": "pt3", 00:21:13.403 "aliases": [ 00:21:13.403 "9704ae70-b398-9851-a449-e4aa84aaaf0a" 00:21:13.403 ], 00:21:13.403 "product_name": "passthru", 00:21:13.403 "block_size": 512, 00:21:13.403 "num_blocks": 65536, 00:21:13.403 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:13.403 "assigned_rate_limits": { 00:21:13.403 "rw_ios_per_sec": 0, 00:21:13.403 "rw_mbytes_per_sec": 0, 00:21:13.403 "r_mbytes_per_sec": 0, 00:21:13.403 "w_mbytes_per_sec": 0 00:21:13.403 }, 00:21:13.403 "claimed": true, 00:21:13.403 "claim_type": "exclusive_write", 00:21:13.403 "zoned": false, 00:21:13.403 "supported_io_types": { 00:21:13.403 "read": true, 00:21:13.403 "write": true, 00:21:13.403 "unmap": true, 00:21:13.403 "write_zeroes": true, 00:21:13.403 "flush": true, 00:21:13.403 "reset": true, 00:21:13.403 "compare": false, 00:21:13.403 "compare_and_write": false, 00:21:13.403 "abort": true, 00:21:13.403 "nvme_admin": false, 00:21:13.403 "nvme_io": false 00:21:13.403 }, 00:21:13.403 "memory_domains": [ 00:21:13.403 { 00:21:13.403 "dma_device_id": "system", 00:21:13.403 "dma_device_type": 1 00:21:13.403 }, 00:21:13.403 { 00:21:13.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.403 "dma_device_type": 2 00:21:13.403 } 00:21:13.403 ], 00:21:13.403 "driver_specific": { 00:21:13.403 "passthru": { 00:21:13.403 "name": "pt3", 00:21:13.403 "base_bdev_name": "malloc3" 00:21:13.403 } 00:21:13.403 } 00:21:13.403 }' 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:13.403 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:13.661 "name": "pt4", 00:21:13.661 "aliases": [ 00:21:13.661 "e9188d8c-25d4-065e-983c-28c158c1d19c" 00:21:13.661 ], 00:21:13.661 "product_name": "passthru", 00:21:13.661 "block_size": 512, 00:21:13.661 "num_blocks": 65536, 00:21:13.661 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:13.661 "assigned_rate_limits": { 00:21:13.661 "rw_ios_per_sec": 0, 00:21:13.661 "rw_mbytes_per_sec": 0, 00:21:13.661 "r_mbytes_per_sec": 0, 00:21:13.661 "w_mbytes_per_sec": 0 00:21:13.661 }, 00:21:13.661 "claimed": true, 00:21:13.661 "claim_type": "exclusive_write", 00:21:13.661 "zoned": false, 00:21:13.661 "supported_io_types": { 00:21:13.661 "read": true, 00:21:13.661 "write": true, 00:21:13.661 "unmap": true, 00:21:13.661 "write_zeroes": true, 00:21:13.661 "flush": true, 00:21:13.661 "reset": true, 00:21:13.661 "compare": false, 00:21:13.661 "compare_and_write": false, 00:21:13.661 "abort": true, 00:21:13.661 "nvme_admin": false, 00:21:13.661 "nvme_io": false 00:21:13.661 }, 00:21:13.661 "memory_domains": [ 00:21:13.661 { 00:21:13.661 "dma_device_id": "system", 00:21:13.661 "dma_device_type": 1 00:21:13.661 }, 00:21:13.661 { 00:21:13.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.661 "dma_device_type": 2 00:21:13.661 } 00:21:13.661 ], 00:21:13.661 "driver_specific": { 00:21:13.661 "passthru": { 00:21:13.661 "name": "pt4", 00:21:13.661 "base_bdev_name": "malloc4" 00:21:13.661 } 00:21:13.661 } 00:21:13.661 }' 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:13.661 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:14.016 [2024-05-15 02:22:01.878580] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.016 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
e3ddb21d-1261-11ef-99fd-bfc7c66e2865 '!=' e3ddb21d-1261-11ef-99fd-bfc7c66e2865 ']' 00:21:14.017 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:14.017 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:14.017 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:21:14.017 02:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:14.274 [2024-05-15 02:22:02.110558] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.274 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.531 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.531 "name": "raid_bdev1", 00:21:14.531 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:14.531 "strip_size_kb": 0, 00:21:14.531 "state": "online", 00:21:14.531 "raid_level": "raid1", 00:21:14.531 "superblock": true, 00:21:14.531 "num_base_bdevs": 4, 00:21:14.531 "num_base_bdevs_discovered": 3, 00:21:14.531 "num_base_bdevs_operational": 3, 00:21:14.531 "base_bdevs_list": [ 00:21:14.531 { 00:21:14.531 "name": null, 00:21:14.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.531 "is_configured": false, 00:21:14.531 "data_offset": 2048, 00:21:14.531 "data_size": 63488 00:21:14.531 }, 00:21:14.531 { 00:21:14.531 "name": "pt2", 00:21:14.531 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:14.531 "is_configured": true, 00:21:14.531 "data_offset": 2048, 00:21:14.531 "data_size": 63488 00:21:14.531 }, 00:21:14.531 { 00:21:14.531 "name": "pt3", 00:21:14.531 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:14.531 "is_configured": true, 00:21:14.531 "data_offset": 2048, 00:21:14.531 "data_size": 63488 00:21:14.531 }, 00:21:14.531 { 00:21:14.531 "name": "pt4", 00:21:14.531 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:14.531 "is_configured": true, 00:21:14.531 "data_offset": 2048, 00:21:14.531 "data_size": 63488 00:21:14.531 } 00:21:14.531 ] 00:21:14.531 }' 00:21:14.531 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 
-- # xtrace_disable 00:21:14.531 02:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.789 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:15.047 [2024-05-15 02:22:02.970597] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:15.047 [2024-05-15 02:22:02.970633] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.047 [2024-05-15 02:22:02.970656] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.047 [2024-05-15 02:22:02.970692] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.047 [2024-05-15 02:22:02.970696] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc5cc80 name raid_bdev1, state offline 00:21:15.047 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:15.047 02:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.305 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:15.305 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:15.305 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:15.305 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:15.305 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:15.562 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:15.562 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:15.562 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:15.819 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:15.819 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:15.819 02:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:16.076 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:16.076 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:16.076 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:16.076 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:16.076 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:16.335 [2024-05-15 02:22:04.274670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:16.335 [2024-05-15 02:22:04.274743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.335 [2024-05-15 02:22:04.274771] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5d900 00:21:16.335 [2024-05-15 02:22:04.274780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.335 [2024-05-15 02:22:04.275302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.335 [2024-05-15 02:22:04.275333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:16.335 [2024-05-15 02:22:04.275357] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:16.335 [2024-05-15 02:22:04.275369] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:16.335 pt2 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.335 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.593 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:16.593 "name": "raid_bdev1", 00:21:16.593 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:16.593 "strip_size_kb": 0, 00:21:16.593 "state": "configuring", 00:21:16.593 "raid_level": "raid1", 00:21:16.593 "superblock": true, 00:21:16.593 "num_base_bdevs": 4, 00:21:16.593 "num_base_bdevs_discovered": 1, 00:21:16.593 "num_base_bdevs_operational": 3, 00:21:16.593 "base_bdevs_list": [ 00:21:16.593 { 00:21:16.593 "name": null, 00:21:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.593 "is_configured": false, 00:21:16.593 "data_offset": 2048, 00:21:16.593 "data_size": 63488 00:21:16.593 }, 00:21:16.593 { 00:21:16.593 "name": "pt2", 00:21:16.593 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:16.593 "is_configured": true, 00:21:16.593 "data_offset": 2048, 00:21:16.593 "data_size": 63488 00:21:16.593 }, 00:21:16.593 { 00:21:16.593 "name": null, 00:21:16.593 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:16.593 "is_configured": false, 00:21:16.593 "data_offset": 2048, 00:21:16.593 "data_size": 63488 00:21:16.593 }, 00:21:16.593 { 00:21:16.593 "name": null, 00:21:16.593 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:16.593 "is_configured": false, 00:21:16.593 "data_offset": 2048, 00:21:16.593 "data_size": 63488 00:21:16.593 } 00:21:16.593 ] 00:21:16.593 }' 00:21:16.593 02:22:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:16.593 02:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.197 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:17.197 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:17.197 02:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:17.197 [2024-05-15 02:22:05.186727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:17.197 [2024-05-15 02:22:05.186801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.197 [2024-05-15 02:22:05.186835] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5d680 00:21:17.197 [2024-05-15 02:22:05.186843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.197 [2024-05-15 02:22:05.186949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.197 [2024-05-15 02:22:05.186959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:17.198 [2024-05-15 02:22:05.186981] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:17.198 [2024-05-15 02:22:05.186989] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:17.198 pt3 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.456 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.713 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:17.713 "name": "raid_bdev1", 00:21:17.713 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:17.713 "strip_size_kb": 0, 00:21:17.713 "state": "configuring", 00:21:17.713 "raid_level": "raid1", 00:21:17.713 "superblock": true, 00:21:17.713 "num_base_bdevs": 4, 00:21:17.713 "num_base_bdevs_discovered": 2, 00:21:17.713 "num_base_bdevs_operational": 3, 00:21:17.714 "base_bdevs_list": [ 00:21:17.714 { 00:21:17.714 "name": 
null, 00:21:17.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.714 "is_configured": false, 00:21:17.714 "data_offset": 2048, 00:21:17.714 "data_size": 63488 00:21:17.714 }, 00:21:17.714 { 00:21:17.714 "name": "pt2", 00:21:17.714 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:17.714 "is_configured": true, 00:21:17.714 "data_offset": 2048, 00:21:17.714 "data_size": 63488 00:21:17.714 }, 00:21:17.714 { 00:21:17.714 "name": "pt3", 00:21:17.714 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:17.714 "is_configured": true, 00:21:17.714 "data_offset": 2048, 00:21:17.714 "data_size": 63488 00:21:17.714 }, 00:21:17.714 { 00:21:17.714 "name": null, 00:21:17.714 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:17.714 "is_configured": false, 00:21:17.714 "data_offset": 2048, 00:21:17.714 "data_size": 63488 00:21:17.714 } 00:21:17.714 ] 00:21:17.714 }' 00:21:17.714 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:17.714 02:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.972 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:17.972 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:17.972 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:21:17.972 02:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:18.231 [2024-05-15 02:22:06.094780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:18.231 [2024-05-15 02:22:06.094850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.231 [2024-05-15 02:22:06.094883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5cc80 00:21:18.231 [2024-05-15 02:22:06.094892] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.231 [2024-05-15 02:22:06.094996] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.231 [2024-05-15 02:22:06.095005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:18.231 [2024-05-15 02:22:06.095027] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:18.231 [2024-05-15 02:22:06.095036] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:18.231 [2024-05-15 02:22:06.095063] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cc5c780 00:21:18.231 [2024-05-15 02:22:06.095067] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:18.231 [2024-05-15 02:22:06.095086] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ccbfe20 00:21:18.231 [2024-05-15 02:22:06.095123] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cc5c780 00:21:18.231 [2024-05-15 02:22:06.095126] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cc5c780 00:21:18.231 [2024-05-15 02:22:06.095150] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.231 pt4 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.231 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.490 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.490 "name": "raid_bdev1", 00:21:18.490 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:18.490 "strip_size_kb": 0, 00:21:18.490 "state": "online", 00:21:18.490 "raid_level": "raid1", 00:21:18.490 "superblock": true, 00:21:18.490 "num_base_bdevs": 4, 00:21:18.490 "num_base_bdevs_discovered": 3, 00:21:18.490 "num_base_bdevs_operational": 3, 00:21:18.490 "base_bdevs_list": [ 00:21:18.490 { 00:21:18.490 "name": null, 00:21:18.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.490 "is_configured": false, 00:21:18.490 "data_offset": 2048, 00:21:18.490 "data_size": 63488 00:21:18.490 }, 00:21:18.490 { 00:21:18.490 "name": "pt2", 00:21:18.490 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:18.490 "is_configured": true, 00:21:18.490 "data_offset": 2048, 00:21:18.490 "data_size": 63488 00:21:18.490 }, 00:21:18.490 { 00:21:18.490 "name": "pt3", 00:21:18.490 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:18.490 "is_configured": true, 00:21:18.490 "data_offset": 2048, 00:21:18.490 "data_size": 63488 00:21:18.490 }, 00:21:18.490 { 00:21:18.490 "name": "pt4", 00:21:18.490 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:18.490 "is_configured": true, 00:21:18.490 "data_offset": 2048, 00:21:18.490 "data_size": 63488 00:21:18.490 } 00:21:18.490 ] 00:21:18.490 }' 00:21:18.490 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.490 02:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.751 02:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:19.032 [2024-05-15 02:22:06.990837] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.032 [2024-05-15 02:22:06.990873] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.032 [2024-05-15 02:22:06.990908] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.032 [2024-05-15 02:22:06.990926] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.032 
[2024-05-15 02:22:06.990930] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc5c780 name raid_bdev1, state offline 00:21:19.032 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.032 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:19.294 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:19.294 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:19.294 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:21:19.294 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:21:19.294 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:19.860 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:19.861 [2024-05-15 02:22:07.854882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:19.861 [2024-05-15 02:22:07.854951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.861 [2024-05-15 02:22:07.854981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5cc80 00:21:19.861 [2024-05-15 02:22:07.854990] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.861 [2024-05-15 02:22:07.855540] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.861 [2024-05-15 02:22:07.855565] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:19.861 [2024-05-15 02:22:07.855591] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:19.861 [2024-05-15 02:22:07.855603] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:19.861 [2024-05-15 02:22:07.855630] bdev_raid.c:3489:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:19.861 [2024-05-15 02:22:07.855634] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.861 [2024-05-15 02:22:07.855639] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc5c780 name raid_bdev1, state configuring 00:21:19.861 [2024-05-15 02:22:07.855646] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:19.861 [2024-05-15 02:22:07.855664] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:19.861 pt1 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.861 02:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.427 02:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.427 "name": "raid_bdev1", 00:21:20.427 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:20.427 "strip_size_kb": 0, 00:21:20.427 "state": "configuring", 00:21:20.427 "raid_level": "raid1", 00:21:20.427 "superblock": true, 00:21:20.427 "num_base_bdevs": 4, 00:21:20.427 "num_base_bdevs_discovered": 2, 00:21:20.427 "num_base_bdevs_operational": 3, 00:21:20.427 "base_bdevs_list": [ 00:21:20.427 { 00:21:20.427 "name": null, 00:21:20.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.427 "is_configured": false, 00:21:20.427 "data_offset": 2048, 00:21:20.427 "data_size": 63488 00:21:20.427 }, 00:21:20.427 { 00:21:20.427 "name": "pt2", 00:21:20.427 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 00:21:20.427 "is_configured": true, 00:21:20.427 "data_offset": 2048, 00:21:20.427 "data_size": 63488 00:21:20.427 }, 00:21:20.427 { 00:21:20.427 "name": "pt3", 00:21:20.427 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:20.427 "is_configured": true, 00:21:20.427 "data_offset": 2048, 00:21:20.427 "data_size": 63488 00:21:20.427 }, 00:21:20.427 { 00:21:20.427 "name": null, 00:21:20.427 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:20.427 "is_configured": false, 00:21:20.427 "data_offset": 2048, 00:21:20.427 "data_size": 63488 00:21:20.427 } 00:21:20.427 ] 00:21:20.427 }' 00:21:20.427 02:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.427 02:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.685 02:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:20.685 02:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:20.946 02:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:20.946 02:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:21.211 [2024-05-15 02:22:09.034962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:21.211 [2024-05-15 02:22:09.035038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.211 [2024-05-15 02:22:09.035087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cc5d180 00:21:21.211 [2024-05-15 02:22:09.035105] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.211 [2024-05-15 02:22:09.035264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.211 [2024-05-15 02:22:09.035295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:21.211 [2024-05-15 02:22:09.035344] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:21.211 [2024-05-15 02:22:09.035364] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:21.211 [2024-05-15 02:22:09.035395] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cc5c780 00:21:21.211 [2024-05-15 02:22:09.035400] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:21.211 [2024-05-15 02:22:09.035435] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ccbfe20 00:21:21.211 [2024-05-15 02:22:09.035495] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cc5c780 00:21:21.211 [2024-05-15 02:22:09.035508] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cc5c780 00:21:21.211 [2024-05-15 02:22:09.035545] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.211 pt4 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.211 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.477 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.477 "name": "raid_bdev1", 00:21:21.477 "uuid": "e3ddb21d-1261-11ef-99fd-bfc7c66e2865", 00:21:21.477 "strip_size_kb": 0, 00:21:21.477 "state": "online", 00:21:21.477 "raid_level": "raid1", 00:21:21.477 "superblock": true, 00:21:21.477 "num_base_bdevs": 4, 00:21:21.477 "num_base_bdevs_discovered": 3, 00:21:21.477 "num_base_bdevs_operational": 3, 00:21:21.477 "base_bdevs_list": [ 00:21:21.477 { 00:21:21.477 "name": null, 00:21:21.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.477 "is_configured": false, 00:21:21.477 "data_offset": 2048, 00:21:21.477 "data_size": 63488 00:21:21.477 }, 00:21:21.477 { 00:21:21.477 "name": "pt2", 00:21:21.477 "uuid": "107a02d5-fcbc-a65c-8a46-15502b3c58d0", 
00:21:21.477 "is_configured": true, 00:21:21.477 "data_offset": 2048, 00:21:21.477 "data_size": 63488 00:21:21.477 }, 00:21:21.477 { 00:21:21.477 "name": "pt3", 00:21:21.477 "uuid": "9704ae70-b398-9851-a449-e4aa84aaaf0a", 00:21:21.477 "is_configured": true, 00:21:21.477 "data_offset": 2048, 00:21:21.477 "data_size": 63488 00:21:21.477 }, 00:21:21.477 { 00:21:21.477 "name": "pt4", 00:21:21.477 "uuid": "e9188d8c-25d4-065e-983c-28c158c1d19c", 00:21:21.477 "is_configured": true, 00:21:21.477 "data_offset": 2048, 00:21:21.477 "data_size": 63488 00:21:21.477 } 00:21:21.477 ] 00:21:21.477 }' 00:21:21.477 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.477 02:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.744 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:21.744 02:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:22.014 02:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:22.014 02:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:22.014 02:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:22.286 [2024-05-15 02:22:10.219064] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e3ddb21d-1261-11ef-99fd-bfc7c66e2865 '!=' e3ddb21d-1261-11ef-99fd-bfc7c66e2865 ']' 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62979 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 62979 ']' 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 62979 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 62979 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:22.286 killing process with pid 62979 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62979' 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 62979 00:21:22.286 [2024-05-15 02:22:10.250236] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.286 [2024-05-15 02:22:10.250282] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.286 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 62979 00:21:22.286 [2024-05-15 02:22:10.250302] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.286 [2024-05-15 02:22:10.250308] 
bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc5c780 name raid_bdev1, state offline 00:21:22.286 [2024-05-15 02:22:10.269589] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.560 02:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:22.560 00:21:22.560 real 0m22.010s 00:21:22.560 user 0m40.193s 00:21:22.560 sys 0m3.028s 00:21:22.560 ************************************ 00:21:22.560 END TEST raid_superblock_test 00:21:22.560 ************************************ 00:21:22.560 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:22.560 02:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.560 02:22:10 bdev_raid -- bdev/bdev_raid.sh@809 -- # '[' '' = true ']' 00:21:22.560 02:22:10 bdev_raid -- bdev/bdev_raid.sh@818 -- # '[' n == y ']' 00:21:22.560 02:22:10 bdev_raid -- bdev/bdev_raid.sh@830 -- # base_blocklen=4096 00:21:22.560 02:22:10 bdev_raid -- bdev/bdev_raid.sh@832 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:22.560 02:22:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:22.560 02:22:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:22.560 02:22:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:22.560 ************************************ 00:21:22.560 START TEST raid_state_function_test_sb_4k 00:21:22.560 ************************************ 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # raid_pid=63617 00:21:22.560 Process raid pid: 63617 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 63617' 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@247 -- # waitforlisten 63617 /var/tmp/spdk-raid.sock 00:21:22.560 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:22.561 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 63617 ']' 00:21:22.561 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:22.561 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:22.561 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:22.561 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.561 02:22:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:22.561 [2024-05-15 02:22:10.471363] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:22.561 [2024-05-15 02:22:10.471602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:23.173 EAL: TSC is not safe to use in SMP mode 00:21:23.173 EAL: TSC is not invariant 00:21:23.173 [2024-05-15 02:22:10.949791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.173 [2024-05-15 02:22:11.038259] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:21:23.173 [2024-05-15 02:22:11.040533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.173 [2024-05-15 02:22:11.041328] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.173 [2024-05-15 02:22:11.041345] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.432 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:23.432 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:21:23.432 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:23.690 [2024-05-15 02:22:11.642157] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:23.690 [2024-05-15 02:22:11.642219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:23.690 [2024-05-15 02:22:11.642224] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:23.690 [2024-05-15 02:22:11.642233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.690 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.949 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.949 "name": "Existed_Raid", 00:21:23.949 "uuid": "efd20776-1261-11ef-99fd-bfc7c66e2865", 00:21:23.949 "strip_size_kb": 0, 00:21:23.949 "state": "configuring", 00:21:23.949 "raid_level": "raid1", 00:21:23.949 "superblock": true, 00:21:23.949 "num_base_bdevs": 2, 00:21:23.949 "num_base_bdevs_discovered": 0, 00:21:23.949 "num_base_bdevs_operational": 2, 00:21:23.949 "base_bdevs_list": [ 00:21:23.949 { 00:21:23.949 "name": "BaseBdev1", 00:21:23.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.949 "is_configured": false, 00:21:23.949 "data_offset": 0, 
00:21:23.949 "data_size": 0 00:21:23.949 }, 00:21:23.949 { 00:21:23.949 "name": "BaseBdev2", 00:21:23.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.949 "is_configured": false, 00:21:23.949 "data_offset": 0, 00:21:23.949 "data_size": 0 00:21:23.949 } 00:21:23.949 ] 00:21:23.949 }' 00:21:23.949 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.949 02:22:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:24.513 02:22:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:24.772 [2024-05-15 02:22:12.586188] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.772 [2024-05-15 02:22:12.586220] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5cf500 name Existed_Raid, state configuring 00:21:24.772 02:22:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:25.030 [2024-05-15 02:22:12.902212] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.030 [2024-05-15 02:22:12.902283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.030 [2024-05-15 02:22:12.902287] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.030 [2024-05-15 02:22:12.902297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.030 02:22:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:21:25.289 [2024-05-15 02:22:13.131186] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.289 BaseBdev1 00:21:25.289 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:21:25.289 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:25.289 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:25.289 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:21:25.289 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:25.289 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:25.289 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.548 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:25.806 [ 00:21:25.806 { 00:21:25.806 "name": "BaseBdev1", 00:21:25.806 "aliases": [ 00:21:25.806 "f0b51716-1261-11ef-99fd-bfc7c66e2865" 00:21:25.806 ], 00:21:25.806 "product_name": "Malloc disk", 00:21:25.806 "block_size": 4096, 00:21:25.806 "num_blocks": 8192, 00:21:25.806 "uuid": "f0b51716-1261-11ef-99fd-bfc7c66e2865", 00:21:25.806 
"assigned_rate_limits": { 00:21:25.806 "rw_ios_per_sec": 0, 00:21:25.806 "rw_mbytes_per_sec": 0, 00:21:25.806 "r_mbytes_per_sec": 0, 00:21:25.806 "w_mbytes_per_sec": 0 00:21:25.806 }, 00:21:25.806 "claimed": true, 00:21:25.806 "claim_type": "exclusive_write", 00:21:25.806 "zoned": false, 00:21:25.806 "supported_io_types": { 00:21:25.806 "read": true, 00:21:25.806 "write": true, 00:21:25.806 "unmap": true, 00:21:25.806 "write_zeroes": true, 00:21:25.806 "flush": true, 00:21:25.806 "reset": true, 00:21:25.806 "compare": false, 00:21:25.806 "compare_and_write": false, 00:21:25.806 "abort": true, 00:21:25.806 "nvme_admin": false, 00:21:25.806 "nvme_io": false 00:21:25.806 }, 00:21:25.806 "memory_domains": [ 00:21:25.806 { 00:21:25.806 "dma_device_id": "system", 00:21:25.806 "dma_device_type": 1 00:21:25.806 }, 00:21:25.806 { 00:21:25.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.806 "dma_device_type": 2 00:21:25.806 } 00:21:25.806 ], 00:21:25.806 "driver_specific": {} 00:21:25.806 } 00:21:25.806 ] 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.806 02:22:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.064 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.064 "name": "Existed_Raid", 00:21:26.064 "uuid": "f0924c61-1261-11ef-99fd-bfc7c66e2865", 00:21:26.064 "strip_size_kb": 0, 00:21:26.065 "state": "configuring", 00:21:26.065 "raid_level": "raid1", 00:21:26.065 "superblock": true, 00:21:26.065 "num_base_bdevs": 2, 00:21:26.065 "num_base_bdevs_discovered": 1, 00:21:26.065 "num_base_bdevs_operational": 2, 00:21:26.065 "base_bdevs_list": [ 00:21:26.065 { 00:21:26.065 "name": "BaseBdev1", 00:21:26.065 "uuid": "f0b51716-1261-11ef-99fd-bfc7c66e2865", 00:21:26.065 "is_configured": true, 00:21:26.065 "data_offset": 256, 00:21:26.065 "data_size": 7936 00:21:26.065 }, 00:21:26.065 { 00:21:26.065 "name": "BaseBdev2", 00:21:26.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.065 "is_configured": false, 
00:21:26.065 "data_offset": 0, 00:21:26.065 "data_size": 0 00:21:26.065 } 00:21:26.065 ] 00:21:26.065 }' 00:21:26.065 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.065 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:26.630 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:26.630 [2024-05-15 02:22:14.586279] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.630 [2024-05-15 02:22:14.586318] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5cf500 name Existed_Raid, state configuring 00:21:26.630 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:27.194 [2024-05-15 02:22:14.926321] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.194 [2024-05-15 02:22:14.927068] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.194 [2024-05-15 02:22:14.927119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.194 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:21:27.194 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.195 02:22:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.452 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.452 "name": "Existed_Raid", 00:21:27.452 "uuid": "f1c726f8-1261-11ef-99fd-bfc7c66e2865", 00:21:27.452 "strip_size_kb": 0, 00:21:27.452 "state": "configuring", 00:21:27.452 "raid_level": "raid1", 00:21:27.452 "superblock": true, 00:21:27.452 
"num_base_bdevs": 2, 00:21:27.452 "num_base_bdevs_discovered": 1, 00:21:27.452 "num_base_bdevs_operational": 2, 00:21:27.452 "base_bdevs_list": [ 00:21:27.452 { 00:21:27.452 "name": "BaseBdev1", 00:21:27.452 "uuid": "f0b51716-1261-11ef-99fd-bfc7c66e2865", 00:21:27.452 "is_configured": true, 00:21:27.452 "data_offset": 256, 00:21:27.452 "data_size": 7936 00:21:27.452 }, 00:21:27.452 { 00:21:27.452 "name": "BaseBdev2", 00:21:27.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.452 "is_configured": false, 00:21:27.452 "data_offset": 0, 00:21:27.452 "data_size": 0 00:21:27.452 } 00:21:27.452 ] 00:21:27.452 }' 00:21:27.452 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.452 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:27.709 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:21:27.965 [2024-05-15 02:22:15.822511] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:27.965 [2024-05-15 02:22:15.822578] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d5cfa00 00:21:27.965 [2024-05-15 02:22:15.822584] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:27.965 [2024-05-15 02:22:15.822603] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d632ec0 00:21:27.965 [2024-05-15 02:22:15.822640] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d5cfa00 00:21:27.965 [2024-05-15 02:22:15.822644] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d5cfa00 00:21:27.965 [2024-05-15 02:22:15.822662] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.965 BaseBdev2 00:21:27.965 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:21:27.965 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:27.965 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:27.965 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:21:27.965 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:27.965 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:27.965 02:22:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:28.224 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.790 [ 00:21:28.790 { 00:21:28.790 "name": "BaseBdev2", 00:21:28.790 "aliases": [ 00:21:28.790 "f24fe1f4-1261-11ef-99fd-bfc7c66e2865" 00:21:28.790 ], 00:21:28.790 "product_name": "Malloc disk", 00:21:28.790 "block_size": 4096, 00:21:28.790 "num_blocks": 8192, 00:21:28.790 "uuid": "f24fe1f4-1261-11ef-99fd-bfc7c66e2865", 00:21:28.790 "assigned_rate_limits": { 00:21:28.790 "rw_ios_per_sec": 0, 00:21:28.790 "rw_mbytes_per_sec": 0, 00:21:28.790 "r_mbytes_per_sec": 0, 
00:21:28.790 "w_mbytes_per_sec": 0 00:21:28.790 }, 00:21:28.790 "claimed": true, 00:21:28.790 "claim_type": "exclusive_write", 00:21:28.790 "zoned": false, 00:21:28.790 "supported_io_types": { 00:21:28.790 "read": true, 00:21:28.790 "write": true, 00:21:28.790 "unmap": true, 00:21:28.790 "write_zeroes": true, 00:21:28.790 "flush": true, 00:21:28.790 "reset": true, 00:21:28.790 "compare": false, 00:21:28.790 "compare_and_write": false, 00:21:28.790 "abort": true, 00:21:28.790 "nvme_admin": false, 00:21:28.790 "nvme_io": false 00:21:28.790 }, 00:21:28.790 "memory_domains": [ 00:21:28.790 { 00:21:28.790 "dma_device_id": "system", 00:21:28.790 "dma_device_type": 1 00:21:28.790 }, 00:21:28.790 { 00:21:28.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.790 "dma_device_type": 2 00:21:28.790 } 00:21:28.790 ], 00:21:28.790 "driver_specific": {} 00:21:28.790 } 00:21:28.790 ] 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.790 "name": "Existed_Raid", 00:21:28.790 "uuid": "f1c726f8-1261-11ef-99fd-bfc7c66e2865", 00:21:28.790 "strip_size_kb": 0, 00:21:28.790 "state": "online", 00:21:28.790 "raid_level": "raid1", 00:21:28.790 "superblock": true, 00:21:28.790 "num_base_bdevs": 2, 00:21:28.790 "num_base_bdevs_discovered": 2, 00:21:28.790 "num_base_bdevs_operational": 2, 00:21:28.790 "base_bdevs_list": [ 00:21:28.790 { 00:21:28.790 "name": "BaseBdev1", 00:21:28.790 "uuid": "f0b51716-1261-11ef-99fd-bfc7c66e2865", 00:21:28.790 "is_configured": true, 00:21:28.790 "data_offset": 256, 00:21:28.790 "data_size": 7936 00:21:28.790 }, 00:21:28.790 { 00:21:28.790 "name": "BaseBdev2", 00:21:28.790 "uuid": 
"f24fe1f4-1261-11ef-99fd-bfc7c66e2865", 00:21:28.790 "is_configured": true, 00:21:28.790 "data_offset": 256, 00:21:28.790 "data_size": 7936 00:21:28.790 } 00:21:28.790 ] 00:21:28.790 }' 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.790 02:22:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # local name 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:29.356 [2024-05-15 02:22:17.282514] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:29.356 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:29.356 "name": "Existed_Raid", 00:21:29.356 "aliases": [ 00:21:29.356 "f1c726f8-1261-11ef-99fd-bfc7c66e2865" 00:21:29.356 ], 00:21:29.356 "product_name": "Raid Volume", 00:21:29.356 "block_size": 4096, 00:21:29.356 "num_blocks": 7936, 00:21:29.356 "uuid": "f1c726f8-1261-11ef-99fd-bfc7c66e2865", 00:21:29.356 "assigned_rate_limits": { 00:21:29.356 "rw_ios_per_sec": 0, 00:21:29.356 "rw_mbytes_per_sec": 0, 00:21:29.356 "r_mbytes_per_sec": 0, 00:21:29.356 "w_mbytes_per_sec": 0 00:21:29.356 }, 00:21:29.356 "claimed": false, 00:21:29.356 "zoned": false, 00:21:29.356 "supported_io_types": { 00:21:29.356 "read": true, 00:21:29.356 "write": true, 00:21:29.356 "unmap": false, 00:21:29.356 "write_zeroes": true, 00:21:29.356 "flush": false, 00:21:29.356 "reset": true, 00:21:29.356 "compare": false, 00:21:29.356 "compare_and_write": false, 00:21:29.356 "abort": false, 00:21:29.356 "nvme_admin": false, 00:21:29.356 "nvme_io": false 00:21:29.356 }, 00:21:29.356 "memory_domains": [ 00:21:29.356 { 00:21:29.356 "dma_device_id": "system", 00:21:29.356 "dma_device_type": 1 00:21:29.356 }, 00:21:29.356 { 00:21:29.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.356 "dma_device_type": 2 00:21:29.356 }, 00:21:29.356 { 00:21:29.356 "dma_device_id": "system", 00:21:29.356 "dma_device_type": 1 00:21:29.356 }, 00:21:29.356 { 00:21:29.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.356 "dma_device_type": 2 00:21:29.356 } 00:21:29.356 ], 00:21:29.356 "driver_specific": { 00:21:29.356 "raid": { 00:21:29.356 "uuid": "f1c726f8-1261-11ef-99fd-bfc7c66e2865", 00:21:29.356 "strip_size_kb": 0, 00:21:29.356 "state": "online", 00:21:29.356 "raid_level": "raid1", 00:21:29.356 "superblock": true, 00:21:29.356 "num_base_bdevs": 2, 00:21:29.356 "num_base_bdevs_discovered": 2, 00:21:29.356 "num_base_bdevs_operational": 2, 00:21:29.356 "base_bdevs_list": [ 00:21:29.356 { 00:21:29.356 "name": "BaseBdev1", 
00:21:29.356 "uuid": "f0b51716-1261-11ef-99fd-bfc7c66e2865", 00:21:29.356 "is_configured": true, 00:21:29.356 "data_offset": 256, 00:21:29.356 "data_size": 7936 00:21:29.356 }, 00:21:29.356 { 00:21:29.356 "name": "BaseBdev2", 00:21:29.356 "uuid": "f24fe1f4-1261-11ef-99fd-bfc7c66e2865", 00:21:29.356 "is_configured": true, 00:21:29.356 "data_offset": 256, 00:21:29.356 "data_size": 7936 00:21:29.356 } 00:21:29.356 ] 00:21:29.356 } 00:21:29.356 } 00:21:29.356 }' 00:21:29.357 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:29.357 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:21:29.357 BaseBdev2' 00:21:29.357 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:29.357 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:29.357 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:29.616 "name": "BaseBdev1", 00:21:29.616 "aliases": [ 00:21:29.616 "f0b51716-1261-11ef-99fd-bfc7c66e2865" 00:21:29.616 ], 00:21:29.616 "product_name": "Malloc disk", 00:21:29.616 "block_size": 4096, 00:21:29.616 "num_blocks": 8192, 00:21:29.616 "uuid": "f0b51716-1261-11ef-99fd-bfc7c66e2865", 00:21:29.616 "assigned_rate_limits": { 00:21:29.616 "rw_ios_per_sec": 0, 00:21:29.616 "rw_mbytes_per_sec": 0, 00:21:29.616 "r_mbytes_per_sec": 0, 00:21:29.616 "w_mbytes_per_sec": 0 00:21:29.616 }, 00:21:29.616 "claimed": true, 00:21:29.616 "claim_type": "exclusive_write", 00:21:29.616 "zoned": false, 00:21:29.616 "supported_io_types": { 00:21:29.616 "read": true, 00:21:29.616 "write": true, 00:21:29.616 "unmap": true, 00:21:29.616 "write_zeroes": true, 00:21:29.616 "flush": true, 00:21:29.616 "reset": true, 00:21:29.616 "compare": false, 00:21:29.616 "compare_and_write": false, 00:21:29.616 "abort": true, 00:21:29.616 "nvme_admin": false, 00:21:29.616 "nvme_io": false 00:21:29.616 }, 00:21:29.616 "memory_domains": [ 00:21:29.616 { 00:21:29.616 "dma_device_id": "system", 00:21:29.616 "dma_device_type": 1 00:21:29.616 }, 00:21:29.616 { 00:21:29.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.616 "dma_device_type": 2 00:21:29.616 } 00:21:29.616 ], 00:21:29.616 "driver_specific": {} 00:21:29.616 }' 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:29.616 02:22:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:29.616 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:30.202 "name": "BaseBdev2", 00:21:30.202 "aliases": [ 00:21:30.202 "f24fe1f4-1261-11ef-99fd-bfc7c66e2865" 00:21:30.202 ], 00:21:30.202 "product_name": "Malloc disk", 00:21:30.202 "block_size": 4096, 00:21:30.202 "num_blocks": 8192, 00:21:30.202 "uuid": "f24fe1f4-1261-11ef-99fd-bfc7c66e2865", 00:21:30.202 "assigned_rate_limits": { 00:21:30.202 "rw_ios_per_sec": 0, 00:21:30.202 "rw_mbytes_per_sec": 0, 00:21:30.202 "r_mbytes_per_sec": 0, 00:21:30.202 "w_mbytes_per_sec": 0 00:21:30.202 }, 00:21:30.202 "claimed": true, 00:21:30.202 "claim_type": "exclusive_write", 00:21:30.202 "zoned": false, 00:21:30.202 "supported_io_types": { 00:21:30.202 "read": true, 00:21:30.202 "write": true, 00:21:30.202 "unmap": true, 00:21:30.202 "write_zeroes": true, 00:21:30.202 "flush": true, 00:21:30.202 "reset": true, 00:21:30.202 "compare": false, 00:21:30.202 "compare_and_write": false, 00:21:30.202 "abort": true, 00:21:30.202 "nvme_admin": false, 00:21:30.202 "nvme_io": false 00:21:30.202 }, 00:21:30.202 "memory_domains": [ 00:21:30.202 { 00:21:30.202 "dma_device_id": "system", 00:21:30.202 "dma_device_type": 1 00:21:30.202 }, 00:21:30.202 { 00:21:30.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.202 "dma_device_type": 2 00:21:30.202 } 00:21:30.202 ], 00:21:30.202 "driver_specific": {} 00:21:30.202 }' 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:21:30.202 02:22:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:30.202 [2024-05-15 02:22:18.186540] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # local expected_state 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:30.202 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:30.493 "name": "Existed_Raid", 00:21:30.493 "uuid": "f1c726f8-1261-11ef-99fd-bfc7c66e2865", 00:21:30.493 "strip_size_kb": 0, 00:21:30.493 "state": "online", 00:21:30.493 "raid_level": "raid1", 00:21:30.493 "superblock": true, 00:21:30.493 "num_base_bdevs": 2, 00:21:30.493 "num_base_bdevs_discovered": 1, 00:21:30.493 "num_base_bdevs_operational": 1, 00:21:30.493 "base_bdevs_list": [ 00:21:30.493 { 00:21:30.493 "name": null, 00:21:30.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.493 "is_configured": false, 00:21:30.493 "data_offset": 256, 00:21:30.493 "data_size": 7936 00:21:30.493 }, 00:21:30.493 { 00:21:30.493 "name": "BaseBdev2", 00:21:30.493 "uuid": "f24fe1f4-1261-11ef-99fd-bfc7c66e2865", 00:21:30.493 "is_configured": true, 00:21:30.493 "data_offset": 256, 00:21:30.493 "data_size": 7936 00:21:30.493 } 00:21:30.493 ] 00:21:30.493 }' 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:30.493 02:22:18 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:21:31.059 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:31.059 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:31.059 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.059 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:31.059 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:31.059 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:31.059 02:22:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:31.318 [2024-05-15 02:22:19.231493] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.318 [2024-05-15 02:22:19.231535] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.318 [2024-05-15 02:22:19.236458] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.318 [2024-05-15 02:22:19.236475] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:31.318 [2024-05-15 02:22:19.236480] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5cfa00 name Existed_Raid, state offline 00:21:31.318 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:31.318 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:31.318 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.318 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@342 -- # killprocess 63617 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 63617 ']' 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 63617 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps -c -o command 63617 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # tail -1 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:31.576 killing process with pid 63617 00:21:31.576 02:22:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63617' 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 63617 00:21:31.576 [2024-05-15 02:22:19.534907] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:31.576 [2024-05-15 02:22:19.534946] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.576 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 63617 00:21:31.835 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@344 -- # return 0 00:21:31.835 00:21:31.835 real 0m9.219s 00:21:31.835 user 0m16.313s 00:21:31.835 sys 0m1.426s 00:21:31.835 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:31.835 02:22:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:31.835 ************************************ 00:21:31.835 END TEST raid_state_function_test_sb_4k 00:21:31.835 ************************************ 00:21:31.835 02:22:19 bdev_raid -- bdev/bdev_raid.sh@833 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:31.835 02:22:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:31.835 02:22:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:31.835 02:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.835 ************************************ 00:21:31.836 START TEST raid_superblock_test_4k 00:21:31.836 ************************************ 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:31.836 02:22:19 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=63891 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 63891 /var/tmp/spdk-raid.sock 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 63891 ']' 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:31.836 02:22:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:31.836 [2024-05-15 02:22:19.736427] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:31.836 [2024-05-15 02:22:19.736599] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:32.402 EAL: TSC is not safe to use in SMP mode 00:21:32.402 EAL: TSC is not invariant 00:21:32.402 [2024-05-15 02:22:20.189412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.402 [2024-05-15 02:22:20.272591] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
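The raid_superblock_test_4k run that begins here drives the bdev_svc app entirely through RPCs on /var/tmp/spdk-raid.sock. Stripped of the xtrace prefixes, the setup performed in the trace below amounts to roughly the following sequence; this is a condensed sketch, with the sizes, UUIDs and paths taken from this particular run:

  # create two malloc bdevs (32 MiB, 4096-byte blocks) and wrap each in a passthru bdev
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # assemble a raid1 volume over the passthru bdevs, with an on-disk superblock (-s)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  # verify the assembled volume via the raid-specific and generic bdev RPCs
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq '.[]'

The later part of the trace then deletes raid_bdev1 and the passthru bdevs, checks that creating a new raid directly on malloc1/malloc2 is rejected with JSON-RPC error -17 ("File exists") because raid superblocks are still present on them, and recreates pt1/pt2 so the volume re-assembles from those superblocks.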
00:21:32.403 [2024-05-15 02:22:20.274767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.403 [2024-05-15 02:22:20.275455] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.403 [2024-05-15 02:22:20.275467] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:21:32.970 malloc1 00:21:32.970 02:22:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:33.228 [2024-05-15 02:22:21.230088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:33.228 [2024-05-15 02:22:21.230158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.228 [2024-05-15 02:22:21.230742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb11780 00:21:33.228 [2024-05-15 02:22:21.230770] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.228 [2024-05-15 02:22:21.231500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.228 [2024-05-15 02:22:21.231542] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:33.228 pt1 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.486 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:21:33.486 malloc2 00:21:33.745 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:33.745 [2024-05-15 02:22:21.730125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:33.745 [2024-05-15 02:22:21.730202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.745 [2024-05-15 02:22:21.730231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb11c80 00:21:33.745 [2024-05-15 02:22:21.730239] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.745 [2024-05-15 02:22:21.730760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.745 [2024-05-15 02:22:21.730782] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:33.745 pt2 00:21:33.745 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.745 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.745 02:22:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:21:34.003 [2024-05-15 02:22:22.016094] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:34.003 [2024-05-15 02:22:22.016516] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:34.003 [2024-05-15 02:22:22.016567] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bb11f00 00:21:34.003 [2024-05-15 02:22:22.016573] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:34.003 [2024-05-15 02:22:22.016621] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bb74e20 00:21:34.003 [2024-05-15 02:22:22.016674] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bb11f00 00:21:34.003 [2024-05-15 02:22:22.016678] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bb11f00 00:21:34.003 [2024-05-15 02:22:22.016698] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.262 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.520 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.520 "name": "raid_bdev1", 00:21:34.520 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:34.520 "strip_size_kb": 0, 00:21:34.520 "state": "online", 00:21:34.520 "raid_level": "raid1", 00:21:34.520 "superblock": true, 00:21:34.520 "num_base_bdevs": 2, 00:21:34.520 "num_base_bdevs_discovered": 2, 00:21:34.520 "num_base_bdevs_operational": 2, 00:21:34.520 "base_bdevs_list": [ 00:21:34.520 { 00:21:34.520 "name": "pt1", 00:21:34.520 "uuid": "cbefa4e1-d780-095b-95a3-75e21e8e0379", 00:21:34.520 "is_configured": true, 00:21:34.520 "data_offset": 256, 00:21:34.520 "data_size": 7936 00:21:34.520 }, 00:21:34.520 { 00:21:34.520 "name": "pt2", 00:21:34.520 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:34.520 "is_configured": true, 00:21:34.521 "data_offset": 256, 00:21:34.521 "data_size": 7936 00:21:34.521 } 00:21:34.521 ] 00:21:34.521 }' 00:21:34.521 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.521 02:22:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:34.779 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:35.038 [2024-05-15 02:22:22.832143] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.038 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:35.038 "name": "raid_bdev1", 00:21:35.038 "aliases": [ 00:21:35.038 "f600f74b-1261-11ef-99fd-bfc7c66e2865" 00:21:35.038 ], 00:21:35.038 "product_name": "Raid Volume", 00:21:35.038 "block_size": 4096, 00:21:35.038 "num_blocks": 7936, 00:21:35.038 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:35.038 "assigned_rate_limits": { 00:21:35.038 "rw_ios_per_sec": 0, 00:21:35.038 "rw_mbytes_per_sec": 0, 00:21:35.038 "r_mbytes_per_sec": 0, 00:21:35.038 "w_mbytes_per_sec": 0 00:21:35.038 }, 00:21:35.038 "claimed": false, 00:21:35.038 "zoned": false, 00:21:35.038 "supported_io_types": { 00:21:35.038 "read": true, 00:21:35.038 "write": true, 00:21:35.038 
"unmap": false, 00:21:35.038 "write_zeroes": true, 00:21:35.038 "flush": false, 00:21:35.038 "reset": true, 00:21:35.038 "compare": false, 00:21:35.038 "compare_and_write": false, 00:21:35.038 "abort": false, 00:21:35.038 "nvme_admin": false, 00:21:35.038 "nvme_io": false 00:21:35.038 }, 00:21:35.038 "memory_domains": [ 00:21:35.038 { 00:21:35.038 "dma_device_id": "system", 00:21:35.038 "dma_device_type": 1 00:21:35.038 }, 00:21:35.038 { 00:21:35.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.038 "dma_device_type": 2 00:21:35.038 }, 00:21:35.038 { 00:21:35.038 "dma_device_id": "system", 00:21:35.038 "dma_device_type": 1 00:21:35.038 }, 00:21:35.038 { 00:21:35.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.038 "dma_device_type": 2 00:21:35.038 } 00:21:35.038 ], 00:21:35.038 "driver_specific": { 00:21:35.038 "raid": { 00:21:35.038 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:35.038 "strip_size_kb": 0, 00:21:35.038 "state": "online", 00:21:35.038 "raid_level": "raid1", 00:21:35.038 "superblock": true, 00:21:35.038 "num_base_bdevs": 2, 00:21:35.038 "num_base_bdevs_discovered": 2, 00:21:35.038 "num_base_bdevs_operational": 2, 00:21:35.038 "base_bdevs_list": [ 00:21:35.038 { 00:21:35.038 "name": "pt1", 00:21:35.038 "uuid": "cbefa4e1-d780-095b-95a3-75e21e8e0379", 00:21:35.038 "is_configured": true, 00:21:35.038 "data_offset": 256, 00:21:35.038 "data_size": 7936 00:21:35.038 }, 00:21:35.038 { 00:21:35.038 "name": "pt2", 00:21:35.038 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:35.038 "is_configured": true, 00:21:35.038 "data_offset": 256, 00:21:35.038 "data_size": 7936 00:21:35.038 } 00:21:35.038 ] 00:21:35.038 } 00:21:35.038 } 00:21:35.038 }' 00:21:35.038 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.038 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:35.038 pt2' 00:21:35.038 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:35.038 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:35.038 02:22:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:35.296 "name": "pt1", 00:21:35.296 "aliases": [ 00:21:35.296 "cbefa4e1-d780-095b-95a3-75e21e8e0379" 00:21:35.296 ], 00:21:35.296 "product_name": "passthru", 00:21:35.296 "block_size": 4096, 00:21:35.296 "num_blocks": 8192, 00:21:35.296 "uuid": "cbefa4e1-d780-095b-95a3-75e21e8e0379", 00:21:35.296 "assigned_rate_limits": { 00:21:35.296 "rw_ios_per_sec": 0, 00:21:35.296 "rw_mbytes_per_sec": 0, 00:21:35.296 "r_mbytes_per_sec": 0, 00:21:35.296 "w_mbytes_per_sec": 0 00:21:35.296 }, 00:21:35.296 "claimed": true, 00:21:35.296 "claim_type": "exclusive_write", 00:21:35.296 "zoned": false, 00:21:35.296 "supported_io_types": { 00:21:35.296 "read": true, 00:21:35.296 "write": true, 00:21:35.296 "unmap": true, 00:21:35.296 "write_zeroes": true, 00:21:35.296 "flush": true, 00:21:35.296 "reset": true, 00:21:35.296 "compare": false, 00:21:35.296 "compare_and_write": false, 00:21:35.296 "abort": true, 00:21:35.296 "nvme_admin": false, 00:21:35.296 "nvme_io": false 00:21:35.296 }, 00:21:35.296 "memory_domains": [ 00:21:35.296 { 00:21:35.296 "dma_device_id": 
"system", 00:21:35.296 "dma_device_type": 1 00:21:35.296 }, 00:21:35.296 { 00:21:35.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.296 "dma_device_type": 2 00:21:35.296 } 00:21:35.296 ], 00:21:35.296 "driver_specific": { 00:21:35.296 "passthru": { 00:21:35.296 "name": "pt1", 00:21:35.296 "base_bdev_name": "malloc1" 00:21:35.296 } 00:21:35.296 } 00:21:35.296 }' 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:35.296 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:35.671 "name": "pt2", 00:21:35.671 "aliases": [ 00:21:35.671 "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c" 00:21:35.671 ], 00:21:35.671 "product_name": "passthru", 00:21:35.671 "block_size": 4096, 00:21:35.671 "num_blocks": 8192, 00:21:35.671 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:35.671 "assigned_rate_limits": { 00:21:35.671 "rw_ios_per_sec": 0, 00:21:35.671 "rw_mbytes_per_sec": 0, 00:21:35.671 "r_mbytes_per_sec": 0, 00:21:35.671 "w_mbytes_per_sec": 0 00:21:35.671 }, 00:21:35.671 "claimed": true, 00:21:35.671 "claim_type": "exclusive_write", 00:21:35.671 "zoned": false, 00:21:35.671 "supported_io_types": { 00:21:35.671 "read": true, 00:21:35.671 "write": true, 00:21:35.671 "unmap": true, 00:21:35.671 "write_zeroes": true, 00:21:35.671 "flush": true, 00:21:35.671 "reset": true, 00:21:35.671 "compare": false, 00:21:35.671 "compare_and_write": false, 00:21:35.671 "abort": true, 00:21:35.671 "nvme_admin": false, 00:21:35.671 "nvme_io": false 00:21:35.671 }, 00:21:35.671 "memory_domains": [ 00:21:35.671 { 00:21:35.671 "dma_device_id": "system", 00:21:35.671 "dma_device_type": 1 00:21:35.671 }, 00:21:35.671 { 00:21:35.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.671 "dma_device_type": 2 00:21:35.671 } 00:21:35.671 ], 00:21:35.671 "driver_specific": { 00:21:35.671 "passthru": { 00:21:35.671 "name": "pt2", 00:21:35.671 "base_bdev_name": "malloc2" 00:21:35.671 } 00:21:35.671 } 00:21:35.671 }' 00:21:35.671 02:22:23 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:35.671 [2024-05-15 02:22:23.636183] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f600f74b-1261-11ef-99fd-bfc7c66e2865 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z f600f74b-1261-11ef-99fd-bfc7c66e2865 ']' 00:21:35.671 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:35.933 [2024-05-15 02:22:23.932183] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.933 [2024-05-15 02:22:23.932205] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.933 [2024-05-15 02:22:23.932227] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.933 [2024-05-15 02:22:23.932239] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.934 [2024-05-15 02:22:23.932243] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb11f00 name raid_bdev1, state offline 00:21:36.191 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.191 02:22:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:36.450 02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:36.450 02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:36.450 02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:36.450 02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:36.708 
02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:36.708 02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:36.966 02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:36.966 02:22:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:37.225 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:37.225 [2024-05-15 02:22:25.236245] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:37.225 [2024-05-15 02:22:25.236699] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:37.225 [2024-05-15 02:22:25.236721] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:37.225 [2024-05-15 02:22:25.236757] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:37.225 [2024-05-15 02:22:25.236766] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.225 [2024-05-15 02:22:25.236771] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb11c80 name raid_bdev1, state configuring 00:21:37.225 request: 00:21:37.225 { 00:21:37.225 "name": 
"raid_bdev1", 00:21:37.225 "raid_level": "raid1", 00:21:37.225 "base_bdevs": [ 00:21:37.225 "malloc1", 00:21:37.225 "malloc2" 00:21:37.225 ], 00:21:37.225 "superblock": false, 00:21:37.225 "method": "bdev_raid_create", 00:21:37.225 "req_id": 1 00:21:37.225 } 00:21:37.225 Got JSON-RPC error response 00:21:37.225 response: 00:21:37.225 { 00:21:37.225 "code": -17, 00:21:37.225 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:37.225 } 00:21:37.483 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:21:37.483 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.483 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.483 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.483 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.483 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:37.740 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:37.740 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:37.740 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.740 [2024-05-15 02:22:25.704277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.741 [2024-05-15 02:22:25.704328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.741 [2024-05-15 02:22:25.704369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb11780 00:21:37.741 [2024-05-15 02:22:25.704376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.741 [2024-05-15 02:22:25.704848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.741 [2024-05-15 02:22:25.704887] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.741 [2024-05-15 02:22:25.704907] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:37.741 [2024-05-15 02:22:25.704916] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:37.741 pt1 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.741 02:22:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.741 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.036 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.036 "name": "raid_bdev1", 00:21:38.036 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:38.036 "strip_size_kb": 0, 00:21:38.036 "state": "configuring", 00:21:38.036 "raid_level": "raid1", 00:21:38.036 "superblock": true, 00:21:38.036 "num_base_bdevs": 2, 00:21:38.036 "num_base_bdevs_discovered": 1, 00:21:38.036 "num_base_bdevs_operational": 2, 00:21:38.036 "base_bdevs_list": [ 00:21:38.036 { 00:21:38.036 "name": "pt1", 00:21:38.036 "uuid": "cbefa4e1-d780-095b-95a3-75e21e8e0379", 00:21:38.036 "is_configured": true, 00:21:38.036 "data_offset": 256, 00:21:38.036 "data_size": 7936 00:21:38.036 }, 00:21:38.036 { 00:21:38.036 "name": null, 00:21:38.036 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:38.036 "is_configured": false, 00:21:38.036 "data_offset": 256, 00:21:38.036 "data_size": 7936 00:21:38.036 } 00:21:38.036 ] 00:21:38.036 }' 00:21:38.036 02:22:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.036 02:22:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.316 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:38.316 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:38.316 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:38.316 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.574 [2024-05-15 02:22:26.512312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.574 [2024-05-15 02:22:26.512366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.574 [2024-05-15 02:22:26.512393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb11f00 00:21:38.574 [2024-05-15 02:22:26.512401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.574 [2024-05-15 02:22:26.512495] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.574 [2024-05-15 02:22:26.512503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.574 [2024-05-15 02:22:26.512521] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:38.574 [2024-05-15 02:22:26.512528] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.574 [2024-05-15 02:22:26.512567] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bb12180 00:21:38.574 [2024-05-15 02:22:26.512570] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:38.574 [2024-05-15 02:22:26.512587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82bb74e20 00:21:38.574 [2024-05-15 02:22:26.512626] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bb12180 00:21:38.574 [2024-05-15 02:22:26.512630] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bb12180 00:21:38.574 [2024-05-15 02:22:26.512647] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.574 pt2 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.574 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.832 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.832 "name": "raid_bdev1", 00:21:38.832 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:38.832 "strip_size_kb": 0, 00:21:38.832 "state": "online", 00:21:38.832 "raid_level": "raid1", 00:21:38.832 "superblock": true, 00:21:38.832 "num_base_bdevs": 2, 00:21:38.832 "num_base_bdevs_discovered": 2, 00:21:38.832 "num_base_bdevs_operational": 2, 00:21:38.832 "base_bdevs_list": [ 00:21:38.832 { 00:21:38.832 "name": "pt1", 00:21:38.832 "uuid": "cbefa4e1-d780-095b-95a3-75e21e8e0379", 00:21:38.832 "is_configured": true, 00:21:38.832 "data_offset": 256, 00:21:38.832 "data_size": 7936 00:21:38.832 }, 00:21:38.832 { 00:21:38.832 "name": "pt2", 00:21:38.832 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:38.832 "is_configured": true, 00:21:38.832 "data_offset": 256, 00:21:38.832 "data_size": 7936 00:21:38.832 } 00:21:38.832 ] 00:21:38.832 }' 00:21:38.832 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.832 02:22:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:39.091 02:22:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:39.350 [2024-05-15 02:22:27.240379] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.350 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:39.350 "name": "raid_bdev1", 00:21:39.350 "aliases": [ 00:21:39.350 "f600f74b-1261-11ef-99fd-bfc7c66e2865" 00:21:39.350 ], 00:21:39.350 "product_name": "Raid Volume", 00:21:39.350 "block_size": 4096, 00:21:39.350 "num_blocks": 7936, 00:21:39.350 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:39.350 "assigned_rate_limits": { 00:21:39.350 "rw_ios_per_sec": 0, 00:21:39.350 "rw_mbytes_per_sec": 0, 00:21:39.350 "r_mbytes_per_sec": 0, 00:21:39.350 "w_mbytes_per_sec": 0 00:21:39.350 }, 00:21:39.350 "claimed": false, 00:21:39.350 "zoned": false, 00:21:39.350 "supported_io_types": { 00:21:39.350 "read": true, 00:21:39.350 "write": true, 00:21:39.350 "unmap": false, 00:21:39.350 "write_zeroes": true, 00:21:39.350 "flush": false, 00:21:39.350 "reset": true, 00:21:39.350 "compare": false, 00:21:39.350 "compare_and_write": false, 00:21:39.350 "abort": false, 00:21:39.350 "nvme_admin": false, 00:21:39.350 "nvme_io": false 00:21:39.350 }, 00:21:39.350 "memory_domains": [ 00:21:39.350 { 00:21:39.350 "dma_device_id": "system", 00:21:39.350 "dma_device_type": 1 00:21:39.350 }, 00:21:39.350 { 00:21:39.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.350 "dma_device_type": 2 00:21:39.350 }, 00:21:39.350 { 00:21:39.350 "dma_device_id": "system", 00:21:39.350 "dma_device_type": 1 00:21:39.350 }, 00:21:39.350 { 00:21:39.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.350 "dma_device_type": 2 00:21:39.350 } 00:21:39.350 ], 00:21:39.350 "driver_specific": { 00:21:39.350 "raid": { 00:21:39.350 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:39.350 "strip_size_kb": 0, 00:21:39.350 "state": "online", 00:21:39.350 "raid_level": "raid1", 00:21:39.350 "superblock": true, 00:21:39.350 "num_base_bdevs": 2, 00:21:39.350 "num_base_bdevs_discovered": 2, 00:21:39.350 "num_base_bdevs_operational": 2, 00:21:39.350 "base_bdevs_list": [ 00:21:39.350 { 00:21:39.350 "name": "pt1", 00:21:39.350 "uuid": "cbefa4e1-d780-095b-95a3-75e21e8e0379", 00:21:39.350 "is_configured": true, 00:21:39.350 "data_offset": 256, 00:21:39.350 "data_size": 7936 00:21:39.350 }, 00:21:39.350 { 00:21:39.350 "name": "pt2", 00:21:39.350 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:39.350 "is_configured": true, 00:21:39.350 "data_offset": 256, 00:21:39.350 "data_size": 7936 00:21:39.350 } 00:21:39.350 ] 00:21:39.350 } 00:21:39.350 } 00:21:39.350 }' 00:21:39.350 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.351 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:39.351 pt2' 00:21:39.351 02:22:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:39.351 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:39.351 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:39.609 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:39.609 "name": "pt1", 00:21:39.609 "aliases": [ 00:21:39.609 "cbefa4e1-d780-095b-95a3-75e21e8e0379" 00:21:39.609 ], 00:21:39.609 "product_name": "passthru", 00:21:39.609 "block_size": 4096, 00:21:39.609 "num_blocks": 8192, 00:21:39.609 "uuid": "cbefa4e1-d780-095b-95a3-75e21e8e0379", 00:21:39.609 "assigned_rate_limits": { 00:21:39.609 "rw_ios_per_sec": 0, 00:21:39.609 "rw_mbytes_per_sec": 0, 00:21:39.609 "r_mbytes_per_sec": 0, 00:21:39.609 "w_mbytes_per_sec": 0 00:21:39.609 }, 00:21:39.609 "claimed": true, 00:21:39.609 "claim_type": "exclusive_write", 00:21:39.609 "zoned": false, 00:21:39.609 "supported_io_types": { 00:21:39.609 "read": true, 00:21:39.609 "write": true, 00:21:39.609 "unmap": true, 00:21:39.609 "write_zeroes": true, 00:21:39.609 "flush": true, 00:21:39.609 "reset": true, 00:21:39.609 "compare": false, 00:21:39.609 "compare_and_write": false, 00:21:39.609 "abort": true, 00:21:39.609 "nvme_admin": false, 00:21:39.609 "nvme_io": false 00:21:39.609 }, 00:21:39.609 "memory_domains": [ 00:21:39.609 { 00:21:39.609 "dma_device_id": "system", 00:21:39.609 "dma_device_type": 1 00:21:39.609 }, 00:21:39.609 { 00:21:39.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.609 "dma_device_type": 2 00:21:39.609 } 00:21:39.610 ], 00:21:39.610 "driver_specific": { 00:21:39.610 "passthru": { 00:21:39.610 "name": "pt1", 00:21:39.610 "base_bdev_name": "malloc1" 00:21:39.610 } 00:21:39.610 } 00:21:39.610 }' 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:39.610 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:39.869 02:22:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:39.869 "name": "pt2", 00:21:39.869 "aliases": [ 00:21:39.869 "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c" 00:21:39.869 ], 00:21:39.869 "product_name": "passthru", 00:21:39.869 "block_size": 4096, 00:21:39.869 "num_blocks": 8192, 00:21:39.869 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:39.869 "assigned_rate_limits": { 00:21:39.869 "rw_ios_per_sec": 0, 00:21:39.869 "rw_mbytes_per_sec": 0, 00:21:39.869 "r_mbytes_per_sec": 0, 00:21:39.869 "w_mbytes_per_sec": 0 00:21:39.869 }, 00:21:39.869 "claimed": true, 00:21:39.869 "claim_type": "exclusive_write", 00:21:39.869 "zoned": false, 00:21:39.869 "supported_io_types": { 00:21:39.869 "read": true, 00:21:39.869 "write": true, 00:21:39.869 "unmap": true, 00:21:39.869 "write_zeroes": true, 00:21:39.869 "flush": true, 00:21:39.869 "reset": true, 00:21:39.869 "compare": false, 00:21:39.869 "compare_and_write": false, 00:21:39.869 "abort": true, 00:21:39.869 "nvme_admin": false, 00:21:39.869 "nvme_io": false 00:21:39.869 }, 00:21:39.869 "memory_domains": [ 00:21:39.869 { 00:21:39.869 "dma_device_id": "system", 00:21:39.869 "dma_device_type": 1 00:21:39.869 }, 00:21:39.869 { 00:21:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.869 "dma_device_type": 2 00:21:39.869 } 00:21:39.869 ], 00:21:39.869 "driver_specific": { 00:21:39.869 "passthru": { 00:21:39.869 "name": "pt2", 00:21:39.869 "base_bdev_name": "malloc2" 00:21:39.869 } 00:21:39.869 } 00:21:39.869 }' 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:39.869 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:40.128 [2024-05-15 02:22:27.948435] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.128 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' f600f74b-1261-11ef-99fd-bfc7c66e2865 '!=' f600f74b-1261-11ef-99fd-bfc7c66e2865 ']' 00:21:40.128 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:40.128 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:40.128 02:22:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:21:40.128 02:22:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:40.386 [2024-05-15 02:22:28.212413] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.386 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.646 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:40.646 "name": "raid_bdev1", 00:21:40.646 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:40.646 "strip_size_kb": 0, 00:21:40.646 "state": "online", 00:21:40.646 "raid_level": "raid1", 00:21:40.646 "superblock": true, 00:21:40.646 "num_base_bdevs": 2, 00:21:40.646 "num_base_bdevs_discovered": 1, 00:21:40.646 "num_base_bdevs_operational": 1, 00:21:40.646 "base_bdevs_list": [ 00:21:40.646 { 00:21:40.646 "name": null, 00:21:40.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.646 "is_configured": false, 00:21:40.646 "data_offset": 256, 00:21:40.646 "data_size": 7936 00:21:40.646 }, 00:21:40.646 { 00:21:40.646 "name": "pt2", 00:21:40.646 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:40.646 "is_configured": true, 00:21:40.646 "data_offset": 256, 00:21:40.646 "data_size": 7936 00:21:40.646 } 00:21:40.646 ] 00:21:40.646 }' 00:21:40.646 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.646 02:22:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.905 02:22:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:41.174 [2024-05-15 02:22:29.112450] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.174 [2024-05-15 02:22:29.112469] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.174 [2024-05-15 02:22:29.112481] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.174 [2024-05-15 
02:22:29.112489] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:41.174 [2024-05-15 02:22:29.112492] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb12180 name raid_bdev1, state offline 00:21:41.174 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.174 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:41.432 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:41.432 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:41.432 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:41.432 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:41.432 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:41.690 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:41.690 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:41.690 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:41.690 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:41.690 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:21:41.690 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:41.948 [2024-05-15 02:22:29.888500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:41.948 [2024-05-15 02:22:29.888546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.948 [2024-05-15 02:22:29.888570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb11f00 00:21:41.948 [2024-05-15 02:22:29.888577] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.948 [2024-05-15 02:22:29.889098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.948 [2024-05-15 02:22:29.889127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:41.948 [2024-05-15 02:22:29.889148] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:41.948 [2024-05-15 02:22:29.889158] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:41.948 [2024-05-15 02:22:29.889176] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bb12180 00:21:41.948 [2024-05-15 02:22:29.889180] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:41.948 [2024-05-15 02:22:29.889198] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bb74e20 00:21:41.948 [2024-05-15 02:22:29.889231] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bb12180 00:21:41.948 [2024-05-15 02:22:29.889236] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bb12180 00:21:41.948 [2024-05-15 
02:22:29.889253] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.948 pt2 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.948 02:22:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.206 02:22:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.206 "name": "raid_bdev1", 00:21:42.206 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:42.206 "strip_size_kb": 0, 00:21:42.206 "state": "online", 00:21:42.206 "raid_level": "raid1", 00:21:42.206 "superblock": true, 00:21:42.206 "num_base_bdevs": 2, 00:21:42.206 "num_base_bdevs_discovered": 1, 00:21:42.206 "num_base_bdevs_operational": 1, 00:21:42.206 "base_bdevs_list": [ 00:21:42.206 { 00:21:42.206 "name": null, 00:21:42.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.206 "is_configured": false, 00:21:42.206 "data_offset": 256, 00:21:42.206 "data_size": 7936 00:21:42.206 }, 00:21:42.206 { 00:21:42.206 "name": "pt2", 00:21:42.206 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:42.206 "is_configured": true, 00:21:42.206 "data_offset": 256, 00:21:42.206 "data_size": 7936 00:21:42.206 } 00:21:42.206 ] 00:21:42.206 }' 00:21:42.206 02:22:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.206 02:22:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.773 02:22:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:42.773 [2024-05-15 02:22:30.712545] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.773 [2024-05-15 02:22:30.712567] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.773 [2024-05-15 02:22:30.712580] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.773 [2024-05-15 02:22:30.712588] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.773 [2024-05-15 02:22:30.712591] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb12180 name 
raid_bdev1, state offline 00:21:42.773 02:22:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.773 02:22:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:43.031 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:43.031 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:43.031 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:43.031 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.290 [2024-05-15 02:22:31.244605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.290 [2024-05-15 02:22:31.244657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.290 [2024-05-15 02:22:31.244684] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb11c80 00:21:43.290 [2024-05-15 02:22:31.244691] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.290 [2024-05-15 02:22:31.245199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.290 [2024-05-15 02:22:31.245218] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.290 [2024-05-15 02:22:31.245239] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:43.290 [2024-05-15 02:22:31.245252] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.290 [2024-05-15 02:22:31.245276] bdev_raid.c:3489:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:43.290 [2024-05-15 02:22:31.245279] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.290 [2024-05-15 02:22:31.245284] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb11780 name raid_bdev1, state configuring 00:21:43.290 [2024-05-15 02:22:31.245290] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.290 [2024-05-15 02:22:31.245303] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bb11780 00:21:43.290 [2024-05-15 02:22:31.245306] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:43.290 [2024-05-15 02:22:31.245330] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bb74e20 00:21:43.290 [2024-05-15 02:22:31.245381] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bb11780 00:21:43.290 [2024-05-15 02:22:31.245384] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bb11780 00:21:43.290 [2024-05-15 02:22:31.245401] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.290 pt1 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:43.290 02:22:31 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.290 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.549 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.549 "name": "raid_bdev1", 00:21:43.549 "uuid": "f600f74b-1261-11ef-99fd-bfc7c66e2865", 00:21:43.549 "strip_size_kb": 0, 00:21:43.549 "state": "online", 00:21:43.549 "raid_level": "raid1", 00:21:43.549 "superblock": true, 00:21:43.549 "num_base_bdevs": 2, 00:21:43.549 "num_base_bdevs_discovered": 1, 00:21:43.549 "num_base_bdevs_operational": 1, 00:21:43.549 "base_bdevs_list": [ 00:21:43.549 { 00:21:43.549 "name": null, 00:21:43.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.549 "is_configured": false, 00:21:43.549 "data_offset": 256, 00:21:43.549 "data_size": 7936 00:21:43.549 }, 00:21:43.549 { 00:21:43.549 "name": "pt2", 00:21:43.549 "uuid": "d7b757a8-b52d-d35b-9dee-90c0f0bfc93c", 00:21:43.549 "is_configured": true, 00:21:43.549 "data_offset": 256, 00:21:43.549 "data_size": 7936 00:21:43.549 } 00:21:43.549 ] 00:21:43.549 }' 00:21:43.549 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.549 02:22:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.807 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:43.807 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:44.065 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:44.065 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:44.065 02:22:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:44.322 [2024-05-15 02:22:32.280662] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' f600f74b-1261-11ef-99fd-bfc7c66e2865 '!=' f600f74b-1261-11ef-99fd-bfc7c66e2865 ']' 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 63891 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # 
'[' -z 63891 ']' 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 63891 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps -c -o command 63891 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # tail -1 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:44.322 killing process with pid 63891 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63891' 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 63891 00:21:44.322 [2024-05-15 02:22:32.309229] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.322 [2024-05-15 02:22:32.309247] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.322 [2024-05-15 02:22:32.309269] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.322 [2024-05-15 02:22:32.309273] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb11780 name raid_bdev1, state offline 00:21:44.322 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 63891 00:21:44.322 [2024-05-15 02:22:32.318967] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.580 02:22:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:44.580 00:21:44.580 real 0m12.735s 00:21:44.580 user 0m22.740s 00:21:44.580 sys 0m2.010s 00:21:44.580 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:44.580 02:22:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.580 ************************************ 00:21:44.580 END TEST raid_superblock_test_4k 00:21:44.580 ************************************ 00:21:44.580 02:22:32 bdev_raid -- bdev/bdev_raid.sh@834 -- # '[' '' = true ']' 00:21:44.580 02:22:32 bdev_raid -- bdev/bdev_raid.sh@838 -- # base_malloc_params='-m 32' 00:21:44.580 02:22:32 bdev_raid -- bdev/bdev_raid.sh@839 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:21:44.580 02:22:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:44.580 02:22:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:44.580 02:22:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.580 ************************************ 00:21:44.580 START TEST raid_state_function_test_sb_md_separate 00:21:44.580 ************************************ 00:21:44.580 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:21:44.580 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:21:44.580 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:21:44.580 02:22:32 
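Note on the test that just finished: every verify_raid_bdev_state call in raid_superblock_test_4k (bdev_raid.sh@117-@127) follows the same pattern, list all raid bdevs with bdev_raid_get_bdevs, pick the one under test with jq, and compare state, raid_level and num_base_bdevs_discovered against the expected values. A rough equivalent is sketched below; it condenses the helper into a single jq expression and assumes the socket and field names visible in the dumps above, so treat it as illustrative rather than the actual helper.

  #!/usr/bin/env bash
  # Sketch of the verify_raid_bdev_state pattern used throughout this log (not the real helper).
  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  verify_raid_state() {
      # name, expected state, expected raid level, expected discovered base bdev count
      local name=$1 state=$2 level=$3 discovered=$4
      "$rpc" -s "$sock" bdev_raid_get_bdevs all |
          jq -e --arg n "$name" --arg s "$state" --arg l "$level" --argjson d "$discovered" \
             '.[] | select(.name == $n)
                  | (.state == $s) and (.raid_level == $l) and (.num_base_bdevs_discovered == $d)' \
          > /dev/null
  }

  # e.g. after pt1 was deleted, raid_bdev1 should stay online with one discovered base bdev:
  verify_raid_state raid_bdev1 online raid1 1 || echo "raid_bdev1 not in the expected state"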
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:21:44.580 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:21:44.580 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # raid_pid=64278 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 64278' 00:21:44.581 Process raid pid: 64278 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@247 -- # waitforlisten 64278 /var/tmp/spdk-raid.sock 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 64278 ']' 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.581 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock... 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.581 02:22:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.581 [2024-05-15 02:22:32.519018] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:44.581 [2024-05-15 02:22:32.519182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:45.147 EAL: TSC is not safe to use in SMP mode 00:21:45.147 EAL: TSC is not invariant 00:21:45.147 [2024-05-15 02:22:32.973826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.147 [2024-05-15 02:22:33.074136] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:45.147 [2024-05-15 02:22:33.076316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.147 [2024-05-15 02:22:33.077059] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.147 [2024-05-15 02:22:33.077071] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:45.712 [2024-05-15 02:22:33.667854] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.712 [2024-05-15 02:22:33.667928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.712 [2024-05-15 02:22:33.667933] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.712 [2024-05-15 02:22:33.667941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.712 02:22:33 
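Note on the create-before-members step: at bdev_raid.sh@251/@257 the array is registered before either malloc base bdev exists, so the "Currently unable to find bdev" notices are expected and Existed_Raid sits in the configuring state until its members are created and claimed. The sketch below condenses that sequence (the captured run deletes and re-creates the array between steps, which is omitted here); it assumes a running bdev_svc target on the same socket and reuses only the RPCs shown in this log.

  #!/usr/bin/env bash
  # Condensed sketch: build the md-separate raid1 with the RPCs shown in this log.
  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Register the array first; both members are still missing, so it parks in "configuring".
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # Create the members: 32 MiB malloc bdevs, 4 KiB blocks, 32 bytes of separate metadata (-m 32).
  "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
  "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
  "$rpc" -s "$sock" bdev_wait_for_examine

  # Once both members are claimed the array configures itself and reports "online".
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'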
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.712 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.969 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.969 "name": "Existed_Raid", 00:21:45.969 "uuid": "fcf2e249-1261-11ef-99fd-bfc7c66e2865", 00:21:45.969 "strip_size_kb": 0, 00:21:45.969 "state": "configuring", 00:21:45.969 "raid_level": "raid1", 00:21:45.969 "superblock": true, 00:21:45.969 "num_base_bdevs": 2, 00:21:45.969 "num_base_bdevs_discovered": 0, 00:21:45.969 "num_base_bdevs_operational": 2, 00:21:45.969 "base_bdevs_list": [ 00:21:45.969 { 00:21:45.969 "name": "BaseBdev1", 00:21:45.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.969 "is_configured": false, 00:21:45.969 "data_offset": 0, 00:21:45.969 "data_size": 0 00:21:45.969 }, 00:21:45.969 { 00:21:45.969 "name": "BaseBdev2", 00:21:45.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.969 "is_configured": false, 00:21:45.969 "data_offset": 0, 00:21:45.969 "data_size": 0 00:21:45.969 } 00:21:45.969 ] 00:21:45.969 }' 00:21:45.969 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.969 02:22:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.227 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:46.485 [2024-05-15 02:22:34.423853] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.485 [2024-05-15 02:22:34.423877] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c29c500 name Existed_Raid, state configuring 00:21:46.485 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:46.795 [2024-05-15 02:22:34.611886] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:46.795 [2024-05-15 02:22:34.611937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:46.795 [2024-05-15 02:22:34.611942] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:46.795 [2024-05-15 02:22:34.611949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:46.795 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:21:46.795 [2024-05-15 02:22:34.796663] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.795 BaseBdev1 00:21:47.069 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:21:47.069 
02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:47.069 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:47.069 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:21:47.069 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:47.069 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:47.069 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.069 02:22:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:47.328 [ 00:21:47.328 { 00:21:47.328 "name": "BaseBdev1", 00:21:47.328 "aliases": [ 00:21:47.328 "fd9f020f-1261-11ef-99fd-bfc7c66e2865" 00:21:47.328 ], 00:21:47.328 "product_name": "Malloc disk", 00:21:47.328 "block_size": 4096, 00:21:47.328 "num_blocks": 8192, 00:21:47.328 "uuid": "fd9f020f-1261-11ef-99fd-bfc7c66e2865", 00:21:47.328 "md_size": 32, 00:21:47.328 "md_interleave": false, 00:21:47.328 "dif_type": 0, 00:21:47.328 "assigned_rate_limits": { 00:21:47.328 "rw_ios_per_sec": 0, 00:21:47.328 "rw_mbytes_per_sec": 0, 00:21:47.328 "r_mbytes_per_sec": 0, 00:21:47.328 "w_mbytes_per_sec": 0 00:21:47.328 }, 00:21:47.328 "claimed": true, 00:21:47.328 "claim_type": "exclusive_write", 00:21:47.328 "zoned": false, 00:21:47.328 "supported_io_types": { 00:21:47.328 "read": true, 00:21:47.328 "write": true, 00:21:47.328 "unmap": true, 00:21:47.328 "write_zeroes": true, 00:21:47.328 "flush": true, 00:21:47.328 "reset": true, 00:21:47.328 "compare": false, 00:21:47.328 "compare_and_write": false, 00:21:47.328 "abort": true, 00:21:47.328 "nvme_admin": false, 00:21:47.328 "nvme_io": false 00:21:47.328 }, 00:21:47.328 "memory_domains": [ 00:21:47.328 { 00:21:47.328 "dma_device_id": "system", 00:21:47.328 "dma_device_type": 1 00:21:47.328 }, 00:21:47.328 { 00:21:47.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.328 "dma_device_type": 2 00:21:47.328 } 00:21:47.328 ], 00:21:47.328 "driver_specific": {} 00:21:47.328 } 00:21:47.328 ] 00:21:47.328 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:21:47.328 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:47.328 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:47.328 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:47.328 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:47.328 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:47.328 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:47.329 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:21:47.329 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.329 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.329 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.329 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.329 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.588 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.588 "name": "Existed_Raid", 00:21:47.588 "uuid": "fd82ee90-1261-11ef-99fd-bfc7c66e2865", 00:21:47.588 "strip_size_kb": 0, 00:21:47.588 "state": "configuring", 00:21:47.588 "raid_level": "raid1", 00:21:47.588 "superblock": true, 00:21:47.588 "num_base_bdevs": 2, 00:21:47.588 "num_base_bdevs_discovered": 1, 00:21:47.588 "num_base_bdevs_operational": 2, 00:21:47.588 "base_bdevs_list": [ 00:21:47.588 { 00:21:47.588 "name": "BaseBdev1", 00:21:47.588 "uuid": "fd9f020f-1261-11ef-99fd-bfc7c66e2865", 00:21:47.588 "is_configured": true, 00:21:47.588 "data_offset": 256, 00:21:47.588 "data_size": 7936 00:21:47.588 }, 00:21:47.588 { 00:21:47.588 "name": "BaseBdev2", 00:21:47.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.588 "is_configured": false, 00:21:47.588 "data_offset": 0, 00:21:47.588 "data_size": 0 00:21:47.588 } 00:21:47.588 ] 00:21:47.588 }' 00:21:47.588 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.588 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:47.847 02:22:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:48.106 [2024-05-15 02:22:36.043945] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:48.106 [2024-05-15 02:22:36.043976] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c29c500 name Existed_Raid, state configuring 00:21:48.106 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:48.364 [2024-05-15 02:22:36.235954] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.364 [2024-05-15 02:22:36.236604] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:48.364 [2024-05-15 02:22:36.236641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.364 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.624 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.624 "name": "Existed_Raid", 00:21:48.624 "uuid": "fe7abeab-1261-11ef-99fd-bfc7c66e2865", 00:21:48.624 "strip_size_kb": 0, 00:21:48.624 "state": "configuring", 00:21:48.624 "raid_level": "raid1", 00:21:48.624 "superblock": true, 00:21:48.624 "num_base_bdevs": 2, 00:21:48.624 "num_base_bdevs_discovered": 1, 00:21:48.624 "num_base_bdevs_operational": 2, 00:21:48.624 "base_bdevs_list": [ 00:21:48.624 { 00:21:48.624 "name": "BaseBdev1", 00:21:48.624 "uuid": "fd9f020f-1261-11ef-99fd-bfc7c66e2865", 00:21:48.624 "is_configured": true, 00:21:48.624 "data_offset": 256, 00:21:48.624 "data_size": 7936 00:21:48.624 }, 00:21:48.624 { 00:21:48.624 "name": "BaseBdev2", 00:21:48.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.624 "is_configured": false, 00:21:48.624 "data_offset": 0, 00:21:48.624 "data_size": 0 00:21:48.624 } 00:21:48.624 ] 00:21:48.624 }' 00:21:48.624 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.624 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.884 02:22:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:21:49.144 [2024-05-15 02:22:36.988076] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.144 [2024-05-15 02:22:36.988127] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c29ca00 00:21:49.144 [2024-05-15 02:22:36.988139] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:49.144 [2024-05-15 02:22:36.988156] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c2ffe20 00:21:49.144 [2024-05-15 02:22:36.988180] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c29ca00 00:21:49.144 [2024-05-15 02:22:36.988183] 
bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c29ca00 00:21:49.144 [2024-05-15 02:22:36.988194] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.144 BaseBdev2 00:21:49.144 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:21:49.144 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:49.144 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:49.144 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:21:49.144 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:49.144 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:49.144 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.402 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:49.714 [ 00:21:49.714 { 00:21:49.714 "name": "BaseBdev2", 00:21:49.714 "aliases": [ 00:21:49.714 "feed803b-1261-11ef-99fd-bfc7c66e2865" 00:21:49.714 ], 00:21:49.714 "product_name": "Malloc disk", 00:21:49.714 "block_size": 4096, 00:21:49.714 "num_blocks": 8192, 00:21:49.714 "uuid": "feed803b-1261-11ef-99fd-bfc7c66e2865", 00:21:49.714 "md_size": 32, 00:21:49.714 "md_interleave": false, 00:21:49.714 "dif_type": 0, 00:21:49.714 "assigned_rate_limits": { 00:21:49.714 "rw_ios_per_sec": 0, 00:21:49.714 "rw_mbytes_per_sec": 0, 00:21:49.714 "r_mbytes_per_sec": 0, 00:21:49.714 "w_mbytes_per_sec": 0 00:21:49.714 }, 00:21:49.714 "claimed": true, 00:21:49.714 "claim_type": "exclusive_write", 00:21:49.714 "zoned": false, 00:21:49.714 "supported_io_types": { 00:21:49.714 "read": true, 00:21:49.714 "write": true, 00:21:49.714 "unmap": true, 00:21:49.714 "write_zeroes": true, 00:21:49.714 "flush": true, 00:21:49.714 "reset": true, 00:21:49.714 "compare": false, 00:21:49.714 "compare_and_write": false, 00:21:49.714 "abort": true, 00:21:49.714 "nvme_admin": false, 00:21:49.714 "nvme_io": false 00:21:49.714 }, 00:21:49.714 "memory_domains": [ 00:21:49.714 { 00:21:49.714 "dma_device_id": "system", 00:21:49.714 "dma_device_type": 1 00:21:49.714 }, 00:21:49.714 { 00:21:49.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.714 "dma_device_type": 2 00:21:49.714 } 00:21:49.714 ], 00:21:49.714 "driver_specific": {} 00:21:49.714 } 00:21:49.714 ] 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:49.714 
02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.714 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.715 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.715 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.715 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.715 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.715 "name": "Existed_Raid", 00:21:49.715 "uuid": "fe7abeab-1261-11ef-99fd-bfc7c66e2865", 00:21:49.715 "strip_size_kb": 0, 00:21:49.715 "state": "online", 00:21:49.715 "raid_level": "raid1", 00:21:49.715 "superblock": true, 00:21:49.715 "num_base_bdevs": 2, 00:21:49.715 "num_base_bdevs_discovered": 2, 00:21:49.715 "num_base_bdevs_operational": 2, 00:21:49.715 "base_bdevs_list": [ 00:21:49.715 { 00:21:49.715 "name": "BaseBdev1", 00:21:49.715 "uuid": "fd9f020f-1261-11ef-99fd-bfc7c66e2865", 00:21:49.715 "is_configured": true, 00:21:49.715 "data_offset": 256, 00:21:49.715 "data_size": 7936 00:21:49.715 }, 00:21:49.715 { 00:21:49.715 "name": "BaseBdev2", 00:21:49.715 "uuid": "feed803b-1261-11ef-99fd-bfc7c66e2865", 00:21:49.715 "is_configured": true, 00:21:49.715 "data_offset": 256, 00:21:49.715 "data_size": 7936 00:21:49.715 } 00:21:49.715 ] 00:21:49.715 }' 00:21:49.715 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.715 02:22:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:50.300 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:21:50.300 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:50.300 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:50.300 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:50.300 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:50.300 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:21:50.300 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:50.300 
02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:50.558 [2024-05-15 02:22:38.332097] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.558 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:50.558 "name": "Existed_Raid", 00:21:50.558 "aliases": [ 00:21:50.558 "fe7abeab-1261-11ef-99fd-bfc7c66e2865" 00:21:50.558 ], 00:21:50.558 "product_name": "Raid Volume", 00:21:50.558 "block_size": 4096, 00:21:50.558 "num_blocks": 7936, 00:21:50.558 "uuid": "fe7abeab-1261-11ef-99fd-bfc7c66e2865", 00:21:50.558 "md_size": 32, 00:21:50.558 "md_interleave": false, 00:21:50.558 "dif_type": 0, 00:21:50.558 "assigned_rate_limits": { 00:21:50.558 "rw_ios_per_sec": 0, 00:21:50.558 "rw_mbytes_per_sec": 0, 00:21:50.558 "r_mbytes_per_sec": 0, 00:21:50.558 "w_mbytes_per_sec": 0 00:21:50.558 }, 00:21:50.558 "claimed": false, 00:21:50.558 "zoned": false, 00:21:50.558 "supported_io_types": { 00:21:50.558 "read": true, 00:21:50.558 "write": true, 00:21:50.558 "unmap": false, 00:21:50.558 "write_zeroes": true, 00:21:50.558 "flush": false, 00:21:50.558 "reset": true, 00:21:50.558 "compare": false, 00:21:50.558 "compare_and_write": false, 00:21:50.558 "abort": false, 00:21:50.558 "nvme_admin": false, 00:21:50.558 "nvme_io": false 00:21:50.558 }, 00:21:50.558 "memory_domains": [ 00:21:50.558 { 00:21:50.558 "dma_device_id": "system", 00:21:50.558 "dma_device_type": 1 00:21:50.558 }, 00:21:50.558 { 00:21:50.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.558 "dma_device_type": 2 00:21:50.558 }, 00:21:50.558 { 00:21:50.558 "dma_device_id": "system", 00:21:50.558 "dma_device_type": 1 00:21:50.558 }, 00:21:50.558 { 00:21:50.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.558 "dma_device_type": 2 00:21:50.558 } 00:21:50.558 ], 00:21:50.558 "driver_specific": { 00:21:50.558 "raid": { 00:21:50.558 "uuid": "fe7abeab-1261-11ef-99fd-bfc7c66e2865", 00:21:50.558 "strip_size_kb": 0, 00:21:50.558 "state": "online", 00:21:50.558 "raid_level": "raid1", 00:21:50.558 "superblock": true, 00:21:50.558 "num_base_bdevs": 2, 00:21:50.558 "num_base_bdevs_discovered": 2, 00:21:50.558 "num_base_bdevs_operational": 2, 00:21:50.558 "base_bdevs_list": [ 00:21:50.558 { 00:21:50.558 "name": "BaseBdev1", 00:21:50.558 "uuid": "fd9f020f-1261-11ef-99fd-bfc7c66e2865", 00:21:50.558 "is_configured": true, 00:21:50.558 "data_offset": 256, 00:21:50.558 "data_size": 7936 00:21:50.558 }, 00:21:50.558 { 00:21:50.558 "name": "BaseBdev2", 00:21:50.558 "uuid": "feed803b-1261-11ef-99fd-bfc7c66e2865", 00:21:50.558 "is_configured": true, 00:21:50.558 "data_offset": 256, 00:21:50.558 "data_size": 7936 00:21:50.558 } 00:21:50.558 ] 00:21:50.558 } 00:21:50.558 } 00:21:50.558 }' 00:21:50.558 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.558 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:21:50.558 BaseBdev2' 00:21:50.558 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:50.558 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:50.558 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:50.817 "name": "BaseBdev1", 00:21:50.817 "aliases": [ 00:21:50.817 "fd9f020f-1261-11ef-99fd-bfc7c66e2865" 00:21:50.817 ], 00:21:50.817 "product_name": "Malloc disk", 00:21:50.817 "block_size": 4096, 00:21:50.817 "num_blocks": 8192, 00:21:50.817 "uuid": "fd9f020f-1261-11ef-99fd-bfc7c66e2865", 00:21:50.817 "md_size": 32, 00:21:50.817 "md_interleave": false, 00:21:50.817 "dif_type": 0, 00:21:50.817 "assigned_rate_limits": { 00:21:50.817 "rw_ios_per_sec": 0, 00:21:50.817 "rw_mbytes_per_sec": 0, 00:21:50.817 "r_mbytes_per_sec": 0, 00:21:50.817 "w_mbytes_per_sec": 0 00:21:50.817 }, 00:21:50.817 "claimed": true, 00:21:50.817 "claim_type": "exclusive_write", 00:21:50.817 "zoned": false, 00:21:50.817 "supported_io_types": { 00:21:50.817 "read": true, 00:21:50.817 "write": true, 00:21:50.817 "unmap": true, 00:21:50.817 "write_zeroes": true, 00:21:50.817 "flush": true, 00:21:50.817 "reset": true, 00:21:50.817 "compare": false, 00:21:50.817 "compare_and_write": false, 00:21:50.817 "abort": true, 00:21:50.817 "nvme_admin": false, 00:21:50.817 "nvme_io": false 00:21:50.817 }, 00:21:50.817 "memory_domains": [ 00:21:50.817 { 00:21:50.817 "dma_device_id": "system", 00:21:50.817 "dma_device_type": 1 00:21:50.817 }, 00:21:50.817 { 00:21:50.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.817 "dma_device_type": 2 00:21:50.817 } 00:21:50.817 ], 00:21:50.817 "driver_specific": {} 00:21:50.817 }' 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:50.817 02:22:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 
00:21:51.077 "name": "BaseBdev2", 00:21:51.077 "aliases": [ 00:21:51.077 "feed803b-1261-11ef-99fd-bfc7c66e2865" 00:21:51.077 ], 00:21:51.077 "product_name": "Malloc disk", 00:21:51.077 "block_size": 4096, 00:21:51.077 "num_blocks": 8192, 00:21:51.077 "uuid": "feed803b-1261-11ef-99fd-bfc7c66e2865", 00:21:51.077 "md_size": 32, 00:21:51.077 "md_interleave": false, 00:21:51.077 "dif_type": 0, 00:21:51.077 "assigned_rate_limits": { 00:21:51.077 "rw_ios_per_sec": 0, 00:21:51.077 "rw_mbytes_per_sec": 0, 00:21:51.077 "r_mbytes_per_sec": 0, 00:21:51.077 "w_mbytes_per_sec": 0 00:21:51.077 }, 00:21:51.077 "claimed": true, 00:21:51.077 "claim_type": "exclusive_write", 00:21:51.077 "zoned": false, 00:21:51.077 "supported_io_types": { 00:21:51.077 "read": true, 00:21:51.077 "write": true, 00:21:51.077 "unmap": true, 00:21:51.077 "write_zeroes": true, 00:21:51.077 "flush": true, 00:21:51.077 "reset": true, 00:21:51.077 "compare": false, 00:21:51.077 "compare_and_write": false, 00:21:51.077 "abort": true, 00:21:51.077 "nvme_admin": false, 00:21:51.077 "nvme_io": false 00:21:51.077 }, 00:21:51.077 "memory_domains": [ 00:21:51.077 { 00:21:51.077 "dma_device_id": "system", 00:21:51.077 "dma_device_type": 1 00:21:51.077 }, 00:21:51.077 { 00:21:51.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.077 "dma_device_type": 2 00:21:51.077 } 00:21:51.077 ], 00:21:51.077 "driver_specific": {} 00:21:51.077 }' 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:21:51.077 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:51.335 [2024-05-15 02:22:39.352142] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # local expected_state 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:51.595 02:22:39 
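Note on the hot-removal step that follows: bdev_raid.sh@275 deletes BaseBdev1 out from under the running array, and because raid1 has redundancy (the has_redundancy case statement below), the expected state stays online with a single operational member whose emptied slot reports is_configured false. A short sketch of that assertion, assuming the array, names and socket from the steps above:

  #!/usr/bin/env bash
  # Sketch: hot-remove one raid1 member and confirm the array stays online but degraded.
  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1

  "$rpc" -s "$sock" bdev_raid_get_bdevs all |
      jq -e '.[] | select(.name == "Existed_Raid")
                 | (.state == "online") and (.num_base_bdevs_discovered == 1)
                   and (.base_bdevs_list[0].is_configured == false)' > /dev/null &&
      echo "Existed_Raid is degraded but still online"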
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.595 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.853 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.853 "name": "Existed_Raid", 00:21:51.853 "uuid": "fe7abeab-1261-11ef-99fd-bfc7c66e2865", 00:21:51.853 "strip_size_kb": 0, 00:21:51.853 "state": "online", 00:21:51.853 "raid_level": "raid1", 00:21:51.853 "superblock": true, 00:21:51.853 "num_base_bdevs": 2, 00:21:51.853 "num_base_bdevs_discovered": 1, 00:21:51.853 "num_base_bdevs_operational": 1, 00:21:51.853 "base_bdevs_list": [ 00:21:51.853 { 00:21:51.853 "name": null, 00:21:51.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.853 "is_configured": false, 00:21:51.853 "data_offset": 256, 00:21:51.853 "data_size": 7936 00:21:51.853 }, 00:21:51.853 { 00:21:51.853 "name": "BaseBdev2", 00:21:51.853 "uuid": "feed803b-1261-11ef-99fd-bfc7c66e2865", 00:21:51.853 "is_configured": true, 00:21:51.853 "data_offset": 256, 00:21:51.853 "data_size": 7936 00:21:51.853 } 00:21:51.853 ] 00:21:51.853 }' 00:21:51.853 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.853 02:22:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.111 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:52.111 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:52.111 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.111 02:22:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:52.409 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:52.409 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:52.409 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:52.679 [2024-05-15 02:22:40.600945] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:52.679 [2024-05-15 02:22:40.600977] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.679 [2024-05-15 02:22:40.605827] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.679 [2024-05-15 02:22:40.605842] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.679 [2024-05-15 02:22:40.605846] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c29ca00 name Existed_Raid, state offline 00:21:52.679 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:52.679 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:52.679 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.679 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@342 -- # killprocess 64278 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 64278 ']' 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 64278 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps -c -o command 64278 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # tail -1 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:52.937 killing process with pid 64278 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64278' 00:21:52.937 02:22:40 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 64278 00:21:52.937 [2024-05-15 02:22:40.830739] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.937 [2024-05-15 02:22:40.830774] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.937 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 64278 00:21:53.196 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@344 -- # return 0 00:21:53.196 00:21:53.196 real 0m8.461s 00:21:53.196 user 0m14.802s 00:21:53.196 sys 0m1.422s 00:21:53.196 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:53.196 ************************************ 00:21:53.196 END TEST raid_state_function_test_sb_md_separate 00:21:53.196 ************************************ 00:21:53.196 02:22:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.196 02:22:41 bdev_raid -- bdev/bdev_raid.sh@840 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:53.196 02:22:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:53.196 02:22:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:53.196 02:22:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:53.197 ************************************ 00:21:53.197 START TEST raid_superblock_test_md_separate 00:21:53.197 ************************************ 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 
-- # raid_pid=64548 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 64548 /var/tmp/spdk-raid.sock 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 64548 ']' 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:53.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:53.197 02:22:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.197 [2024-05-15 02:22:41.025077] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:53.197 [2024-05-15 02:22:41.025277] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:53.763 EAL: TSC is not safe to use in SMP mode 00:21:53.763 EAL: TSC is not invariant 00:21:53.763 [2024-05-15 02:22:41.509061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.763 [2024-05-15 02:22:41.589719] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:21:53.763 [2024-05-15 02:22:41.591681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.763 [2024-05-15 02:22:41.592354] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.763 [2024-05-15 02:22:41.592366] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:54.331 malloc1 00:21:54.331 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:54.589 [2024-05-15 02:22:42.534388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:54.589 [2024-05-15 02:22:42.534440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.589 [2024-05-15 02:22:42.534995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bd86780 00:21:54.589 [2024-05-15 02:22:42.535021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.589 [2024-05-15 02:22:42.535683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.589 [2024-05-15 02:22:42.535710] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:54.589 pt1 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.589 02:22:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.589 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:54.854 malloc2 00:21:54.854 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:55.136 [2024-05-15 02:22:42.946413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:55.136 [2024-05-15 02:22:42.946466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.136 [2024-05-15 02:22:42.946494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bd86c80 00:21:55.136 [2024-05-15 02:22:42.946502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.136 [2024-05-15 02:22:42.946986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.136 [2024-05-15 02:22:42.947012] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:55.136 pt2 00:21:55.136 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:55.136 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:55.136 02:22:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:21:55.394 [2024-05-15 02:22:43.214427] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:55.394 [2024-05-15 02:22:43.214827] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.395 [2024-05-15 02:22:43.214879] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bd86f00 00:21:55.395 [2024-05-15 02:22:43.214884] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:55.395 [2024-05-15 02:22:43.214914] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bde9e20 00:21:55.395 [2024-05-15 02:22:43.214938] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bd86f00 00:21:55.395 [2024-05-15 02:22:43.214941] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bd86f00 00:21:55.395 [2024-05-15 02:22:43.214953] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:55.395 
02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.395 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.651 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.651 "name": "raid_bdev1", 00:21:55.651 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:21:55.651 "strip_size_kb": 0, 00:21:55.651 "state": "online", 00:21:55.651 "raid_level": "raid1", 00:21:55.651 "superblock": true, 00:21:55.651 "num_base_bdevs": 2, 00:21:55.651 "num_base_bdevs_discovered": 2, 00:21:55.651 "num_base_bdevs_operational": 2, 00:21:55.651 "base_bdevs_list": [ 00:21:55.651 { 00:21:55.651 "name": "pt1", 00:21:55.651 "uuid": "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8", 00:21:55.651 "is_configured": true, 00:21:55.651 "data_offset": 256, 00:21:55.651 "data_size": 7936 00:21:55.651 }, 00:21:55.651 { 00:21:55.651 "name": "pt2", 00:21:55.651 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:21:55.651 "is_configured": true, 00:21:55.651 "data_offset": 256, 00:21:55.651 "data_size": 7936 00:21:55.651 } 00:21:55.651 ] 00:21:55.651 }' 00:21:55.651 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.651 02:22:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:55.910 02:22:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:56.169 [2024-05-15 02:22:44.006501] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.169 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:56.169 "name": "raid_bdev1", 00:21:56.169 "aliases": [ 00:21:56.169 "02a39344-1262-11ef-99fd-bfc7c66e2865" 00:21:56.169 ], 00:21:56.169 "product_name": "Raid Volume", 00:21:56.169 
"block_size": 4096, 00:21:56.169 "num_blocks": 7936, 00:21:56.169 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:21:56.169 "md_size": 32, 00:21:56.169 "md_interleave": false, 00:21:56.169 "dif_type": 0, 00:21:56.169 "assigned_rate_limits": { 00:21:56.169 "rw_ios_per_sec": 0, 00:21:56.169 "rw_mbytes_per_sec": 0, 00:21:56.169 "r_mbytes_per_sec": 0, 00:21:56.169 "w_mbytes_per_sec": 0 00:21:56.169 }, 00:21:56.169 "claimed": false, 00:21:56.169 "zoned": false, 00:21:56.169 "supported_io_types": { 00:21:56.169 "read": true, 00:21:56.169 "write": true, 00:21:56.169 "unmap": false, 00:21:56.169 "write_zeroes": true, 00:21:56.169 "flush": false, 00:21:56.169 "reset": true, 00:21:56.169 "compare": false, 00:21:56.169 "compare_and_write": false, 00:21:56.169 "abort": false, 00:21:56.169 "nvme_admin": false, 00:21:56.169 "nvme_io": false 00:21:56.169 }, 00:21:56.169 "memory_domains": [ 00:21:56.169 { 00:21:56.169 "dma_device_id": "system", 00:21:56.169 "dma_device_type": 1 00:21:56.169 }, 00:21:56.169 { 00:21:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.169 "dma_device_type": 2 00:21:56.169 }, 00:21:56.169 { 00:21:56.169 "dma_device_id": "system", 00:21:56.169 "dma_device_type": 1 00:21:56.169 }, 00:21:56.169 { 00:21:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.169 "dma_device_type": 2 00:21:56.169 } 00:21:56.169 ], 00:21:56.169 "driver_specific": { 00:21:56.169 "raid": { 00:21:56.169 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:21:56.169 "strip_size_kb": 0, 00:21:56.169 "state": "online", 00:21:56.169 "raid_level": "raid1", 00:21:56.169 "superblock": true, 00:21:56.169 "num_base_bdevs": 2, 00:21:56.169 "num_base_bdevs_discovered": 2, 00:21:56.169 "num_base_bdevs_operational": 2, 00:21:56.169 "base_bdevs_list": [ 00:21:56.169 { 00:21:56.169 "name": "pt1", 00:21:56.169 "uuid": "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8", 00:21:56.169 "is_configured": true, 00:21:56.169 "data_offset": 256, 00:21:56.169 "data_size": 7936 00:21:56.169 }, 00:21:56.169 { 00:21:56.169 "name": "pt2", 00:21:56.169 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:21:56.169 "is_configured": true, 00:21:56.169 "data_offset": 256, 00:21:56.169 "data_size": 7936 00:21:56.169 } 00:21:56.169 ] 00:21:56.169 } 00:21:56.169 } 00:21:56.169 }' 00:21:56.169 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.169 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:56.169 pt2' 00:21:56.169 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:56.169 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:56.169 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:56.428 "name": "pt1", 00:21:56.428 "aliases": [ 00:21:56.428 "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8" 00:21:56.428 ], 00:21:56.428 "product_name": "passthru", 00:21:56.428 "block_size": 4096, 00:21:56.428 "num_blocks": 8192, 00:21:56.428 "uuid": "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8", 00:21:56.428 "md_size": 32, 00:21:56.428 "md_interleave": false, 00:21:56.428 "dif_type": 0, 00:21:56.428 "assigned_rate_limits": { 00:21:56.428 
"rw_ios_per_sec": 0, 00:21:56.428 "rw_mbytes_per_sec": 0, 00:21:56.428 "r_mbytes_per_sec": 0, 00:21:56.428 "w_mbytes_per_sec": 0 00:21:56.428 }, 00:21:56.428 "claimed": true, 00:21:56.428 "claim_type": "exclusive_write", 00:21:56.428 "zoned": false, 00:21:56.428 "supported_io_types": { 00:21:56.428 "read": true, 00:21:56.428 "write": true, 00:21:56.428 "unmap": true, 00:21:56.428 "write_zeroes": true, 00:21:56.428 "flush": true, 00:21:56.428 "reset": true, 00:21:56.428 "compare": false, 00:21:56.428 "compare_and_write": false, 00:21:56.428 "abort": true, 00:21:56.428 "nvme_admin": false, 00:21:56.428 "nvme_io": false 00:21:56.428 }, 00:21:56.428 "memory_domains": [ 00:21:56.428 { 00:21:56.428 "dma_device_id": "system", 00:21:56.428 "dma_device_type": 1 00:21:56.428 }, 00:21:56.428 { 00:21:56.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.428 "dma_device_type": 2 00:21:56.428 } 00:21:56.428 ], 00:21:56.428 "driver_specific": { 00:21:56.428 "passthru": { 00:21:56.428 "name": "pt1", 00:21:56.428 "base_bdev_name": "malloc1" 00:21:56.428 } 00:21:56.428 } 00:21:56.428 }' 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:56.428 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:56.685 "name": "pt2", 00:21:56.685 "aliases": [ 00:21:56.685 "b8ea8400-dd86-ea5a-82de-5514e9e7221c" 00:21:56.685 ], 00:21:56.685 "product_name": "passthru", 00:21:56.685 "block_size": 4096, 00:21:56.685 "num_blocks": 8192, 00:21:56.685 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:21:56.685 "md_size": 32, 00:21:56.685 "md_interleave": false, 00:21:56.685 "dif_type": 0, 00:21:56.685 "assigned_rate_limits": { 00:21:56.685 "rw_ios_per_sec": 0, 00:21:56.685 "rw_mbytes_per_sec": 0, 00:21:56.685 "r_mbytes_per_sec": 0, 00:21:56.685 "w_mbytes_per_sec": 0 00:21:56.685 }, 00:21:56.685 "claimed": 
true, 00:21:56.685 "claim_type": "exclusive_write", 00:21:56.685 "zoned": false, 00:21:56.685 "supported_io_types": { 00:21:56.685 "read": true, 00:21:56.685 "write": true, 00:21:56.685 "unmap": true, 00:21:56.685 "write_zeroes": true, 00:21:56.685 "flush": true, 00:21:56.685 "reset": true, 00:21:56.685 "compare": false, 00:21:56.685 "compare_and_write": false, 00:21:56.685 "abort": true, 00:21:56.685 "nvme_admin": false, 00:21:56.685 "nvme_io": false 00:21:56.685 }, 00:21:56.685 "memory_domains": [ 00:21:56.685 { 00:21:56.685 "dma_device_id": "system", 00:21:56.685 "dma_device_type": 1 00:21:56.685 }, 00:21:56.685 { 00:21:56.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.685 "dma_device_type": 2 00:21:56.685 } 00:21:56.685 ], 00:21:56.685 "driver_specific": { 00:21:56.685 "passthru": { 00:21:56.685 "name": "pt2", 00:21:56.685 "base_bdev_name": "malloc2" 00:21:56.685 } 00:21:56.685 } 00:21:56.685 }' 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:56.685 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:56.942 [2024-05-15 02:22:44.762513] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.942 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=02a39344-1262-11ef-99fd-bfc7c66e2865 00:21:56.942 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 02a39344-1262-11ef-99fd-bfc7c66e2865 ']' 00:21:56.942 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:56.942 [2024-05-15 02:22:44.950496] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.942 [2024-05-15 02:22:44.950516] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.942 [2024-05-15 02:22:44.950538] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.942 [2024-05-15 
02:22:44.950550] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.942 [2024-05-15 02:22:44.950553] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd86f00 name raid_bdev1, state offline 00:21:57.199 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.200 02:22:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:57.476 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:57.476 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:57.476 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:57.476 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:57.476 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:57.476 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:57.735 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:57.735 02:22:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:57.993 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.252 02:22:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:58.252 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:58.252 [2024-05-15 02:22:46.194585] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:58.252 [2024-05-15 02:22:46.195013] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:58.252 [2024-05-15 02:22:46.195027] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:58.253 [2024-05-15 02:22:46.195058] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:58.253 [2024-05-15 02:22:46.195066] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:58.253 [2024-05-15 02:22:46.195070] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd86c80 name raid_bdev1, state configuring 00:21:58.253 request: 00:21:58.253 { 00:21:58.253 "name": "raid_bdev1", 00:21:58.253 "raid_level": "raid1", 00:21:58.253 "base_bdevs": [ 00:21:58.253 "malloc1", 00:21:58.253 "malloc2" 00:21:58.253 ], 00:21:58.253 "superblock": false, 00:21:58.253 "method": "bdev_raid_create", 00:21:58.253 "req_id": 1 00:21:58.253 } 00:21:58.253 Got JSON-RPC error response 00:21:58.253 response: 00:21:58.253 { 00:21:58.253 "code": -17, 00:21:58.253 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:58.253 } 00:21:58.253 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:21:58.253 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:58.253 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:58.253 02:22:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:58.253 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:58.253 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.512 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:58.512 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:58.512 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:58.770 [2024-05-15 02:22:46.678624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:58.770 [2024-05-15 02:22:46.678677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.771 [2024-05-15 02:22:46.678702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bd86780 00:21:58.771 [2024-05-15 02:22:46.678708] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.771 [2024-05-15 02:22:46.679181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:21:58.771 [2024-05-15 02:22:46.679202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:58.771 [2024-05-15 02:22:46.679224] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:58.771 [2024-05-15 02:22:46.679235] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:58.771 pt1 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.771 02:22:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.030 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.030 "name": "raid_bdev1", 00:21:59.030 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:21:59.030 "strip_size_kb": 0, 00:21:59.030 "state": "configuring", 00:21:59.030 "raid_level": "raid1", 00:21:59.030 "superblock": true, 00:21:59.030 "num_base_bdevs": 2, 00:21:59.030 "num_base_bdevs_discovered": 1, 00:21:59.030 "num_base_bdevs_operational": 2, 00:21:59.030 "base_bdevs_list": [ 00:21:59.030 { 00:21:59.030 "name": "pt1", 00:21:59.030 "uuid": "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8", 00:21:59.030 "is_configured": true, 00:21:59.030 "data_offset": 256, 00:21:59.030 "data_size": 7936 00:21:59.030 }, 00:21:59.030 { 00:21:59.030 "name": null, 00:21:59.030 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:21:59.030 "is_configured": false, 00:21:59.030 "data_offset": 256, 00:21:59.030 "data_size": 7936 00:21:59.030 } 00:21:59.030 ] 00:21:59.030 }' 00:21:59.030 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.030 02:22:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:59.598 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:59.598 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:59.598 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:59.598 02:22:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:59.598 [2024-05-15 02:22:47.614676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:59.598 [2024-05-15 02:22:47.614731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.598 [2024-05-15 02:22:47.614757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bd86f00 00:21:59.598 [2024-05-15 02:22:47.614764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.598 [2024-05-15 02:22:47.614822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.598 [2024-05-15 02:22:47.614829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:59.598 [2024-05-15 02:22:47.614846] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:59.598 [2024-05-15 02:22:47.614852] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:59.598 [2024-05-15 02:22:47.614880] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bd87180 00:21:59.598 [2024-05-15 02:22:47.614888] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:59.598 [2024-05-15 02:22:47.614904] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bde9e20 00:21:59.598 [2024-05-15 02:22:47.614921] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bd87180 00:21:59.598 [2024-05-15 02:22:47.614924] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bd87180 00:21:59.598 [2024-05-15 02:22:47.614934] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.858 pt2 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.858 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.858 "name": "raid_bdev1", 00:21:59.858 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:21:59.858 "strip_size_kb": 0, 00:21:59.858 "state": "online", 00:21:59.858 "raid_level": "raid1", 00:21:59.858 "superblock": true, 00:21:59.858 "num_base_bdevs": 2, 00:21:59.859 "num_base_bdevs_discovered": 2, 00:21:59.859 "num_base_bdevs_operational": 2, 00:21:59.859 "base_bdevs_list": [ 00:21:59.859 { 00:21:59.859 "name": "pt1", 00:21:59.859 "uuid": "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8", 00:21:59.859 "is_configured": true, 00:21:59.859 "data_offset": 256, 00:21:59.859 "data_size": 7936 00:21:59.859 }, 00:21:59.859 { 00:21:59.859 "name": "pt2", 00:21:59.859 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:21:59.859 "is_configured": true, 00:21:59.859 "data_offset": 256, 00:21:59.859 "data_size": 7936 00:21:59.859 } 00:21:59.859 ] 00:21:59.859 }' 00:21:59.859 02:22:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.859 02:22:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:00.147 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:00.406 [2024-05-15 02:22:48.322748] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:00.406 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:00.406 "name": "raid_bdev1", 00:22:00.406 "aliases": [ 00:22:00.406 "02a39344-1262-11ef-99fd-bfc7c66e2865" 00:22:00.406 ], 00:22:00.406 "product_name": "Raid Volume", 00:22:00.406 "block_size": 4096, 00:22:00.406 "num_blocks": 7936, 00:22:00.406 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:22:00.406 "md_size": 32, 00:22:00.406 "md_interleave": false, 00:22:00.406 "dif_type": 0, 00:22:00.406 "assigned_rate_limits": { 00:22:00.406 "rw_ios_per_sec": 0, 00:22:00.406 "rw_mbytes_per_sec": 0, 00:22:00.406 "r_mbytes_per_sec": 0, 00:22:00.406 "w_mbytes_per_sec": 0 00:22:00.406 }, 00:22:00.406 "claimed": false, 00:22:00.406 "zoned": false, 00:22:00.406 "supported_io_types": { 00:22:00.406 "read": true, 00:22:00.406 "write": true, 00:22:00.406 "unmap": false, 00:22:00.406 "write_zeroes": true, 00:22:00.406 "flush": false, 00:22:00.406 "reset": true, 
00:22:00.406 "compare": false, 00:22:00.406 "compare_and_write": false, 00:22:00.406 "abort": false, 00:22:00.406 "nvme_admin": false, 00:22:00.406 "nvme_io": false 00:22:00.406 }, 00:22:00.406 "memory_domains": [ 00:22:00.406 { 00:22:00.406 "dma_device_id": "system", 00:22:00.406 "dma_device_type": 1 00:22:00.406 }, 00:22:00.406 { 00:22:00.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.406 "dma_device_type": 2 00:22:00.406 }, 00:22:00.406 { 00:22:00.406 "dma_device_id": "system", 00:22:00.406 "dma_device_type": 1 00:22:00.406 }, 00:22:00.406 { 00:22:00.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.406 "dma_device_type": 2 00:22:00.406 } 00:22:00.406 ], 00:22:00.406 "driver_specific": { 00:22:00.406 "raid": { 00:22:00.406 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:22:00.406 "strip_size_kb": 0, 00:22:00.406 "state": "online", 00:22:00.406 "raid_level": "raid1", 00:22:00.406 "superblock": true, 00:22:00.406 "num_base_bdevs": 2, 00:22:00.406 "num_base_bdevs_discovered": 2, 00:22:00.406 "num_base_bdevs_operational": 2, 00:22:00.406 "base_bdevs_list": [ 00:22:00.406 { 00:22:00.406 "name": "pt1", 00:22:00.406 "uuid": "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8", 00:22:00.406 "is_configured": true, 00:22:00.406 "data_offset": 256, 00:22:00.406 "data_size": 7936 00:22:00.406 }, 00:22:00.406 { 00:22:00.406 "name": "pt2", 00:22:00.406 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:22:00.406 "is_configured": true, 00:22:00.406 "data_offset": 256, 00:22:00.406 "data_size": 7936 00:22:00.406 } 00:22:00.406 ] 00:22:00.406 } 00:22:00.406 } 00:22:00.406 }' 00:22:00.406 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:00.406 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:00.406 pt2' 00:22:00.406 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:00.406 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:00.406 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:00.664 "name": "pt1", 00:22:00.664 "aliases": [ 00:22:00.664 "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8" 00:22:00.664 ], 00:22:00.664 "product_name": "passthru", 00:22:00.664 "block_size": 4096, 00:22:00.664 "num_blocks": 8192, 00:22:00.664 "uuid": "b1e47981-0bfc-9759-9a1e-61f4e6a21ea8", 00:22:00.664 "md_size": 32, 00:22:00.664 "md_interleave": false, 00:22:00.664 "dif_type": 0, 00:22:00.664 "assigned_rate_limits": { 00:22:00.664 "rw_ios_per_sec": 0, 00:22:00.664 "rw_mbytes_per_sec": 0, 00:22:00.664 "r_mbytes_per_sec": 0, 00:22:00.664 "w_mbytes_per_sec": 0 00:22:00.664 }, 00:22:00.664 "claimed": true, 00:22:00.664 "claim_type": "exclusive_write", 00:22:00.664 "zoned": false, 00:22:00.664 "supported_io_types": { 00:22:00.664 "read": true, 00:22:00.664 "write": true, 00:22:00.664 "unmap": true, 00:22:00.664 "write_zeroes": true, 00:22:00.664 "flush": true, 00:22:00.664 "reset": true, 00:22:00.664 "compare": false, 00:22:00.664 "compare_and_write": false, 00:22:00.664 "abort": true, 00:22:00.664 "nvme_admin": false, 00:22:00.664 "nvme_io": false 00:22:00.664 }, 00:22:00.664 "memory_domains": [ 
00:22:00.664 { 00:22:00.664 "dma_device_id": "system", 00:22:00.664 "dma_device_type": 1 00:22:00.664 }, 00:22:00.664 { 00:22:00.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.664 "dma_device_type": 2 00:22:00.664 } 00:22:00.664 ], 00:22:00.664 "driver_specific": { 00:22:00.664 "passthru": { 00:22:00.664 "name": "pt1", 00:22:00.664 "base_bdev_name": "malloc1" 00:22:00.664 } 00:22:00.664 } 00:22:00.664 }' 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:00.664 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:00.923 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:00.923 "name": "pt2", 00:22:00.923 "aliases": [ 00:22:00.923 "b8ea8400-dd86-ea5a-82de-5514e9e7221c" 00:22:00.923 ], 00:22:00.923 "product_name": "passthru", 00:22:00.923 "block_size": 4096, 00:22:00.923 "num_blocks": 8192, 00:22:00.923 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:22:00.923 "md_size": 32, 00:22:00.923 "md_interleave": false, 00:22:00.923 "dif_type": 0, 00:22:00.923 "assigned_rate_limits": { 00:22:00.923 "rw_ios_per_sec": 0, 00:22:00.923 "rw_mbytes_per_sec": 0, 00:22:00.923 "r_mbytes_per_sec": 0, 00:22:00.923 "w_mbytes_per_sec": 0 00:22:00.923 }, 00:22:00.923 "claimed": true, 00:22:00.923 "claim_type": "exclusive_write", 00:22:00.923 "zoned": false, 00:22:00.923 "supported_io_types": { 00:22:00.923 "read": true, 00:22:00.923 "write": true, 00:22:00.923 "unmap": true, 00:22:00.923 "write_zeroes": true, 00:22:00.923 "flush": true, 00:22:00.923 "reset": true, 00:22:00.923 "compare": false, 00:22:00.923 "compare_and_write": false, 00:22:00.923 "abort": true, 00:22:00.923 "nvme_admin": false, 00:22:00.923 "nvme_io": false 00:22:00.923 }, 00:22:00.923 "memory_domains": [ 00:22:00.923 { 00:22:00.923 "dma_device_id": "system", 00:22:00.923 "dma_device_type": 1 00:22:00.923 }, 00:22:00.923 { 00:22:00.923 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:00.923 "dma_device_type": 2 00:22:00.923 } 00:22:00.923 ], 00:22:00.923 "driver_specific": { 00:22:00.923 "passthru": { 00:22:00.923 "name": "pt2", 00:22:00.923 "base_bdev_name": "malloc2" 00:22:00.923 } 00:22:00.923 } 00:22:00.923 }' 00:22:00.923 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.923 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.923 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:00.923 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.923 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.923 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:01.182 02:22:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:01.440 [2024-05-15 02:22:49.230812] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 02a39344-1262-11ef-99fd-bfc7c66e2865 '!=' 02a39344-1262-11ef-99fd-bfc7c66e2865 ']' 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:01.440 [2024-05-15 02:22:49.426813] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=1 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.440 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.441 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.441 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.441 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.441 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.699 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.699 "name": "raid_bdev1", 00:22:01.699 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:22:01.699 "strip_size_kb": 0, 00:22:01.699 "state": "online", 00:22:01.699 "raid_level": "raid1", 00:22:01.699 "superblock": true, 00:22:01.699 "num_base_bdevs": 2, 00:22:01.699 "num_base_bdevs_discovered": 1, 00:22:01.699 "num_base_bdevs_operational": 1, 00:22:01.699 "base_bdevs_list": [ 00:22:01.699 { 00:22:01.699 "name": null, 00:22:01.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.699 "is_configured": false, 00:22:01.699 "data_offset": 256, 00:22:01.699 "data_size": 7936 00:22:01.699 }, 00:22:01.699 { 00:22:01.699 "name": "pt2", 00:22:01.699 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:22:01.699 "is_configured": true, 00:22:01.699 "data_offset": 256, 00:22:01.699 "data_size": 7936 00:22:01.699 } 00:22:01.699 ] 00:22:01.699 }' 00:22:01.699 02:22:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.699 02:22:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:02.321 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:02.321 [2024-05-15 02:22:50.202858] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.321 [2024-05-15 02:22:50.202876] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.321 [2024-05-15 02:22:50.202887] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.321 [2024-05-15 02:22:50.202895] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.321 [2024-05-15 02:22:50.202899] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd87180 name raid_bdev1, state offline 00:22:02.321 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.321 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:02.594 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:02.594 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:02.594 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:02.594 02:22:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:02.594 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:02.852 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:02.852 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:02.852 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:02.852 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:02.852 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:22:02.852 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:02.852 [2024-05-15 02:22:50.870903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:02.852 [2024-05-15 02:22:50.870953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.852 [2024-05-15 02:22:50.870976] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bd86f00 00:22:02.852 [2024-05-15 02:22:50.870983] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.852 [2024-05-15 02:22:50.871448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.852 [2024-05-15 02:22:50.871475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:02.852 [2024-05-15 02:22:50.871492] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:02.852 [2024-05-15 02:22:50.871501] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:02.852 [2024-05-15 02:22:50.871527] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bd87180 00:22:02.852 [2024-05-15 02:22:50.871531] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:02.852 [2024-05-15 02:22:50.871548] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bde9e20 00:22:02.852 [2024-05-15 02:22:50.871568] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bd87180 00:22:02.852 [2024-05-15 02:22:50.871571] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bd87180 00:22:02.852 [2024-05-15 02:22:50.871598] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.111 pt2 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.111 02:22:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.369 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.369 "name": "raid_bdev1", 00:22:03.369 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:22:03.370 "strip_size_kb": 0, 00:22:03.370 "state": "online", 00:22:03.370 "raid_level": "raid1", 00:22:03.370 "superblock": true, 00:22:03.370 "num_base_bdevs": 2, 00:22:03.370 "num_base_bdevs_discovered": 1, 00:22:03.370 "num_base_bdevs_operational": 1, 00:22:03.370 "base_bdevs_list": [ 00:22:03.370 { 00:22:03.370 "name": null, 00:22:03.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.370 "is_configured": false, 00:22:03.370 "data_offset": 256, 00:22:03.370 "data_size": 7936 00:22:03.370 }, 00:22:03.370 { 00:22:03.370 "name": "pt2", 00:22:03.370 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:22:03.370 "is_configured": true, 00:22:03.370 "data_offset": 256, 00:22:03.370 "data_size": 7936 00:22:03.370 } 00:22:03.370 ] 00:22:03.370 }' 00:22:03.370 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.370 02:22:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:03.629 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:03.888 [2024-05-15 02:22:51.666964] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:03.888 [2024-05-15 02:22:51.666984] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:03.888 [2024-05-15 02:22:51.666996] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:03.888 [2024-05-15 02:22:51.667004] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:03.888 [2024-05-15 02:22:51.667008] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd87180 name raid_bdev1, state offline 00:22:03.888 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.888 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:03.888 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:03.888 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:03.888 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 
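The entries that follow re-create the pt1 passthru bdev on top of malloc1; pt1's stale superblock triggers re-assembly, but the newer superblock on pt2 (seq_number 4 vs. 2) wins, so raid_bdev1 comes back online with only pt2 configured. Roughly the same effect can be reproduced by hand with the RPC calls the trace shows — a sketch, not the harness itself, assuming a bdev_svc target is already listening on /var/tmp/spdk-raid.sock and that malloc1 and pt2 exist as in this run:

    # re-register pt1 with the UUID the test used; examine then sees both superblocks
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # inspect the re-assembled raid; pt2 (newer superblock) ends up as the only configured slot
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'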
00:22:03.888 02:22:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:04.146 [2024-05-15 02:22:52.159033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:04.146 [2024-05-15 02:22:52.159088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.146 [2024-05-15 02:22:52.159115] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bd86c80 00:22:04.146 [2024-05-15 02:22:52.159121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.146 [2024-05-15 02:22:52.159603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.146 [2024-05-15 02:22:52.159622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:04.146 [2024-05-15 02:22:52.159645] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:04.146 [2024-05-15 02:22:52.159656] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:04.146 [2024-05-15 02:22:52.159670] bdev_raid.c:3489:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:04.146 [2024-05-15 02:22:52.159674] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.146 [2024-05-15 02:22:52.159686] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd86780 name raid_bdev1, state configuring 00:22:04.146 [2024-05-15 02:22:52.159696] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:04.146 [2024-05-15 02:22:52.159710] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bd86780 00:22:04.146 [2024-05-15 02:22:52.159715] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:04.146 [2024-05-15 02:22:52.159738] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bde9e20 00:22:04.146 [2024-05-15 02:22:52.159762] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bd86780 00:22:04.146 [2024-05-15 02:22:52.159769] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bd86780 00:22:04.146 [2024-05-15 02:22:52.159787] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.146 pt1 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.405 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.663 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.663 "name": "raid_bdev1", 00:22:04.663 "uuid": "02a39344-1262-11ef-99fd-bfc7c66e2865", 00:22:04.663 "strip_size_kb": 0, 00:22:04.663 "state": "online", 00:22:04.663 "raid_level": "raid1", 00:22:04.663 "superblock": true, 00:22:04.663 "num_base_bdevs": 2, 00:22:04.663 "num_base_bdevs_discovered": 1, 00:22:04.663 "num_base_bdevs_operational": 1, 00:22:04.663 "base_bdevs_list": [ 00:22:04.663 { 00:22:04.663 "name": null, 00:22:04.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.663 "is_configured": false, 00:22:04.663 "data_offset": 256, 00:22:04.663 "data_size": 7936 00:22:04.663 }, 00:22:04.663 { 00:22:04.663 "name": "pt2", 00:22:04.663 "uuid": "b8ea8400-dd86-ea5a-82de-5514e9e7221c", 00:22:04.663 "is_configured": true, 00:22:04.663 "data_offset": 256, 00:22:04.663 "data_size": 7936 00:22:04.663 } 00:22:04.663 ] 00:22:04.663 }' 00:22:04.663 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.663 02:22:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:04.922 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:04.922 02:22:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:05.180 02:22:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:05.180 02:22:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:05.180 02:22:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:05.439 [2024-05-15 02:22:53.319142] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 02a39344-1262-11ef-99fd-bfc7c66e2865 '!=' 02a39344-1262-11ef-99fd-bfc7c66e2865 ']' 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 64548 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 64548 ']' 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 64548 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:22:05.439 02:22:53 
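The trace then asserts two things about the rebuilt array: the first base-bdev slot (where pt1 used to sit) stays unconfigured, and the raid UUID is unchanged from before the rebuild. A minimal sketch of those two checks under the same assumptions as above:

    # slot 0 reports is_configured == false in this scenario
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs online | jq -r '.[].base_bdevs_list[0].is_configured'
    # the UUID is still 02a39344-1262-11ef-99fd-bfc7c66e2865, as reported earlier in this run
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'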
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps -c -o command 64548 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # tail -1 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:22:05.439 killing process with pid 64548 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64548' 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 64548 00:22:05.439 [2024-05-15 02:22:53.349510] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.439 [2024-05-15 02:22:53.349539] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.439 [2024-05-15 02:22:53.349550] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.439 [2024-05-15 02:22:53.349554] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd86780 name raid_bdev1, state offline 00:22:05.439 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 64548 00:22:05.439 [2024-05-15 02:22:53.359167] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.696 02:22:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:22:05.696 00:22:05.696 real 0m12.481s 00:22:05.696 user 0m22.193s 00:22:05.696 sys 0m2.060s 00:22:05.696 ************************************ 00:22:05.696 END TEST raid_superblock_test_md_separate 00:22:05.696 ************************************ 00:22:05.696 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:05.696 02:22:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:05.696 02:22:53 bdev_raid -- bdev/bdev_raid.sh@841 -- # '[' '' = true ']' 00:22:05.696 02:22:53 bdev_raid -- bdev/bdev_raid.sh@845 -- # base_malloc_params='-m 32 -i' 00:22:05.696 02:22:53 bdev_raid -- bdev/bdev_raid.sh@846 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:05.696 02:22:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:05.696 02:22:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:05.696 02:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:05.696 ************************************ 00:22:05.696 START TEST raid_state_function_test_sb_md_interleaved 00:22:05.696 ************************************ 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:22:05.696 02:22:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # raid_pid=64935 00:22:05.696 Process raid pid: 64935 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 64935' 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@247 -- # waitforlisten 64935 /var/tmp/spdk-raid.sock 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 64935 ']' 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:05.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
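Each of these state-function tests drives a dedicated bdev_svc stub app over a UNIX-domain RPC socket. A condensed sketch of how this run brings that app up (paths exactly as logged; waitforlisten is the polling helper traced from autotest_common.sh):

    # start bdev_svc with bdev_raid debug logging, listening on the raid test socket
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # block until the RPC socket accepts connections (pid 64935 in this run)
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock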
00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.696 02:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:05.696 [2024-05-15 02:22:53.555991] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:05.696 [2024-05-15 02:22:53.556262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:06.261 EAL: TSC is not safe to use in SMP mode 00:22:06.261 EAL: TSC is not invariant 00:22:06.261 [2024-05-15 02:22:54.023116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.261 [2024-05-15 02:22:54.105358] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:06.261 [2024-05-15 02:22:54.107530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.261 [2024-05-15 02:22:54.108214] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.261 [2024-05-15 02:22:54.108231] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.828 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:06.828 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:22:06.828 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:07.088 [2024-05-15 02:22:54.879030] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:07.088 [2024-05-15 02:22:54.879090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:07.088 [2024-05-15 02:22:54.879095] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.088 [2024-05-15 02:22:54.879103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.088 02:22:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.088 02:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.088 02:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.088 "name": "Existed_Raid", 00:22:07.088 "uuid": "099773e0-1262-11ef-99fd-bfc7c66e2865", 00:22:07.088 "strip_size_kb": 0, 00:22:07.088 "state": "configuring", 00:22:07.088 "raid_level": "raid1", 00:22:07.088 "superblock": true, 00:22:07.088 "num_base_bdevs": 2, 00:22:07.088 "num_base_bdevs_discovered": 0, 00:22:07.088 "num_base_bdevs_operational": 2, 00:22:07.088 "base_bdevs_list": [ 00:22:07.088 { 00:22:07.088 "name": "BaseBdev1", 00:22:07.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.088 "is_configured": false, 00:22:07.088 "data_offset": 0, 00:22:07.088 "data_size": 0 00:22:07.088 }, 00:22:07.088 { 00:22:07.088 "name": "BaseBdev2", 00:22:07.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.088 "is_configured": false, 00:22:07.088 "data_offset": 0, 00:22:07.088 "data_size": 0 00:22:07.088 } 00:22:07.088 ] 00:22:07.088 }' 00:22:07.088 02:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.088 02:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.347 02:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:07.606 [2024-05-15 02:22:55.615047] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:07.606 [2024-05-15 02:22:55.615078] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc77500 name Existed_Raid, state configuring 00:22:07.864 02:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:07.864 [2024-05-15 02:22:55.811045] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:07.864 [2024-05-15 02:22:55.811088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:07.864 [2024-05-15 02:22:55.811092] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.864 [2024-05-15 02:22:55.811099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.864 02:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:22:08.122 [2024-05-15 02:22:56.064160] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.122 BaseBdev1 00:22:08.122 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # 
waitforbdev BaseBdev1 00:22:08.122 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:08.122 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:08.122 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:22:08.122 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:08.122 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:08.122 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.382 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:08.641 [ 00:22:08.641 { 00:22:08.641 "name": "BaseBdev1", 00:22:08.641 "aliases": [ 00:22:08.641 "0a4c2103-1262-11ef-99fd-bfc7c66e2865" 00:22:08.641 ], 00:22:08.641 "product_name": "Malloc disk", 00:22:08.641 "block_size": 4128, 00:22:08.641 "num_blocks": 8192, 00:22:08.641 "uuid": "0a4c2103-1262-11ef-99fd-bfc7c66e2865", 00:22:08.641 "md_size": 32, 00:22:08.641 "md_interleave": true, 00:22:08.641 "dif_type": 0, 00:22:08.641 "assigned_rate_limits": { 00:22:08.641 "rw_ios_per_sec": 0, 00:22:08.641 "rw_mbytes_per_sec": 0, 00:22:08.641 "r_mbytes_per_sec": 0, 00:22:08.641 "w_mbytes_per_sec": 0 00:22:08.641 }, 00:22:08.641 "claimed": true, 00:22:08.641 "claim_type": "exclusive_write", 00:22:08.641 "zoned": false, 00:22:08.641 "supported_io_types": { 00:22:08.641 "read": true, 00:22:08.641 "write": true, 00:22:08.641 "unmap": true, 00:22:08.641 "write_zeroes": true, 00:22:08.641 "flush": true, 00:22:08.641 "reset": true, 00:22:08.641 "compare": false, 00:22:08.641 "compare_and_write": false, 00:22:08.641 "abort": true, 00:22:08.641 "nvme_admin": false, 00:22:08.641 "nvme_io": false 00:22:08.641 }, 00:22:08.641 "memory_domains": [ 00:22:08.641 { 00:22:08.641 "dma_device_id": "system", 00:22:08.641 "dma_device_type": 1 00:22:08.641 }, 00:22:08.641 { 00:22:08.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.641 "dma_device_type": 2 00:22:08.641 } 00:22:08.641 ], 00:22:08.641 "driver_specific": {} 00:22:08.641 } 00:22:08.641 ] 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:08.641 02:22:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.641 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.900 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.900 "name": "Existed_Raid", 00:22:08.900 "uuid": "0a25aac4-1262-11ef-99fd-bfc7c66e2865", 00:22:08.900 "strip_size_kb": 0, 00:22:08.900 "state": "configuring", 00:22:08.900 "raid_level": "raid1", 00:22:08.900 "superblock": true, 00:22:08.900 "num_base_bdevs": 2, 00:22:08.900 "num_base_bdevs_discovered": 1, 00:22:08.900 "num_base_bdevs_operational": 2, 00:22:08.900 "base_bdevs_list": [ 00:22:08.900 { 00:22:08.900 "name": "BaseBdev1", 00:22:08.900 "uuid": "0a4c2103-1262-11ef-99fd-bfc7c66e2865", 00:22:08.900 "is_configured": true, 00:22:08.900 "data_offset": 256, 00:22:08.900 "data_size": 7936 00:22:08.900 }, 00:22:08.900 { 00:22:08.900 "name": "BaseBdev2", 00:22:08.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.900 "is_configured": false, 00:22:08.900 "data_offset": 0, 00:22:08.900 "data_size": 0 00:22:08.900 } 00:22:08.900 ] 00:22:08.900 }' 00:22:08.900 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.900 02:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.158 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:09.416 [2024-05-15 02:22:57.291143] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:09.416 [2024-05-15 02:22:57.291180] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc77500 name Existed_Raid, state configuring 00:22:09.416 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:09.675 [2024-05-15 02:22:57.547148] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.675 [2024-05-15 02:22:57.547805] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:09.675 [2024-05-15 02:22:57.547844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.675 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.934 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.934 "name": "Existed_Raid", 00:22:09.934 "uuid": "0b2e932d-1262-11ef-99fd-bfc7c66e2865", 00:22:09.934 "strip_size_kb": 0, 00:22:09.934 "state": "configuring", 00:22:09.934 "raid_level": "raid1", 00:22:09.934 "superblock": true, 00:22:09.934 "num_base_bdevs": 2, 00:22:09.934 "num_base_bdevs_discovered": 1, 00:22:09.934 "num_base_bdevs_operational": 2, 00:22:09.934 "base_bdevs_list": [ 00:22:09.934 { 00:22:09.934 "name": "BaseBdev1", 00:22:09.934 "uuid": "0a4c2103-1262-11ef-99fd-bfc7c66e2865", 00:22:09.934 "is_configured": true, 00:22:09.934 "data_offset": 256, 00:22:09.934 "data_size": 7936 00:22:09.934 }, 00:22:09.934 { 00:22:09.934 "name": "BaseBdev2", 00:22:09.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.934 "is_configured": false, 00:22:09.934 "data_offset": 0, 00:22:09.934 "data_size": 0 00:22:09.934 } 00:22:09.934 ] 00:22:09.934 }' 00:22:09.934 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.934 02:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.193 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:10.451 [2024-05-15 02:22:58.343241] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.451 [2024-05-15 02:22:58.343303] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cc77a00 00:22:10.451 [2024-05-15 02:22:58.343307] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:10.451 [2024-05-15 02:22:58.343324] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ccdae20 
00:22:10.451 [2024-05-15 02:22:58.343336] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cc77a00 00:22:10.451 [2024-05-15 02:22:58.343339] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cc77a00 00:22:10.451 [2024-05-15 02:22:58.343348] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.451 BaseBdev2 00:22:10.451 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:22:10.451 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:10.451 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:10.451 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:22:10.451 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:10.452 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:10.452 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:10.711 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:10.970 [ 00:22:10.970 { 00:22:10.970 "name": "BaseBdev2", 00:22:10.970 "aliases": [ 00:22:10.970 "0ba80b61-1262-11ef-99fd-bfc7c66e2865" 00:22:10.970 ], 00:22:10.970 "product_name": "Malloc disk", 00:22:10.970 "block_size": 4128, 00:22:10.970 "num_blocks": 8192, 00:22:10.970 "uuid": "0ba80b61-1262-11ef-99fd-bfc7c66e2865", 00:22:10.970 "md_size": 32, 00:22:10.970 "md_interleave": true, 00:22:10.970 "dif_type": 0, 00:22:10.970 "assigned_rate_limits": { 00:22:10.970 "rw_ios_per_sec": 0, 00:22:10.970 "rw_mbytes_per_sec": 0, 00:22:10.970 "r_mbytes_per_sec": 0, 00:22:10.970 "w_mbytes_per_sec": 0 00:22:10.970 }, 00:22:10.970 "claimed": true, 00:22:10.970 "claim_type": "exclusive_write", 00:22:10.970 "zoned": false, 00:22:10.970 "supported_io_types": { 00:22:10.970 "read": true, 00:22:10.970 "write": true, 00:22:10.970 "unmap": true, 00:22:10.970 "write_zeroes": true, 00:22:10.970 "flush": true, 00:22:10.970 "reset": true, 00:22:10.970 "compare": false, 00:22:10.970 "compare_and_write": false, 00:22:10.970 "abort": true, 00:22:10.970 "nvme_admin": false, 00:22:10.970 "nvme_io": false 00:22:10.970 }, 00:22:10.970 "memory_domains": [ 00:22:10.970 { 00:22:10.970 "dma_device_id": "system", 00:22:10.970 "dma_device_type": 1 00:22:10.970 }, 00:22:10.970 { 00:22:10.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.970 "dma_device_type": 2 00:22:10.970 } 00:22:10.970 ], 00:22:10.970 "driver_specific": {} 00:22:10.970 } 00:22:10.970 ] 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.970 02:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.229 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.229 "name": "Existed_Raid", 00:22:11.229 "uuid": "0b2e932d-1262-11ef-99fd-bfc7c66e2865", 00:22:11.229 "strip_size_kb": 0, 00:22:11.229 "state": "online", 00:22:11.229 "raid_level": "raid1", 00:22:11.229 "superblock": true, 00:22:11.229 "num_base_bdevs": 2, 00:22:11.229 "num_base_bdevs_discovered": 2, 00:22:11.229 "num_base_bdevs_operational": 2, 00:22:11.229 "base_bdevs_list": [ 00:22:11.229 { 00:22:11.229 "name": "BaseBdev1", 00:22:11.229 "uuid": "0a4c2103-1262-11ef-99fd-bfc7c66e2865", 00:22:11.229 "is_configured": true, 00:22:11.229 "data_offset": 256, 00:22:11.229 "data_size": 7936 00:22:11.229 }, 00:22:11.229 { 00:22:11.229 "name": "BaseBdev2", 00:22:11.229 "uuid": "0ba80b61-1262-11ef-99fd-bfc7c66e2865", 00:22:11.229 "is_configured": true, 00:22:11.229 "data_offset": 256, 00:22:11.229 "data_size": 7936 00:22:11.229 } 00:22:11.229 ] 00:22:11.229 }' 00:22:11.229 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.229 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@199 -- # local name 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:11.488 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:11.748 [2024-05-15 02:22:59.703307] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.748 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:11.748 "name": "Existed_Raid", 00:22:11.748 "aliases": [ 00:22:11.748 "0b2e932d-1262-11ef-99fd-bfc7c66e2865" 00:22:11.748 ], 00:22:11.748 "product_name": "Raid Volume", 00:22:11.748 "block_size": 4128, 00:22:11.748 "num_blocks": 7936, 00:22:11.748 "uuid": "0b2e932d-1262-11ef-99fd-bfc7c66e2865", 00:22:11.748 "md_size": 32, 00:22:11.748 "md_interleave": true, 00:22:11.748 "dif_type": 0, 00:22:11.748 "assigned_rate_limits": { 00:22:11.748 "rw_ios_per_sec": 0, 00:22:11.748 "rw_mbytes_per_sec": 0, 00:22:11.748 "r_mbytes_per_sec": 0, 00:22:11.748 "w_mbytes_per_sec": 0 00:22:11.748 }, 00:22:11.748 "claimed": false, 00:22:11.748 "zoned": false, 00:22:11.748 "supported_io_types": { 00:22:11.748 "read": true, 00:22:11.748 "write": true, 00:22:11.748 "unmap": false, 00:22:11.748 "write_zeroes": true, 00:22:11.748 "flush": false, 00:22:11.748 "reset": true, 00:22:11.748 "compare": false, 00:22:11.748 "compare_and_write": false, 00:22:11.748 "abort": false, 00:22:11.748 "nvme_admin": false, 00:22:11.748 "nvme_io": false 00:22:11.748 }, 00:22:11.748 "memory_domains": [ 00:22:11.748 { 00:22:11.748 "dma_device_id": "system", 00:22:11.748 "dma_device_type": 1 00:22:11.748 }, 00:22:11.748 { 00:22:11.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.748 "dma_device_type": 2 00:22:11.748 }, 00:22:11.748 { 00:22:11.748 "dma_device_id": "system", 00:22:11.748 "dma_device_type": 1 00:22:11.748 }, 00:22:11.748 { 00:22:11.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.748 "dma_device_type": 2 00:22:11.748 } 00:22:11.748 ], 00:22:11.748 "driver_specific": { 00:22:11.748 "raid": { 00:22:11.748 "uuid": "0b2e932d-1262-11ef-99fd-bfc7c66e2865", 00:22:11.748 "strip_size_kb": 0, 00:22:11.748 "state": "online", 00:22:11.748 "raid_level": "raid1", 00:22:11.748 "superblock": true, 00:22:11.748 "num_base_bdevs": 2, 00:22:11.748 "num_base_bdevs_discovered": 2, 00:22:11.748 "num_base_bdevs_operational": 2, 00:22:11.748 "base_bdevs_list": [ 00:22:11.748 { 00:22:11.748 "name": "BaseBdev1", 00:22:11.748 "uuid": "0a4c2103-1262-11ef-99fd-bfc7c66e2865", 00:22:11.748 "is_configured": true, 00:22:11.748 "data_offset": 256, 00:22:11.748 "data_size": 7936 00:22:11.748 }, 00:22:11.748 { 00:22:11.748 "name": "BaseBdev2", 00:22:11.748 "uuid": "0ba80b61-1262-11ef-99fd-bfc7c66e2865", 00:22:11.748 "is_configured": true, 00:22:11.748 "data_offset": 256, 00:22:11.748 "data_size": 7936 00:22:11.748 } 00:22:11.748 ] 00:22:11.748 } 00:22:11.748 } 00:22:11.748 }' 00:22:11.748 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:11.748 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:22:11.748 BaseBdev2' 00:22:11.748 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
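The per-bdev property checks that follow repeat for BaseBdev1 and BaseBdev2; in this interleaved run each malloc bdev reports a 4128-byte block (4096 bytes of data plus the 32 bytes of interleaved metadata), md_size 32 and md_interleave true. A condensed sketch of that loop, with the rpc.py invocation shortened into a variable for readability:

    rpc='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for name in BaseBdev1 BaseBdev2; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        jq .block_size    <<< "$info"   # 4128 in this run
        jq .md_size       <<< "$info"   # 32
        jq .md_interleave <<< "$info"   # true
    done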
00:22:11.748 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:11.748 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:12.008 "name": "BaseBdev1", 00:22:12.008 "aliases": [ 00:22:12.008 "0a4c2103-1262-11ef-99fd-bfc7c66e2865" 00:22:12.008 ], 00:22:12.008 "product_name": "Malloc disk", 00:22:12.008 "block_size": 4128, 00:22:12.008 "num_blocks": 8192, 00:22:12.008 "uuid": "0a4c2103-1262-11ef-99fd-bfc7c66e2865", 00:22:12.008 "md_size": 32, 00:22:12.008 "md_interleave": true, 00:22:12.008 "dif_type": 0, 00:22:12.008 "assigned_rate_limits": { 00:22:12.008 "rw_ios_per_sec": 0, 00:22:12.008 "rw_mbytes_per_sec": 0, 00:22:12.008 "r_mbytes_per_sec": 0, 00:22:12.008 "w_mbytes_per_sec": 0 00:22:12.008 }, 00:22:12.008 "claimed": true, 00:22:12.008 "claim_type": "exclusive_write", 00:22:12.008 "zoned": false, 00:22:12.008 "supported_io_types": { 00:22:12.008 "read": true, 00:22:12.008 "write": true, 00:22:12.008 "unmap": true, 00:22:12.008 "write_zeroes": true, 00:22:12.008 "flush": true, 00:22:12.008 "reset": true, 00:22:12.008 "compare": false, 00:22:12.008 "compare_and_write": false, 00:22:12.008 "abort": true, 00:22:12.008 "nvme_admin": false, 00:22:12.008 "nvme_io": false 00:22:12.008 }, 00:22:12.008 "memory_domains": [ 00:22:12.008 { 00:22:12.008 "dma_device_id": "system", 00:22:12.008 "dma_device_type": 1 00:22:12.008 }, 00:22:12.008 { 00:22:12.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.008 "dma_device_type": 2 00:22:12.008 } 00:22:12.008 ], 00:22:12.008 "driver_specific": {} 00:22:12.008 }' 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:12.008 02:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:12.008 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:12.008 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:12.008 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:12.008 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:12.575 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:12.576 "name": "BaseBdev2", 00:22:12.576 "aliases": [ 00:22:12.576 "0ba80b61-1262-11ef-99fd-bfc7c66e2865" 00:22:12.576 ], 00:22:12.576 "product_name": "Malloc disk", 00:22:12.576 "block_size": 4128, 00:22:12.576 "num_blocks": 8192, 00:22:12.576 "uuid": "0ba80b61-1262-11ef-99fd-bfc7c66e2865", 00:22:12.576 "md_size": 32, 00:22:12.576 "md_interleave": true, 00:22:12.576 "dif_type": 0, 00:22:12.576 "assigned_rate_limits": { 00:22:12.576 "rw_ios_per_sec": 0, 00:22:12.576 "rw_mbytes_per_sec": 0, 00:22:12.576 "r_mbytes_per_sec": 0, 00:22:12.576 "w_mbytes_per_sec": 0 00:22:12.576 }, 00:22:12.576 "claimed": true, 00:22:12.576 "claim_type": "exclusive_write", 00:22:12.576 "zoned": false, 00:22:12.576 "supported_io_types": { 00:22:12.576 "read": true, 00:22:12.576 "write": true, 00:22:12.576 "unmap": true, 00:22:12.576 "write_zeroes": true, 00:22:12.576 "flush": true, 00:22:12.576 "reset": true, 00:22:12.576 "compare": false, 00:22:12.576 "compare_and_write": false, 00:22:12.576 "abort": true, 00:22:12.576 "nvme_admin": false, 00:22:12.576 "nvme_io": false 00:22:12.576 }, 00:22:12.576 "memory_domains": [ 00:22:12.576 { 00:22:12.576 "dma_device_id": "system", 00:22:12.576 "dma_device_type": 1 00:22:12.576 }, 00:22:12.576 { 00:22:12.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.576 "dma_device_type": 2 00:22:12.576 } 00:22:12.576 ], 00:22:12.576 "driver_specific": {} 00:22:12.576 }' 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:12.576 [2024-05-15 02:23:00.511332] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:12.576 02:23:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # local expected_state 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.576 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.835 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.835 "name": "Existed_Raid", 00:22:12.835 "uuid": "0b2e932d-1262-11ef-99fd-bfc7c66e2865", 00:22:12.835 "strip_size_kb": 0, 00:22:12.835 "state": "online", 00:22:12.835 "raid_level": "raid1", 00:22:12.835 "superblock": true, 00:22:12.835 "num_base_bdevs": 2, 00:22:12.835 "num_base_bdevs_discovered": 1, 00:22:12.835 "num_base_bdevs_operational": 1, 00:22:12.835 "base_bdevs_list": [ 00:22:12.835 { 00:22:12.835 "name": null, 00:22:12.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.835 "is_configured": false, 00:22:12.835 "data_offset": 256, 00:22:12.835 "data_size": 7936 00:22:12.835 }, 00:22:12.835 { 00:22:12.835 "name": "BaseBdev2", 00:22:12.835 "uuid": "0ba80b61-1262-11ef-99fd-bfc7c66e2865", 00:22:12.836 "is_configured": true, 00:22:12.836 "data_offset": 256, 00:22:12.836 "data_size": 7936 00:22:12.836 } 00:22:12.836 ] 00:22:12.836 }' 00:22:12.836 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.836 02:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.104 02:23:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:13.104 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:13.104 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.104 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:22:13.363 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:22:13.363 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:13.363 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:13.622 [2024-05-15 02:23:01.507990] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:13.622 [2024-05-15 02:23:01.508018] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:13.622 [2024-05-15 02:23:01.512709] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:13.622 [2024-05-15 02:23:01.512725] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:13.622 [2024-05-15 02:23:01.512728] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cc77a00 name Existed_Raid, state offline 00:22:13.622 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:13.622 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:13.622 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:22:13.622 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@342 -- # killprocess 64935 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 64935 ']' 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 64935 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 64935 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:22:13.881 
02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:22:13.881 killing process with pid 64935 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64935' 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 64935 00:22:13.881 [2024-05-15 02:23:01.780024] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:13.881 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 64935 00:22:13.881 [2024-05-15 02:23:01.780055] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:14.140 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@344 -- # return 0 00:22:14.140 00:22:14.140 real 0m8.373s 00:22:14.140 user 0m14.529s 00:22:14.140 sys 0m1.496s 00:22:14.140 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:14.140 02:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.140 ************************************ 00:22:14.140 END TEST raid_state_function_test_sb_md_interleaved 00:22:14.140 ************************************ 00:22:14.140 02:23:01 bdev_raid -- bdev/bdev_raid.sh@847 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:14.140 02:23:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:14.140 02:23:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:14.140 02:23:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:14.140 ************************************ 00:22:14.140 START TEST raid_superblock_test_md_interleaved 00:22:14.140 ************************************ 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=65205 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 65205 /var/tmp/spdk-raid.sock 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 65205 ']' 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:14.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.140 02:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.140 [2024-05-15 02:23:01.970887] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:14.140 [2024-05-15 02:23:01.971055] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:14.399 EAL: TSC is not safe to use in SMP mode 00:22:14.399 EAL: TSC is not invariant 00:22:14.658 [2024-05-15 02:23:02.426545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.658 [2024-05-15 02:23:02.503322] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:22:14.658 [2024-05-15 02:23:02.505374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.658 [2024-05-15 02:23:02.506042] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.658 [2024-05-15 02:23:02.506055] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:15.226 02:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:15.485 malloc1 00:22:15.485 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:15.743 [2024-05-15 02:23:03.548282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:15.743 [2024-05-15 02:23:03.548331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.743 [2024-05-15 02:23:03.548849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d38f780 00:22:15.743 [2024-05-15 02:23:03.548879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.743 [2024-05-15 02:23:03.549497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.743 [2024-05-15 02:23:03.549525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:15.743 pt1 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:15.743 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:15.743 malloc2 00:22:16.001 02:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:16.001 [2024-05-15 02:23:04.004308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:16.001 [2024-05-15 02:23:04.004357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.001 [2024-05-15 02:23:04.004398] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d38fc80 00:22:16.001 [2024-05-15 02:23:04.004405] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.001 [2024-05-15 02:23:04.004792] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.001 [2024-05-15 02:23:04.004817] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:16.001 pt2 00:22:16.001 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:16.001 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.001 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:22:16.259 [2024-05-15 02:23:04.256322] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:16.259 [2024-05-15 02:23:04.256691] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:16.259 [2024-05-15 02:23:04.256740] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d38ff00 00:22:16.259 [2024-05-15 02:23:04.256745] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:16.259 [2024-05-15 02:23:04.256777] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d3f2e20 00:22:16.259 [2024-05-15 02:23:04.256790] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d38ff00 00:22:16.259 [2024-05-15 02:23:04.256793] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d38ff00 00:22:16.259 [2024-05-15 02:23:04.256802] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:16.259 02:23:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.259 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.824 "name": "raid_bdev1", 00:22:16.824 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:16.824 "strip_size_kb": 0, 00:22:16.824 "state": "online", 00:22:16.824 "raid_level": "raid1", 00:22:16.824 "superblock": true, 00:22:16.824 "num_base_bdevs": 2, 00:22:16.824 "num_base_bdevs_discovered": 2, 00:22:16.824 "num_base_bdevs_operational": 2, 00:22:16.824 "base_bdevs_list": [ 00:22:16.824 { 00:22:16.824 "name": "pt1", 00:22:16.824 "uuid": "f61a7dc8-adb8-e053-a4c7-84388f051572", 00:22:16.824 "is_configured": true, 00:22:16.824 "data_offset": 256, 00:22:16.824 "data_size": 7936 00:22:16.824 }, 00:22:16.824 { 00:22:16.824 "name": "pt2", 00:22:16.824 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:16.824 "is_configured": true, 00:22:16.824 "data_offset": 256, 00:22:16.824 "data_size": 7936 00:22:16.824 } 00:22:16.824 ] 00:22:16.824 }' 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:16.824 02:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:17.391 [2024-05-15 02:23:05.112385] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.391 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:17.391 "name": 
"raid_bdev1", 00:22:17.391 "aliases": [ 00:22:17.391 "0f2e504e-1262-11ef-99fd-bfc7c66e2865" 00:22:17.391 ], 00:22:17.391 "product_name": "Raid Volume", 00:22:17.391 "block_size": 4128, 00:22:17.391 "num_blocks": 7936, 00:22:17.391 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:17.391 "md_size": 32, 00:22:17.391 "md_interleave": true, 00:22:17.391 "dif_type": 0, 00:22:17.391 "assigned_rate_limits": { 00:22:17.391 "rw_ios_per_sec": 0, 00:22:17.391 "rw_mbytes_per_sec": 0, 00:22:17.391 "r_mbytes_per_sec": 0, 00:22:17.391 "w_mbytes_per_sec": 0 00:22:17.391 }, 00:22:17.391 "claimed": false, 00:22:17.391 "zoned": false, 00:22:17.391 "supported_io_types": { 00:22:17.391 "read": true, 00:22:17.391 "write": true, 00:22:17.391 "unmap": false, 00:22:17.391 "write_zeroes": true, 00:22:17.391 "flush": false, 00:22:17.391 "reset": true, 00:22:17.391 "compare": false, 00:22:17.391 "compare_and_write": false, 00:22:17.391 "abort": false, 00:22:17.391 "nvme_admin": false, 00:22:17.391 "nvme_io": false 00:22:17.391 }, 00:22:17.391 "memory_domains": [ 00:22:17.391 { 00:22:17.391 "dma_device_id": "system", 00:22:17.391 "dma_device_type": 1 00:22:17.391 }, 00:22:17.391 { 00:22:17.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.391 "dma_device_type": 2 00:22:17.391 }, 00:22:17.391 { 00:22:17.391 "dma_device_id": "system", 00:22:17.391 "dma_device_type": 1 00:22:17.391 }, 00:22:17.391 { 00:22:17.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.391 "dma_device_type": 2 00:22:17.391 } 00:22:17.391 ], 00:22:17.392 "driver_specific": { 00:22:17.392 "raid": { 00:22:17.392 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:17.392 "strip_size_kb": 0, 00:22:17.392 "state": "online", 00:22:17.392 "raid_level": "raid1", 00:22:17.392 "superblock": true, 00:22:17.392 "num_base_bdevs": 2, 00:22:17.392 "num_base_bdevs_discovered": 2, 00:22:17.392 "num_base_bdevs_operational": 2, 00:22:17.392 "base_bdevs_list": [ 00:22:17.392 { 00:22:17.392 "name": "pt1", 00:22:17.392 "uuid": "f61a7dc8-adb8-e053-a4c7-84388f051572", 00:22:17.392 "is_configured": true, 00:22:17.392 "data_offset": 256, 00:22:17.392 "data_size": 7936 00:22:17.392 }, 00:22:17.392 { 00:22:17.392 "name": "pt2", 00:22:17.392 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:17.392 "is_configured": true, 00:22:17.392 "data_offset": 256, 00:22:17.392 "data_size": 7936 00:22:17.392 } 00:22:17.392 ] 00:22:17.392 } 00:22:17.392 } 00:22:17.392 }' 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:17.392 pt2' 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:17.392 "name": "pt1", 00:22:17.392 "aliases": [ 00:22:17.392 "f61a7dc8-adb8-e053-a4c7-84388f051572" 00:22:17.392 ], 00:22:17.392 "product_name": "passthru", 00:22:17.392 "block_size": 4128, 00:22:17.392 "num_blocks": 8192, 00:22:17.392 "uuid": 
"f61a7dc8-adb8-e053-a4c7-84388f051572", 00:22:17.392 "md_size": 32, 00:22:17.392 "md_interleave": true, 00:22:17.392 "dif_type": 0, 00:22:17.392 "assigned_rate_limits": { 00:22:17.392 "rw_ios_per_sec": 0, 00:22:17.392 "rw_mbytes_per_sec": 0, 00:22:17.392 "r_mbytes_per_sec": 0, 00:22:17.392 "w_mbytes_per_sec": 0 00:22:17.392 }, 00:22:17.392 "claimed": true, 00:22:17.392 "claim_type": "exclusive_write", 00:22:17.392 "zoned": false, 00:22:17.392 "supported_io_types": { 00:22:17.392 "read": true, 00:22:17.392 "write": true, 00:22:17.392 "unmap": true, 00:22:17.392 "write_zeroes": true, 00:22:17.392 "flush": true, 00:22:17.392 "reset": true, 00:22:17.392 "compare": false, 00:22:17.392 "compare_and_write": false, 00:22:17.392 "abort": true, 00:22:17.392 "nvme_admin": false, 00:22:17.392 "nvme_io": false 00:22:17.392 }, 00:22:17.392 "memory_domains": [ 00:22:17.392 { 00:22:17.392 "dma_device_id": "system", 00:22:17.392 "dma_device_type": 1 00:22:17.392 }, 00:22:17.392 { 00:22:17.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.392 "dma_device_type": 2 00:22:17.392 } 00:22:17.392 ], 00:22:17.392 "driver_specific": { 00:22:17.392 "passthru": { 00:22:17.392 "name": "pt1", 00:22:17.392 "base_bdev_name": "malloc1" 00:22:17.392 } 00:22:17.392 } 00:22:17.392 }' 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:17.392 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:17.650 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:17.909 "name": "pt2", 00:22:17.909 "aliases": [ 00:22:17.909 "722016c7-b56e-cb5b-82c9-8e103936674f" 00:22:17.909 ], 00:22:17.909 "product_name": "passthru", 00:22:17.909 "block_size": 4128, 00:22:17.909 "num_blocks": 8192, 00:22:17.909 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:17.909 "md_size": 32, 00:22:17.909 "md_interleave": true, 00:22:17.909 
"dif_type": 0, 00:22:17.909 "assigned_rate_limits": { 00:22:17.909 "rw_ios_per_sec": 0, 00:22:17.909 "rw_mbytes_per_sec": 0, 00:22:17.909 "r_mbytes_per_sec": 0, 00:22:17.909 "w_mbytes_per_sec": 0 00:22:17.909 }, 00:22:17.909 "claimed": true, 00:22:17.909 "claim_type": "exclusive_write", 00:22:17.909 "zoned": false, 00:22:17.909 "supported_io_types": { 00:22:17.909 "read": true, 00:22:17.909 "write": true, 00:22:17.909 "unmap": true, 00:22:17.909 "write_zeroes": true, 00:22:17.909 "flush": true, 00:22:17.909 "reset": true, 00:22:17.909 "compare": false, 00:22:17.909 "compare_and_write": false, 00:22:17.909 "abort": true, 00:22:17.909 "nvme_admin": false, 00:22:17.909 "nvme_io": false 00:22:17.909 }, 00:22:17.909 "memory_domains": [ 00:22:17.909 { 00:22:17.909 "dma_device_id": "system", 00:22:17.909 "dma_device_type": 1 00:22:17.909 }, 00:22:17.909 { 00:22:17.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.909 "dma_device_type": 2 00:22:17.909 } 00:22:17.909 ], 00:22:17.909 "driver_specific": { 00:22:17.909 "passthru": { 00:22:17.909 "name": "pt2", 00:22:17.909 "base_bdev_name": "malloc2" 00:22:17.909 } 00:22:17.909 } 00:22:17.909 }' 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:17.909 02:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:18.166 [2024-05-15 02:23:06.096480] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.166 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0f2e504e-1262-11ef-99fd-bfc7c66e2865 00:22:18.166 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 0f2e504e-1262-11ef-99fd-bfc7c66e2865 ']' 00:22:18.166 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:18.424 [2024-05-15 02:23:06.340449] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:22:18.424 [2024-05-15 02:23:06.340469] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.424 [2024-05-15 02:23:06.340490] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.424 [2024-05-15 02:23:06.340502] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.425 [2024-05-15 02:23:06.340505] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d38ff00 name raid_bdev1, state offline 00:22:18.425 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.425 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:18.683 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:18.683 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:18.683 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.683 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:18.946 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.946 02:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:19.238 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:19.238 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:19.497 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:19.802 [2024-05-15 02:23:07.520511] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:19.802 [2024-05-15 02:23:07.520939] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:19.802 [2024-05-15 02:23:07.520961] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:19.802 [2024-05-15 02:23:07.520989] bdev_raid.c:3046:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:19.803 [2024-05-15 02:23:07.520997] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:19.803 [2024-05-15 02:23:07.521001] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d38fc80 name raid_bdev1, state configuring 00:22:19.803 request: 00:22:19.803 { 00:22:19.803 "name": "raid_bdev1", 00:22:19.803 "raid_level": "raid1", 00:22:19.803 "base_bdevs": [ 00:22:19.803 "malloc1", 00:22:19.803 "malloc2" 00:22:19.803 ], 00:22:19.803 "superblock": false, 00:22:19.803 "method": "bdev_raid_create", 00:22:19.803 "req_id": 1 00:22:19.803 } 00:22:19.803 Got JSON-RPC error response 00:22:19.803 response: 00:22:19.803 { 00:22:19.803 "code": -17, 00:22:19.803 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:19.803 } 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:19.803 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:20.061 [2024-05-15 02:23:07.952530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:20.061 [2024-05-15 02:23:07.952573] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.061 [2024-05-15 02:23:07.952598] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d38f780 00:22:20.061 [2024-05-15 02:23:07.952604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.061 [2024-05-15 02:23:07.953019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.061 [2024-05-15 02:23:07.953044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:20.061 [2024-05-15 02:23:07.953058] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:20.061 [2024-05-15 02:23:07.953067] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:20.061 pt1 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.061 02:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.319 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.319 "name": "raid_bdev1", 00:22:20.319 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:20.319 "strip_size_kb": 0, 00:22:20.319 "state": "configuring", 00:22:20.319 "raid_level": "raid1", 00:22:20.319 "superblock": true, 00:22:20.319 "num_base_bdevs": 2, 00:22:20.319 "num_base_bdevs_discovered": 1, 00:22:20.319 "num_base_bdevs_operational": 2, 00:22:20.319 "base_bdevs_list": [ 00:22:20.319 { 00:22:20.319 "name": "pt1", 00:22:20.319 "uuid": "f61a7dc8-adb8-e053-a4c7-84388f051572", 00:22:20.319 "is_configured": true, 00:22:20.319 "data_offset": 256, 00:22:20.319 "data_size": 7936 00:22:20.319 }, 00:22:20.319 { 00:22:20.319 "name": null, 00:22:20.319 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:20.319 "is_configured": false, 00:22:20.319 "data_offset": 256, 00:22:20.319 "data_size": 7936 00:22:20.319 } 00:22:20.319 ] 00:22:20.319 }' 00:22:20.319 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.319 02:23:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:20.577 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:20.577 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:20.577 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:20.577 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:20.835 [2024-05-15 02:23:08.640580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:20.835 [2024-05-15 02:23:08.640629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.835 [2024-05-15 02:23:08.640653] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d38ff00 00:22:20.835 [2024-05-15 02:23:08.640659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.835 [2024-05-15 02:23:08.640715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.835 [2024-05-15 02:23:08.640723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.835 [2024-05-15 02:23:08.640735] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:20.835 [2024-05-15 02:23:08.640742] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.835 [2024-05-15 02:23:08.640757] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d390180 00:22:20.835 [2024-05-15 02:23:08.640761] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:20.835 [2024-05-15 02:23:08.640776] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d3f2e20 00:22:20.835 [2024-05-15 02:23:08.640787] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d390180 00:22:20.835 [2024-05-15 02:23:08.640790] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d390180 00:22:20.835 [2024-05-15 02:23:08.640799] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.835 pt2 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.835 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.093 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.093 "name": "raid_bdev1", 00:22:21.093 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:21.093 "strip_size_kb": 0, 00:22:21.093 "state": "online", 00:22:21.093 "raid_level": "raid1", 00:22:21.093 "superblock": true, 00:22:21.093 "num_base_bdevs": 2, 00:22:21.093 "num_base_bdevs_discovered": 2, 00:22:21.093 "num_base_bdevs_operational": 2, 00:22:21.093 "base_bdevs_list": [ 00:22:21.093 { 00:22:21.093 "name": "pt1", 00:22:21.093 "uuid": "f61a7dc8-adb8-e053-a4c7-84388f051572", 00:22:21.093 "is_configured": true, 00:22:21.093 "data_offset": 256, 00:22:21.093 "data_size": 7936 00:22:21.093 }, 00:22:21.093 { 00:22:21.093 "name": "pt2", 00:22:21.093 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:21.093 "is_configured": true, 00:22:21.093 "data_offset": 256, 00:22:21.093 "data_size": 7936 00:22:21.093 } 00:22:21.093 ] 00:22:21.093 }' 00:22:21.093 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.094 02:23:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:21.381 [2024-05-15 02:23:09.348665] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:21.381 "name": "raid_bdev1", 00:22:21.381 "aliases": [ 00:22:21.381 "0f2e504e-1262-11ef-99fd-bfc7c66e2865" 00:22:21.381 ], 00:22:21.381 "product_name": "Raid Volume", 00:22:21.381 "block_size": 4128, 00:22:21.381 "num_blocks": 7936, 00:22:21.381 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:21.381 "md_size": 32, 00:22:21.381 "md_interleave": true, 
00:22:21.381 "dif_type": 0, 00:22:21.381 "assigned_rate_limits": { 00:22:21.381 "rw_ios_per_sec": 0, 00:22:21.381 "rw_mbytes_per_sec": 0, 00:22:21.381 "r_mbytes_per_sec": 0, 00:22:21.381 "w_mbytes_per_sec": 0 00:22:21.381 }, 00:22:21.381 "claimed": false, 00:22:21.381 "zoned": false, 00:22:21.381 "supported_io_types": { 00:22:21.381 "read": true, 00:22:21.381 "write": true, 00:22:21.381 "unmap": false, 00:22:21.381 "write_zeroes": true, 00:22:21.381 "flush": false, 00:22:21.381 "reset": true, 00:22:21.381 "compare": false, 00:22:21.381 "compare_and_write": false, 00:22:21.381 "abort": false, 00:22:21.381 "nvme_admin": false, 00:22:21.381 "nvme_io": false 00:22:21.381 }, 00:22:21.381 "memory_domains": [ 00:22:21.381 { 00:22:21.381 "dma_device_id": "system", 00:22:21.381 "dma_device_type": 1 00:22:21.381 }, 00:22:21.381 { 00:22:21.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.381 "dma_device_type": 2 00:22:21.381 }, 00:22:21.381 { 00:22:21.381 "dma_device_id": "system", 00:22:21.381 "dma_device_type": 1 00:22:21.381 }, 00:22:21.381 { 00:22:21.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.381 "dma_device_type": 2 00:22:21.381 } 00:22:21.381 ], 00:22:21.381 "driver_specific": { 00:22:21.381 "raid": { 00:22:21.381 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:21.381 "strip_size_kb": 0, 00:22:21.381 "state": "online", 00:22:21.381 "raid_level": "raid1", 00:22:21.381 "superblock": true, 00:22:21.381 "num_base_bdevs": 2, 00:22:21.381 "num_base_bdevs_discovered": 2, 00:22:21.381 "num_base_bdevs_operational": 2, 00:22:21.381 "base_bdevs_list": [ 00:22:21.381 { 00:22:21.381 "name": "pt1", 00:22:21.381 "uuid": "f61a7dc8-adb8-e053-a4c7-84388f051572", 00:22:21.381 "is_configured": true, 00:22:21.381 "data_offset": 256, 00:22:21.381 "data_size": 7936 00:22:21.381 }, 00:22:21.381 { 00:22:21.381 "name": "pt2", 00:22:21.381 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:21.381 "is_configured": true, 00:22:21.381 "data_offset": 256, 00:22:21.381 "data_size": 7936 00:22:21.381 } 00:22:21.381 ] 00:22:21.381 } 00:22:21.381 } 00:22:21.381 }' 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:21.381 pt2' 00:22:21.381 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:21.640 "name": "pt1", 00:22:21.640 "aliases": [ 00:22:21.640 "f61a7dc8-adb8-e053-a4c7-84388f051572" 00:22:21.640 ], 00:22:21.640 "product_name": "passthru", 00:22:21.640 "block_size": 4128, 00:22:21.640 "num_blocks": 8192, 00:22:21.640 "uuid": "f61a7dc8-adb8-e053-a4c7-84388f051572", 00:22:21.640 "md_size": 32, 00:22:21.640 "md_interleave": true, 00:22:21.640 "dif_type": 0, 00:22:21.640 "assigned_rate_limits": { 00:22:21.640 "rw_ios_per_sec": 0, 00:22:21.640 "rw_mbytes_per_sec": 0, 00:22:21.640 "r_mbytes_per_sec": 0, 00:22:21.640 "w_mbytes_per_sec": 0 00:22:21.640 }, 00:22:21.640 
"claimed": true, 00:22:21.640 "claim_type": "exclusive_write", 00:22:21.640 "zoned": false, 00:22:21.640 "supported_io_types": { 00:22:21.640 "read": true, 00:22:21.640 "write": true, 00:22:21.640 "unmap": true, 00:22:21.640 "write_zeroes": true, 00:22:21.640 "flush": true, 00:22:21.640 "reset": true, 00:22:21.640 "compare": false, 00:22:21.640 "compare_and_write": false, 00:22:21.640 "abort": true, 00:22:21.640 "nvme_admin": false, 00:22:21.640 "nvme_io": false 00:22:21.640 }, 00:22:21.640 "memory_domains": [ 00:22:21.640 { 00:22:21.640 "dma_device_id": "system", 00:22:21.640 "dma_device_type": 1 00:22:21.640 }, 00:22:21.640 { 00:22:21.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.640 "dma_device_type": 2 00:22:21.640 } 00:22:21.640 ], 00:22:21.640 "driver_specific": { 00:22:21.640 "passthru": { 00:22:21.640 "name": "pt1", 00:22:21.640 "base_bdev_name": "malloc1" 00:22:21.640 } 00:22:21.640 } 00:22:21.640 }' 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:21.640 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:21.899 "name": "pt2", 00:22:21.899 "aliases": [ 00:22:21.899 "722016c7-b56e-cb5b-82c9-8e103936674f" 00:22:21.899 ], 00:22:21.899 "product_name": "passthru", 00:22:21.899 "block_size": 4128, 00:22:21.899 "num_blocks": 8192, 00:22:21.899 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:21.899 "md_size": 32, 00:22:21.899 "md_interleave": true, 00:22:21.899 "dif_type": 0, 00:22:21.899 "assigned_rate_limits": { 00:22:21.899 "rw_ios_per_sec": 0, 00:22:21.899 "rw_mbytes_per_sec": 0, 00:22:21.899 "r_mbytes_per_sec": 0, 00:22:21.899 "w_mbytes_per_sec": 0 00:22:21.899 }, 00:22:21.899 "claimed": true, 00:22:21.899 "claim_type": "exclusive_write", 00:22:21.899 "zoned": false, 00:22:21.899 "supported_io_types": 
{ 00:22:21.899 "read": true, 00:22:21.899 "write": true, 00:22:21.899 "unmap": true, 00:22:21.899 "write_zeroes": true, 00:22:21.899 "flush": true, 00:22:21.899 "reset": true, 00:22:21.899 "compare": false, 00:22:21.899 "compare_and_write": false, 00:22:21.899 "abort": true, 00:22:21.899 "nvme_admin": false, 00:22:21.899 "nvme_io": false 00:22:21.899 }, 00:22:21.899 "memory_domains": [ 00:22:21.899 { 00:22:21.899 "dma_device_id": "system", 00:22:21.899 "dma_device_type": 1 00:22:21.899 }, 00:22:21.899 { 00:22:21.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.899 "dma_device_type": 2 00:22:21.899 } 00:22:21.899 ], 00:22:21.899 "driver_specific": { 00:22:21.899 "passthru": { 00:22:21.899 "name": "pt2", 00:22:21.899 "base_bdev_name": "malloc2" 00:22:21.899 } 00:22:21.899 } 00:22:21.899 }' 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:21.899 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:22.158 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:22.158 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:22:22.158 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:22.158 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:22.158 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:22.158 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:22.158 02:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:22.158 [2024-05-15 02:23:10.180728] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:22.415 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 0f2e504e-1262-11ef-99fd-bfc7c66e2865 '!=' 0f2e504e-1262-11ef-99fd-bfc7c66e2865 ']' 00:22:22.415 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:22.415 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:22.415 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:22:22.415 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:22.673 [2024-05-15 02:23:10.448711] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.673 "name": "raid_bdev1", 00:22:22.673 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:22.673 "strip_size_kb": 0, 00:22:22.673 "state": "online", 00:22:22.673 "raid_level": "raid1", 00:22:22.673 "superblock": true, 00:22:22.673 "num_base_bdevs": 2, 00:22:22.673 "num_base_bdevs_discovered": 1, 00:22:22.673 "num_base_bdevs_operational": 1, 00:22:22.673 "base_bdevs_list": [ 00:22:22.673 { 00:22:22.673 "name": null, 00:22:22.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.673 "is_configured": false, 00:22:22.673 "data_offset": 256, 00:22:22.673 "data_size": 7936 00:22:22.673 }, 00:22:22.673 { 00:22:22.673 "name": "pt2", 00:22:22.673 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:22.673 "is_configured": true, 00:22:22.673 "data_offset": 256, 00:22:22.673 "data_size": 7936 00:22:22.673 } 00:22:22.673 ] 00:22:22.673 }' 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.673 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.931 02:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:23.190 [2024-05-15 02:23:11.136733] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:23.190 [2024-05-15 02:23:11.136752] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.190 [2024-05-15 02:23:11.136762] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.190 [2024-05-15 02:23:11.136770] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.190 [2024-05-15 02:23:11.136774] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d390180 name raid_bdev1, state offline 00:22:23.190 02:23:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:23.190 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.455 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:23.455 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:23.455 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:23.455 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:23.455 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:23.722 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:23.722 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:23.722 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:23.722 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:23.722 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:23.722 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:23.982 [2024-05-15 02:23:11.768768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:23.982 [2024-05-15 02:23:11.768813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.982 [2024-05-15 02:23:11.768837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d38ff00 00:22:23.982 [2024-05-15 02:23:11.768844] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.982 [2024-05-15 02:23:11.769272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.982 [2024-05-15 02:23:11.769297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:23.982 [2024-05-15 02:23:11.769310] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:23.982 [2024-05-15 02:23:11.769319] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:23.982 [2024-05-15 02:23:11.769332] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d390180 00:22:23.982 [2024-05-15 02:23:11.769336] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:23.982 [2024-05-15 02:23:11.769352] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d3f2e20 00:22:23.982 [2024-05-15 02:23:11.769362] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d390180 00:22:23.982 [2024-05-15 02:23:11.769365] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d390180 00:22:23.982 [2024-05-15 02:23:11.769372] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.982 pt2 00:22:23.982 02:23:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.982 "name": "raid_bdev1", 00:22:23.982 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:23.982 "strip_size_kb": 0, 00:22:23.982 "state": "online", 00:22:23.982 "raid_level": "raid1", 00:22:23.982 "superblock": true, 00:22:23.982 "num_base_bdevs": 2, 00:22:23.982 "num_base_bdevs_discovered": 1, 00:22:23.982 "num_base_bdevs_operational": 1, 00:22:23.982 "base_bdevs_list": [ 00:22:23.982 { 00:22:23.982 "name": null, 00:22:23.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.982 "is_configured": false, 00:22:23.982 "data_offset": 256, 00:22:23.982 "data_size": 7936 00:22:23.982 }, 00:22:23.982 { 00:22:23.982 "name": "pt2", 00:22:23.982 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:23.982 "is_configured": true, 00:22:23.982 "data_offset": 256, 00:22:23.982 "data_size": 7936 00:22:23.982 } 00:22:23.982 ] 00:22:23.982 }' 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.982 02:23:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.241 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:24.499 [2024-05-15 02:23:12.488800] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.499 [2024-05-15 02:23:12.488824] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:24.499 [2024-05-15 02:23:12.488838] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.499 [2024-05-15 02:23:12.488847] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.499 [2024-05-15 02:23:12.488850] bdev_raid.c: 
348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d390180 name raid_bdev1, state offline 00:22:24.499 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:24.499 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.758 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:24.758 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:24.758 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:24.758 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:25.016 [2024-05-15 02:23:12.932828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:25.016 [2024-05-15 02:23:12.932871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.016 [2024-05-15 02:23:12.932894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d38fc80 00:22:25.016 [2024-05-15 02:23:12.932901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.016 [2024-05-15 02:23:12.933337] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.016 [2024-05-15 02:23:12.933367] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:25.016 [2024-05-15 02:23:12.933381] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:25.016 [2024-05-15 02:23:12.933390] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:25.016 [2024-05-15 02:23:12.933407] bdev_raid.c:3489:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:25.016 [2024-05-15 02:23:12.933410] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:25.016 [2024-05-15 02:23:12.933414] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d38f780 name raid_bdev1, state configuring 00:22:25.016 [2024-05-15 02:23:12.933422] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:25.016 [2024-05-15 02:23:12.933434] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d38f780 00:22:25.016 [2024-05-15 02:23:12.933437] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:25.016 [2024-05-15 02:23:12.933454] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d3f2e20 00:22:25.016 [2024-05-15 02:23:12.933464] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d38f780 00:22:25.016 [2024-05-15 02:23:12.933467] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d38f780 00:22:25.016 [2024-05-15 02:23:12.933474] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.016 pt1 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.016 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.017 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.017 02:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.275 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:25.275 "name": "raid_bdev1", 00:22:25.275 "uuid": "0f2e504e-1262-11ef-99fd-bfc7c66e2865", 00:22:25.275 "strip_size_kb": 0, 00:22:25.275 "state": "online", 00:22:25.275 "raid_level": "raid1", 00:22:25.275 "superblock": true, 00:22:25.275 "num_base_bdevs": 2, 00:22:25.275 "num_base_bdevs_discovered": 1, 00:22:25.275 "num_base_bdevs_operational": 1, 00:22:25.275 "base_bdevs_list": [ 00:22:25.275 { 00:22:25.275 "name": null, 00:22:25.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.275 "is_configured": false, 00:22:25.275 "data_offset": 256, 00:22:25.275 "data_size": 7936 00:22:25.275 }, 00:22:25.275 { 00:22:25.275 "name": "pt2", 00:22:25.275 "uuid": "722016c7-b56e-cb5b-82c9-8e103936674f", 00:22:25.275 "is_configured": true, 00:22:25.275 "data_offset": 256, 00:22:25.275 "data_size": 7936 00:22:25.275 } 00:22:25.275 ] 00:22:25.275 }' 00:22:25.275 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:25.275 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.534 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:25.534 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:25.833 [2024-05-15 02:23:13.788905] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 0f2e504e-1262-11ef-99fd-bfc7c66e2865 '!=' 0f2e504e-1262-11ef-99fd-bfc7c66e2865 ']' 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 65205 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 65205 ']' 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 65205 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 65205 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:22:25.833 killing process with pid 65205 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65205' 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 65205 00:22:25.833 [2024-05-15 02:23:13.819357] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.833 [2024-05-15 02:23:13.819370] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.833 [2024-05-15 02:23:13.819389] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.833 [2024-05-15 02:23:13.819393] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d38f780 name raid_bdev1, state offline 00:22:25.833 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 65205 00:22:25.833 [2024-05-15 02:23:13.828728] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.107 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:26.107 00:22:26.107 real 0m12.003s 00:22:26.107 user 0m21.269s 00:22:26.107 sys 0m2.019s 00:22:26.107 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:26.107 02:23:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.107 ************************************ 00:22:26.107 END TEST raid_superblock_test_md_interleaved 00:22:26.107 ************************************ 00:22:26.107 02:23:13 bdev_raid -- bdev/bdev_raid.sh@848 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:26.107 02:23:13 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:22:26.107 02:23:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:26.107 02:23:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:26.107 ************************************ 00:22:26.107 START TEST raid_rebuild_test_sb_md_interleaved 00:22:26.107 ************************************ 00:22:26.107 02:23:14 
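A condensed, runnable sketch of the RPC sequence that raid_superblock_test_md_interleaved exercised above, reassembled from the trace rather than taken as additional test output. The rpc.py path, socket, bdev names, and UUID are the ones the test used; the comments describe what the trace shows happening after each call.

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Inspect the assembled raid1 volume and its two passthru base bdevs
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    $RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]'

    # Removing one base bdev keeps raid_bdev1 online, with
    # num_base_bdevs_discovered/operational dropping from 2 to 1
    $RPC bdev_passthru_delete pt1

    # Tear the volume down, then re-register the surviving base bdev; examine
    # finds the md_interleaved superblock on pt2 and re-creates raid_bdev1
    $RPC bdev_raid_delete raid_bdev1
    $RPC bdev_passthru_delete pt2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002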
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=65588 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 65588 /var/tmp/spdk-raid.sock 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 65588 ']' 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.107 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.107 [2024-05-15 02:23:14.021631] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:26.107 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:26.107 Zero copy mechanism will not be used. 00:22:26.107 [2024-05-15 02:23:14.021824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:26.674 EAL: TSC is not safe to use in SMP mode 00:22:26.674 EAL: TSC is not invariant 00:22:26.674 [2024-05-15 02:23:14.523217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.674 [2024-05-15 02:23:14.618701] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:26.674 [2024-05-15 02:23:14.621321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.674 [2024-05-15 02:23:14.622227] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:26.674 [2024-05-15 02:23:14.622243] bdev_raid.c:1433:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.243 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:27.243 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:22:27.243 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:27.244 02:23:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:27.244 BaseBdev1_malloc 00:22:27.244 02:23:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:27.502 [2024-05-15 02:23:15.414105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:27.502 [2024-05-15 02:23:15.414160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.502 [2024-05-15 02:23:15.414743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcf4780 00:22:27.502 [2024-05-15 02:23:15.414769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.502 [2024-05-15 02:23:15.415454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.502 [2024-05-15 02:23:15.415487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:27.502 
BaseBdev1 00:22:27.502 02:23:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:27.502 02:23:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:27.760 BaseBdev2_malloc 00:22:27.760 02:23:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:28.018 [2024-05-15 02:23:15.874124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:28.018 [2024-05-15 02:23:15.874175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.018 [2024-05-15 02:23:15.874201] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcf4c80 00:22:28.018 [2024-05-15 02:23:15.874207] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.018 [2024-05-15 02:23:15.874689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.018 [2024-05-15 02:23:15.874716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:28.018 BaseBdev2 00:22:28.018 02:23:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:28.276 spare_malloc 00:22:28.276 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:28.276 spare_delay 00:22:28.276 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:28.537 [2024-05-15 02:23:16.498157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:28.537 [2024-05-15 02:23:16.498212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.538 [2024-05-15 02:23:16.498238] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcf5400 00:22:28.538 [2024-05-15 02:23:16.498245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.538 [2024-05-15 02:23:16.498773] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.538 [2024-05-15 02:23:16.498807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:28.538 spare 00:22:28.538 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:28.810 [2024-05-15 02:23:16.682165] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.810 [2024-05-15 02:23:16.682582] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:28.810 [2024-05-15 02:23:16.682660] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bcf5680 00:22:28.810 [2024-05-15 02:23:16.682665] 
bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:28.810 [2024-05-15 02:23:16.682695] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd57e20 00:22:28.810 [2024-05-15 02:23:16.682706] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bcf5680 00:22:28.810 [2024-05-15 02:23:16.682709] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bcf5680 00:22:28.810 [2024-05-15 02:23:16.682717] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.810 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.069 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:29.069 "name": "raid_bdev1", 00:22:29.069 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:29.069 "strip_size_kb": 0, 00:22:29.069 "state": "online", 00:22:29.069 "raid_level": "raid1", 00:22:29.070 "superblock": true, 00:22:29.070 "num_base_bdevs": 2, 00:22:29.070 "num_base_bdevs_discovered": 2, 00:22:29.070 "num_base_bdevs_operational": 2, 00:22:29.070 "base_bdevs_list": [ 00:22:29.070 { 00:22:29.070 "name": "BaseBdev1", 00:22:29.070 "uuid": "bd35a790-054c-1150-87d6-676f4ba76273", 00:22:29.070 "is_configured": true, 00:22:29.070 "data_offset": 256, 00:22:29.070 "data_size": 7936 00:22:29.070 }, 00:22:29.070 { 00:22:29.070 "name": "BaseBdev2", 00:22:29.070 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:29.070 "is_configured": true, 00:22:29.070 "data_offset": 256, 00:22:29.070 "data_size": 7936 00:22:29.070 } 00:22:29.070 ] 00:22:29.070 }' 00:22:29.070 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:29.070 02:23:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:29.329 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:29.329 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:29.587 [2024-05-15 02:23:17.394233] bdev_raid.c:1124:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.587 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:29.587 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.587 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:29.847 [2024-05-15 02:23:17.842225] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.847 02:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.106 02:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.106 "name": "raid_bdev1", 00:22:30.106 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:30.106 "strip_size_kb": 0, 00:22:30.106 "state": "online", 00:22:30.106 "raid_level": "raid1", 00:22:30.106 "superblock": true, 00:22:30.106 "num_base_bdevs": 2, 00:22:30.106 "num_base_bdevs_discovered": 1, 00:22:30.106 "num_base_bdevs_operational": 1, 00:22:30.106 "base_bdevs_list": [ 00:22:30.106 { 00:22:30.106 "name": 
null, 00:22:30.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.106 "is_configured": false, 00:22:30.106 "data_offset": 256, 00:22:30.106 "data_size": 7936 00:22:30.106 }, 00:22:30.106 { 00:22:30.106 "name": "BaseBdev2", 00:22:30.106 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:30.106 "is_configured": true, 00:22:30.106 "data_offset": 256, 00:22:30.106 "data_size": 7936 00:22:30.106 } 00:22:30.106 ] 00:22:30.106 }' 00:22:30.106 02:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.106 02:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:30.365 02:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:30.623 [2024-05-15 02:23:18.550269] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.623 [2024-05-15 02:23:18.550406] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd57ec0 00:22:30.623 [2024-05-15 02:23:18.551215] bdev_raid.c:2793:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:30.623 02:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:31.624 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.882 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.882 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.882 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.882 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.882 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.882 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.143 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.143 "name": "raid_bdev1", 00:22:32.143 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:32.143 "strip_size_kb": 0, 00:22:32.143 "state": "online", 00:22:32.143 "raid_level": "raid1", 00:22:32.143 "superblock": true, 00:22:32.143 "num_base_bdevs": 2, 00:22:32.143 "num_base_bdevs_discovered": 2, 00:22:32.143 "num_base_bdevs_operational": 2, 00:22:32.143 "process": { 00:22:32.143 "type": "rebuild", 00:22:32.143 "target": "spare", 00:22:32.143 "progress": { 00:22:32.143 "blocks": 3328, 00:22:32.143 "percent": 41 00:22:32.143 } 00:22:32.143 }, 00:22:32.143 "base_bdevs_list": [ 00:22:32.143 { 00:22:32.143 "name": "spare", 00:22:32.143 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:32.143 "is_configured": true, 00:22:32.143 "data_offset": 256, 00:22:32.143 "data_size": 7936 00:22:32.143 }, 00:22:32.143 { 00:22:32.143 "name": "BaseBdev2", 00:22:32.143 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:32.143 "is_configured": true, 00:22:32.143 "data_offset": 256, 00:22:32.143 "data_size": 7936 00:22:32.143 } 00:22:32.143 ] 00:22:32.143 }' 00:22:32.143 02:23:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.143 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.143 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.143 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.143 02:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:32.401 [2024-05-15 02:23:20.182667] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:32.401 [2024-05-15 02:23:20.258270] bdev_raid.c:2486:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:22:32.401 [2024-05-15 02:23:20.258322] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.401 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.660 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.660 "name": "raid_bdev1", 00:22:32.660 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:32.660 "strip_size_kb": 0, 00:22:32.660 "state": "online", 00:22:32.660 "raid_level": "raid1", 00:22:32.660 "superblock": true, 00:22:32.660 "num_base_bdevs": 2, 00:22:32.660 "num_base_bdevs_discovered": 1, 00:22:32.660 "num_base_bdevs_operational": 1, 00:22:32.660 "base_bdevs_list": [ 00:22:32.660 { 00:22:32.660 "name": null, 00:22:32.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.660 "is_configured": false, 00:22:32.660 "data_offset": 256, 00:22:32.660 "data_size": 7936 00:22:32.660 }, 00:22:32.660 { 00:22:32.660 "name": "BaseBdev2", 00:22:32.660 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:32.660 "is_configured": true, 
00:22:32.660 "data_offset": 256, 00:22:32.660 "data_size": 7936 00:22:32.660 } 00:22:32.660 ] 00:22:32.660 }' 00:22:32.660 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.660 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:32.919 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:32.919 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.919 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:32.919 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:32.919 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.919 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.919 02:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.178 02:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.178 "name": "raid_bdev1", 00:22:33.178 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:33.178 "strip_size_kb": 0, 00:22:33.178 "state": "online", 00:22:33.178 "raid_level": "raid1", 00:22:33.178 "superblock": true, 00:22:33.178 "num_base_bdevs": 2, 00:22:33.178 "num_base_bdevs_discovered": 1, 00:22:33.178 "num_base_bdevs_operational": 1, 00:22:33.178 "base_bdevs_list": [ 00:22:33.178 { 00:22:33.178 "name": null, 00:22:33.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.178 "is_configured": false, 00:22:33.178 "data_offset": 256, 00:22:33.178 "data_size": 7936 00:22:33.178 }, 00:22:33.178 { 00:22:33.178 "name": "BaseBdev2", 00:22:33.178 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:33.178 "is_configured": true, 00:22:33.178 "data_offset": 256, 00:22:33.178 "data_size": 7936 00:22:33.178 } 00:22:33.178 ] 00:22:33.178 }' 00:22:33.178 02:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.178 02:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:33.178 02:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.178 02:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:33.178 02:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:33.437 [2024-05-15 02:23:21.252792] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:33.437 [2024-05-15 02:23:21.252910] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd57e20 00:22:33.437 [2024-05-15 02:23:21.253538] bdev_raid.c:2793:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:33.437 02:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:34.373 02:23:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.373 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.373 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.373 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.373 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.373 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.373 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.632 "name": "raid_bdev1", 00:22:34.632 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:34.632 "strip_size_kb": 0, 00:22:34.632 "state": "online", 00:22:34.632 "raid_level": "raid1", 00:22:34.632 "superblock": true, 00:22:34.632 "num_base_bdevs": 2, 00:22:34.632 "num_base_bdevs_discovered": 2, 00:22:34.632 "num_base_bdevs_operational": 2, 00:22:34.632 "process": { 00:22:34.632 "type": "rebuild", 00:22:34.632 "target": "spare", 00:22:34.632 "progress": { 00:22:34.632 "blocks": 3072, 00:22:34.632 "percent": 38 00:22:34.632 } 00:22:34.632 }, 00:22:34.632 "base_bdevs_list": [ 00:22:34.632 { 00:22:34.632 "name": "spare", 00:22:34.632 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:34.632 "is_configured": true, 00:22:34.632 "data_offset": 256, 00:22:34.632 "data_size": 7936 00:22:34.632 }, 00:22:34.632 { 00:22:34.632 "name": "BaseBdev2", 00:22:34.632 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:34.632 "is_configured": true, 00:22:34.632 "data_offset": 256, 00:22:34.632 "data_size": 7936 00:22:34.632 } 00:22:34.632 ] 00:22:34.632 }' 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:34.632 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=624 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.632 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.890 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.890 "name": "raid_bdev1", 00:22:34.890 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:34.890 "strip_size_kb": 0, 00:22:34.890 "state": "online", 00:22:34.890 "raid_level": "raid1", 00:22:34.890 "superblock": true, 00:22:34.890 "num_base_bdevs": 2, 00:22:34.890 "num_base_bdevs_discovered": 2, 00:22:34.890 "num_base_bdevs_operational": 2, 00:22:34.890 "process": { 00:22:34.890 "type": "rebuild", 00:22:34.890 "target": "spare", 00:22:34.890 "progress": { 00:22:34.890 "blocks": 3840, 00:22:34.890 "percent": 48 00:22:34.890 } 00:22:34.890 }, 00:22:34.890 "base_bdevs_list": [ 00:22:34.890 { 00:22:34.890 "name": "spare", 00:22:34.890 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:34.890 "is_configured": true, 00:22:34.890 "data_offset": 256, 00:22:34.890 "data_size": 7936 00:22:34.890 }, 00:22:34.890 { 00:22:34.890 "name": "BaseBdev2", 00:22:34.890 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:34.890 "is_configured": true, 00:22:34.890 "data_offset": 256, 00:22:34.890 "data_size": 7936 00:22:34.890 } 00:22:34.890 ] 00:22:34.890 }' 00:22:34.890 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.890 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.890 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.890 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.890 02:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.264 02:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.264 02:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:36.264 "name": "raid_bdev1", 00:22:36.264 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:36.264 "strip_size_kb": 0, 00:22:36.264 "state": "online", 00:22:36.264 "raid_level": "raid1", 00:22:36.264 "superblock": true, 00:22:36.264 "num_base_bdevs": 2, 00:22:36.264 "num_base_bdevs_discovered": 2, 00:22:36.264 "num_base_bdevs_operational": 2, 00:22:36.264 "process": { 00:22:36.264 "type": "rebuild", 00:22:36.264 "target": "spare", 00:22:36.264 "progress": { 00:22:36.264 "blocks": 7168, 00:22:36.264 "percent": 90 00:22:36.264 } 00:22:36.264 }, 00:22:36.264 "base_bdevs_list": [ 00:22:36.264 { 00:22:36.264 "name": "spare", 00:22:36.264 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:36.264 "is_configured": true, 00:22:36.264 "data_offset": 256, 00:22:36.264 "data_size": 7936 00:22:36.264 }, 00:22:36.264 { 00:22:36.264 "name": "BaseBdev2", 00:22:36.264 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:36.264 "is_configured": true, 00:22:36.264 "data_offset": 256, 00:22:36.264 "data_size": 7936 00:22:36.264 } 00:22:36.264 ] 00:22:36.264 }' 00:22:36.264 02:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:36.264 02:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:36.264 02:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:36.264 02:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:36.264 02:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:36.576 [2024-05-15 02:23:24.366258] bdev_raid.c:2757:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:36.576 [2024-05-15 02:23:24.366296] bdev_raid.c:2476:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:36.576 [2024-05-15 02:23:24.366349] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:37.510 "name": "raid_bdev1", 00:22:37.510 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:37.510 "strip_size_kb": 0, 00:22:37.510 "state": "online", 00:22:37.510 "raid_level": "raid1", 00:22:37.510 "superblock": true, 00:22:37.510 "num_base_bdevs": 2, 00:22:37.510 "num_base_bdevs_discovered": 2, 00:22:37.510 "num_base_bdevs_operational": 2, 00:22:37.510 "base_bdevs_list": [ 00:22:37.510 { 00:22:37.510 "name": "spare", 00:22:37.510 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:37.510 "is_configured": true, 00:22:37.510 "data_offset": 256, 00:22:37.510 "data_size": 7936 00:22:37.510 }, 00:22:37.510 { 00:22:37.510 "name": "BaseBdev2", 00:22:37.510 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:37.510 "is_configured": true, 00:22:37.510 "data_offset": 256, 00:22:37.510 "data_size": 7936 00:22:37.510 } 00:22:37.510 ] 00:22:37.510 }' 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.510 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:37.768 "name": "raid_bdev1", 00:22:37.768 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:37.768 "strip_size_kb": 0, 00:22:37.768 "state": "online", 00:22:37.768 "raid_level": "raid1", 00:22:37.768 "superblock": true, 00:22:37.768 "num_base_bdevs": 2, 00:22:37.768 "num_base_bdevs_discovered": 2, 00:22:37.768 "num_base_bdevs_operational": 2, 00:22:37.768 "base_bdevs_list": [ 00:22:37.768 { 00:22:37.768 "name": "spare", 00:22:37.768 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:37.768 "is_configured": true, 00:22:37.768 "data_offset": 256, 00:22:37.768 "data_size": 7936 00:22:37.768 }, 00:22:37.768 { 00:22:37.768 "name": "BaseBdev2", 00:22:37.768 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:37.768 
"is_configured": true, 00:22:37.768 "data_offset": 256, 00:22:37.768 "data_size": 7936 00:22:37.768 } 00:22:37.768 ] 00:22:37.768 }' 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.768 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.025 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.025 "name": "raid_bdev1", 00:22:38.025 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:38.025 "strip_size_kb": 0, 00:22:38.025 "state": "online", 00:22:38.025 "raid_level": "raid1", 00:22:38.025 "superblock": true, 00:22:38.025 "num_base_bdevs": 2, 00:22:38.025 "num_base_bdevs_discovered": 2, 00:22:38.025 "num_base_bdevs_operational": 2, 00:22:38.025 "base_bdevs_list": [ 00:22:38.025 { 00:22:38.025 "name": "spare", 00:22:38.025 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:38.025 "is_configured": true, 00:22:38.025 "data_offset": 256, 00:22:38.025 "data_size": 7936 00:22:38.025 }, 00:22:38.025 { 00:22:38.025 "name": "BaseBdev2", 00:22:38.025 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:38.025 "is_configured": true, 00:22:38.025 "data_offset": 256, 00:22:38.025 "data_size": 7936 00:22:38.025 } 00:22:38.025 ] 00:22:38.025 }' 00:22:38.025 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.025 02:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:38.283 02:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:38.541 [2024-05-15 02:23:26.525628] bdev_raid.c:2326:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.541 [2024-05-15 02:23:26.525652] bdev_raid.c:1861:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:38.541 [2024-05-15 02:23:26.525697] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.541 [2024-05-15 02:23:26.525711] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.541 [2024-05-15 02:23:26.525715] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcf5680 name raid_bdev1, state offline 00:22:38.541 02:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.541 02:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:22:38.800 02:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:38.800 02:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:38.800 02:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:38.800 02:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:39.059 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:39.317 [2024-05-15 02:23:27.197676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:39.317 [2024-05-15 02:23:27.197726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.317 [2024-05-15 02:23:27.197752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcf5400 00:22:39.317 [2024-05-15 02:23:27.197768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.317 [2024-05-15 02:23:27.198285] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.317 [2024-05-15 02:23:27.198317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:39.317 [2024-05-15 02:23:27.198336] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:39.317 [2024-05-15 02:23:27.198347] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.317 [2024-05-15 02:23:27.198369] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.317 spare 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:39.317 02:23:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.317 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.317 [2024-05-15 02:23:27.298389] bdev_raid.c:1711:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bcf5680 00:22:39.317 [2024-05-15 02:23:27.298411] bdev_raid.c:1713:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:39.318 [2024-05-15 02:23:27.298442] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd57e20 00:22:39.318 [2024-05-15 02:23:27.298458] bdev_raid.c:1741:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bcf5680 00:22:39.318 [2024-05-15 02:23:27.298461] bdev_raid.c:1743:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bcf5680 00:22:39.318 [2024-05-15 02:23:27.298474] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.575 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.575 "name": "raid_bdev1", 00:22:39.575 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:39.575 "strip_size_kb": 0, 00:22:39.575 "state": "online", 00:22:39.575 "raid_level": "raid1", 00:22:39.575 "superblock": true, 00:22:39.575 "num_base_bdevs": 2, 00:22:39.575 "num_base_bdevs_discovered": 2, 00:22:39.575 "num_base_bdevs_operational": 2, 00:22:39.575 "base_bdevs_list": [ 00:22:39.575 { 00:22:39.575 "name": "spare", 00:22:39.575 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:39.575 "is_configured": true, 00:22:39.575 "data_offset": 256, 00:22:39.575 "data_size": 7936 00:22:39.575 }, 00:22:39.575 { 00:22:39.575 "name": "BaseBdev2", 00:22:39.575 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:39.575 "is_configured": true, 00:22:39.575 "data_offset": 256, 00:22:39.575 "data_size": 7936 00:22:39.575 } 00:22:39.575 ] 00:22:39.575 }' 00:22:39.575 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.575 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:40.142 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:40.142 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:40.142 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:40.142 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:40.142 02:23:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:40.142 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.142 02:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.142 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.142 "name": "raid_bdev1", 00:22:40.142 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:40.142 "strip_size_kb": 0, 00:22:40.142 "state": "online", 00:22:40.142 "raid_level": "raid1", 00:22:40.142 "superblock": true, 00:22:40.142 "num_base_bdevs": 2, 00:22:40.142 "num_base_bdevs_discovered": 2, 00:22:40.142 "num_base_bdevs_operational": 2, 00:22:40.142 "base_bdevs_list": [ 00:22:40.142 { 00:22:40.142 "name": "spare", 00:22:40.142 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:40.142 "is_configured": true, 00:22:40.142 "data_offset": 256, 00:22:40.142 "data_size": 7936 00:22:40.142 }, 00:22:40.142 { 00:22:40.142 "name": "BaseBdev2", 00:22:40.142 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:40.142 "is_configured": true, 00:22:40.142 "data_offset": 256, 00:22:40.142 "data_size": 7936 00:22:40.142 } 00:22:40.142 ] 00:22:40.142 }' 00:22:40.142 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.142 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:40.142 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.142 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:40.401 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.401 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:40.401 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.401 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:40.660 [2024-05-15 02:23:28.533758] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.660 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.919 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.919 "name": "raid_bdev1", 00:22:40.919 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:40.919 "strip_size_kb": 0, 00:22:40.919 "state": "online", 00:22:40.919 "raid_level": "raid1", 00:22:40.919 "superblock": true, 00:22:40.919 "num_base_bdevs": 2, 00:22:40.919 "num_base_bdevs_discovered": 1, 00:22:40.919 "num_base_bdevs_operational": 1, 00:22:40.919 "base_bdevs_list": [ 00:22:40.919 { 00:22:40.919 "name": null, 00:22:40.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.919 "is_configured": false, 00:22:40.919 "data_offset": 256, 00:22:40.919 "data_size": 7936 00:22:40.919 }, 00:22:40.919 { 00:22:40.919 "name": "BaseBdev2", 00:22:40.919 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:40.919 "is_configured": true, 00:22:40.919 "data_offset": 256, 00:22:40.919 "data_size": 7936 00:22:40.919 } 00:22:40.919 ] 00:22:40.919 }' 00:22:40.919 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.919 02:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.177 02:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:41.435 [2024-05-15 02:23:29.353817] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:41.435 [2024-05-15 02:23:29.353910] bdev_raid.c:3504:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:41.435 [2024-05-15 02:23:29.353915] bdev_raid.c:3560:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
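The trace above shows the spare base bdev being removed from raid_bdev1 and then handed back via bdev_raid_add_base_bdev; because the spare's on-disk superblock carries an older seq_number (4 vs 5), bdev_raid re-adds it and starts another rebuild, which the test then polls for. Below is a minimal plain-shell sketch of that same remove/re-add/wait cycle driven over the RPC socket used in this run. The RPC commands, socket path, and bdev names are taken from the log; the wait_for_rebuild helper and its 60-second budget are assumptions introduced here for illustration and are not part of the SPDK test scripts.

    # Sketch only; assumes the spdk-raid.sock RPC socket and the spare/raid_bdev1 names seen in this log.
    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Drop the spare from the array, then add it back; its older superblock seq_number
    # makes bdev_raid re-add it and kick off a rebuild (the NOTICE lines above).
    $rpc bdev_raid_remove_base_bdev spare
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare

    # Hypothetical helper mirroring the test's sleep-1 polling loop: wait until the
    # rebuild process disappears from bdev_raid_get_bdevs output for raid_bdev1.
    wait_for_rebuild() {
        local timeout=$((SECONDS + 60)) ptype
        while ((SECONDS < timeout)); do
            ptype=$($rpc bdev_raid_get_bdevs all \
                | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
            [[ $ptype == none ]] && return 0
            sleep 1
        done
        return 1
    }

The jq expression is the same one the test uses for verify_raid_bdev_process: while a rebuild is running, .process.type reports "rebuild" with a "spare" target and a progress block; once it finishes, the process object is absent and the filter falls back to "none".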
00:22:41.435 [2024-05-15 02:23:29.353948] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:41.435 [2024-05-15 02:23:29.354017] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd57ec0 00:22:41.435 [2024-05-15 02:23:29.354461] bdev_raid.c:2793:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:41.435 02:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # sleep 1 00:22:42.809 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:42.809 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:42.809 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:42.809 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:42.810 "name": "raid_bdev1", 00:22:42.810 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:42.810 "strip_size_kb": 0, 00:22:42.810 "state": "online", 00:22:42.810 "raid_level": "raid1", 00:22:42.810 "superblock": true, 00:22:42.810 "num_base_bdevs": 2, 00:22:42.810 "num_base_bdevs_discovered": 2, 00:22:42.810 "num_base_bdevs_operational": 2, 00:22:42.810 "process": { 00:22:42.810 "type": "rebuild", 00:22:42.810 "target": "spare", 00:22:42.810 "progress": { 00:22:42.810 "blocks": 3328, 00:22:42.810 "percent": 41 00:22:42.810 } 00:22:42.810 }, 00:22:42.810 "base_bdevs_list": [ 00:22:42.810 { 00:22:42.810 "name": "spare", 00:22:42.810 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:42.810 "is_configured": true, 00:22:42.810 "data_offset": 256, 00:22:42.810 "data_size": 7936 00:22:42.810 }, 00:22:42.810 { 00:22:42.810 "name": "BaseBdev2", 00:22:42.810 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:42.810 "is_configured": true, 00:22:42.810 "data_offset": 256, 00:22:42.810 "data_size": 7936 00:22:42.810 } 00:22:42.810 ] 00:22:42.810 }' 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:42.810 02:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:43.069 [2024-05-15 02:23:31.039666] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:43.069 [2024-05-15 02:23:31.062245] 
bdev_raid.c:2486:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:22:43.069 [2024-05-15 02:23:31.062298] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.069 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.635 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.635 "name": "raid_bdev1", 00:22:43.635 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:43.635 "strip_size_kb": 0, 00:22:43.635 "state": "online", 00:22:43.635 "raid_level": "raid1", 00:22:43.635 "superblock": true, 00:22:43.635 "num_base_bdevs": 2, 00:22:43.635 "num_base_bdevs_discovered": 1, 00:22:43.635 "num_base_bdevs_operational": 1, 00:22:43.635 "base_bdevs_list": [ 00:22:43.635 { 00:22:43.635 "name": null, 00:22:43.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.635 "is_configured": false, 00:22:43.635 "data_offset": 256, 00:22:43.635 "data_size": 7936 00:22:43.635 }, 00:22:43.635 { 00:22:43.635 "name": "BaseBdev2", 00:22:43.635 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:43.635 "is_configured": true, 00:22:43.635 "data_offset": 256, 00:22:43.635 "data_size": 7936 00:22:43.635 } 00:22:43.635 ] 00:22:43.635 }' 00:22:43.635 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.635 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.894 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:43.894 [2024-05-15 02:23:31.868763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:43.894 [2024-05-15 02:23:31.868820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.894 [2024-05-15 02:23:31.868847] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcf5400 00:22:43.894 [2024-05-15 02:23:31.868855] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.894 [2024-05-15 02:23:31.868912] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.894 [2024-05-15 02:23:31.868927] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:43.894 [2024-05-15 02:23:31.868944] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:43.894 [2024-05-15 02:23:31.868948] bdev_raid.c:3504:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:43.894 [2024-05-15 02:23:31.868952] bdev_raid.c:3560:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:43.894 [2024-05-15 02:23:31.868962] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:43.894 [2024-05-15 02:23:31.869028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd57e20 00:22:43.894 [2024-05-15 02:23:31.869444] bdev_raid.c:2793:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:43.894 spare 00:22:43.894 02:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # sleep 1 00:22:45.266 02:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.266 02:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.266 02:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:45.266 02:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:45.266 02:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.266 02:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.266 02:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.266 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.266 "name": "raid_bdev1", 00:22:45.266 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:45.266 "strip_size_kb": 0, 00:22:45.267 "state": "online", 00:22:45.267 "raid_level": "raid1", 00:22:45.267 "superblock": true, 00:22:45.267 "num_base_bdevs": 2, 00:22:45.267 "num_base_bdevs_discovered": 2, 00:22:45.267 "num_base_bdevs_operational": 2, 00:22:45.267 "process": { 00:22:45.267 "type": "rebuild", 00:22:45.267 "target": "spare", 00:22:45.267 "progress": { 00:22:45.267 "blocks": 3328, 00:22:45.267 "percent": 41 00:22:45.267 } 00:22:45.267 }, 00:22:45.267 "base_bdevs_list": [ 00:22:45.267 { 00:22:45.267 "name": "spare", 00:22:45.267 "uuid": "bc7567ae-9d63-8256-a089-3c215991ccf4", 00:22:45.267 "is_configured": true, 00:22:45.267 "data_offset": 256, 00:22:45.267 "data_size": 7936 00:22:45.267 }, 00:22:45.267 { 00:22:45.267 "name": "BaseBdev2", 00:22:45.267 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:45.267 "is_configured": true, 00:22:45.267 "data_offset": 256, 00:22:45.267 "data_size": 7936 00:22:45.267 } 00:22:45.267 ] 
00:22:45.267 }' 00:22:45.267 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.267 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.267 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.267 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.267 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:45.524 [2024-05-15 02:23:33.465229] bdev_raid.c:2127:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.524 [2024-05-15 02:23:33.476002] bdev_raid.c:2486:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:22:45.524 [2024-05-15 02:23:33.476038] bdev_raid.c: 312:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.524 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.525 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.781 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.781 "name": "raid_bdev1", 00:22:45.781 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:45.781 "strip_size_kb": 0, 00:22:45.781 "state": "online", 00:22:45.781 "raid_level": "raid1", 00:22:45.781 "superblock": true, 00:22:45.781 "num_base_bdevs": 2, 00:22:45.781 "num_base_bdevs_discovered": 1, 00:22:45.781 "num_base_bdevs_operational": 1, 00:22:45.781 "base_bdevs_list": [ 00:22:45.781 { 00:22:45.781 "name": null, 00:22:45.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.781 "is_configured": false, 00:22:45.781 "data_offset": 256, 00:22:45.781 "data_size": 7936 00:22:45.781 }, 00:22:45.781 { 00:22:45.781 "name": "BaseBdev2", 00:22:45.781 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 
00:22:45.781 "is_configured": true, 00:22:45.781 "data_offset": 256, 00:22:45.781 "data_size": 7936 00:22:45.781 } 00:22:45.781 ] 00:22:45.781 }' 00:22:45.781 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.781 02:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.346 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:46.346 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.346 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:46.346 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:46.346 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.346 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.346 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.605 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:46.605 "name": "raid_bdev1", 00:22:46.605 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:46.605 "strip_size_kb": 0, 00:22:46.605 "state": "online", 00:22:46.605 "raid_level": "raid1", 00:22:46.605 "superblock": true, 00:22:46.605 "num_base_bdevs": 2, 00:22:46.605 "num_base_bdevs_discovered": 1, 00:22:46.605 "num_base_bdevs_operational": 1, 00:22:46.605 "base_bdevs_list": [ 00:22:46.605 { 00:22:46.605 "name": null, 00:22:46.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.605 "is_configured": false, 00:22:46.605 "data_offset": 256, 00:22:46.605 "data_size": 7936 00:22:46.605 }, 00:22:46.605 { 00:22:46.605 "name": "BaseBdev2", 00:22:46.605 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:46.605 "is_configured": true, 00:22:46.605 "data_offset": 256, 00:22:46.605 "data_size": 7936 00:22:46.605 } 00:22:46.605 ] 00:22:46.605 }' 00:22:46.605 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:46.605 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:46.605 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:46.605 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:46.605 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:46.864 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:47.122 [2024-05-15 02:23:34.922524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:47.122 [2024-05-15 02:23:34.922595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.122 [2024-05-15 02:23:34.922620] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcf4780 00:22:47.122 [2024-05-15 02:23:34.922628] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.122 [2024-05-15 02:23:34.922685] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.122 [2024-05-15 02:23:34.922692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:47.122 [2024-05-15 02:23:34.922708] bdev_raid.c:3692:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:47.122 [2024-05-15 02:23:34.922712] bdev_raid.c:3504:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:47.122 [2024-05-15 02:23:34.922716] bdev_raid.c:3521:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:47.122 BaseBdev1 00:22:47.122 02:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # sleep 1 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.054 02:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.313 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.313 "name": "raid_bdev1", 00:22:48.313 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:48.313 "strip_size_kb": 0, 00:22:48.313 "state": "online", 00:22:48.313 "raid_level": "raid1", 00:22:48.313 "superblock": true, 00:22:48.313 "num_base_bdevs": 2, 00:22:48.313 "num_base_bdevs_discovered": 1, 00:22:48.313 "num_base_bdevs_operational": 1, 00:22:48.313 "base_bdevs_list": [ 00:22:48.313 { 00:22:48.313 "name": null, 00:22:48.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.313 "is_configured": false, 00:22:48.313 "data_offset": 256, 00:22:48.313 "data_size": 7936 00:22:48.313 }, 00:22:48.313 { 00:22:48.313 "name": "BaseBdev2", 00:22:48.313 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:48.313 "is_configured": true, 00:22:48.313 "data_offset": 256, 00:22:48.313 
"data_size": 7936 00:22:48.313 } 00:22:48.313 ] 00:22:48.313 }' 00:22:48.313 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.313 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.571 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.571 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.571 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:48.571 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:48.571 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.571 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.571 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.828 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.828 "name": "raid_bdev1", 00:22:48.828 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:48.828 "strip_size_kb": 0, 00:22:48.828 "state": "online", 00:22:48.828 "raid_level": "raid1", 00:22:48.828 "superblock": true, 00:22:48.828 "num_base_bdevs": 2, 00:22:48.828 "num_base_bdevs_discovered": 1, 00:22:48.828 "num_base_bdevs_operational": 1, 00:22:48.828 "base_bdevs_list": [ 00:22:48.828 { 00:22:48.828 "name": null, 00:22:48.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.828 "is_configured": false, 00:22:48.829 "data_offset": 256, 00:22:48.829 "data_size": 7936 00:22:48.829 }, 00:22:48.829 { 00:22:48.829 "name": "BaseBdev2", 00:22:48.829 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:48.829 "is_configured": true, 00:22:48.829 "data_offset": 256, 00:22:48.829 "data_size": 7936 00:22:48.829 } 00:22:48.829 ] 00:22:48.829 }' 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.829 02:23:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.829 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.085 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.085 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.085 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:49.085 02:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:49.085 [2024-05-15 02:23:37.102644] bdev_raid.c:3138:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.085 [2024-05-15 02:23:37.102705] bdev_raid.c:3504:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:49.085 [2024-05-15 02:23:37.102709] bdev_raid.c:3521:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:49.085 request: 00:22:49.085 { 00:22:49.085 "raid_bdev": "raid_bdev1", 00:22:49.085 "base_bdev": "BaseBdev1", 00:22:49.085 "method": "bdev_raid_add_base_bdev", 00:22:49.085 "req_id": 1 00:22:49.085 } 00:22:49.085 Got JSON-RPC error response 00:22:49.085 response: 00:22:49.085 { 00:22:49.085 "code": -22, 00:22:49.085 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:49.085 } 00:22:49.342 02:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:22:49.342 02:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.342 02:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.342 02:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.342 02:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 
-- # local raid_bdev_info 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.274 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.531 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:50.531 "name": "raid_bdev1", 00:22:50.532 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:50.532 "strip_size_kb": 0, 00:22:50.532 "state": "online", 00:22:50.532 "raid_level": "raid1", 00:22:50.532 "superblock": true, 00:22:50.532 "num_base_bdevs": 2, 00:22:50.532 "num_base_bdevs_discovered": 1, 00:22:50.532 "num_base_bdevs_operational": 1, 00:22:50.532 "base_bdevs_list": [ 00:22:50.532 { 00:22:50.532 "name": null, 00:22:50.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.532 "is_configured": false, 00:22:50.532 "data_offset": 256, 00:22:50.532 "data_size": 7936 00:22:50.532 }, 00:22:50.532 { 00:22:50.532 "name": "BaseBdev2", 00:22:50.532 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:50.532 "is_configured": true, 00:22:50.532 "data_offset": 256, 00:22:50.532 "data_size": 7936 00:22:50.532 } 00:22:50.532 ] 00:22:50.532 }' 00:22:50.532 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:50.532 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.789 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:50.789 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:50.789 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:50.789 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:50.789 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:50.789 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.789 02:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.355 "name": "raid_bdev1", 00:22:51.355 "uuid": "169658c6-1262-11ef-99fd-bfc7c66e2865", 00:22:51.355 "strip_size_kb": 0, 00:22:51.355 "state": "online", 00:22:51.355 "raid_level": "raid1", 00:22:51.355 "superblock": true, 00:22:51.355 "num_base_bdevs": 2, 00:22:51.355 "num_base_bdevs_discovered": 1, 00:22:51.355 "num_base_bdevs_operational": 1, 00:22:51.355 "base_bdevs_list": [ 00:22:51.355 { 00:22:51.355 "name": null, 00:22:51.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.355 "is_configured": false, 
00:22:51.355 "data_offset": 256, 00:22:51.355 "data_size": 7936 00:22:51.355 }, 00:22:51.355 { 00:22:51.355 "name": "BaseBdev2", 00:22:51.355 "uuid": "88c9639e-6e85-765a-95c8-2ecbef27565e", 00:22:51.355 "is_configured": true, 00:22:51.355 "data_offset": 256, 00:22:51.355 "data_size": 7936 00:22:51.355 } 00:22:51.355 ] 00:22:51.355 }' 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # killprocess 65588 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 65588 ']' 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 65588 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 65588 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:22:51.355 killing process with pid 65588 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65588' 00:22:51.355 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 65588 00:22:51.355 Received shutdown signal, test time was about 60.000000 seconds 00:22:51.355 00:22:51.355 Latency(us) 00:22:51.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.355 =================================================================================================================== 00:22:51.355 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.356 [2024-05-15 02:23:39.125158] bdev_raid.c:1375:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:51.356 [2024-05-15 02:23:39.125195] bdev_raid.c: 453:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.356 [2024-05-15 02:23:39.125207] bdev_raid.c: 430:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.356 [2024-05-15 02:23:39.125212] bdev_raid.c: 348:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcf5680 name raid_bdev1, state offline 00:22:51.356 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 65588 00:22:51.356 [2024-05-15 02:23:39.139618] bdev_raid.c:1392:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:51.356 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@785 -- # return 0 00:22:51.356 00:22:51.356 real 0m25.275s 00:22:51.356 
user 0m38.859s 00:22:51.356 sys 0m2.383s 00:22:51.356 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:51.356 02:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.356 ************************************ 00:22:51.356 END TEST raid_rebuild_test_sb_md_interleaved 00:22:51.356 ************************************ 00:22:51.356 02:23:39 bdev_raid -- bdev/bdev_raid.sh@850 -- # rm -f /raidrandtest 00:22:51.356 00:22:51.356 real 10m11.236s 00:22:51.356 user 18m17.779s 00:22:51.356 sys 1m27.619s 00:22:51.356 02:23:39 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:51.356 ************************************ 00:22:51.356 END TEST bdev_raid 00:22:51.356 ************************************ 00:22:51.356 02:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:51.356 02:23:39 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:22:51.356 02:23:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:51.356 02:23:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:51.356 02:23:39 -- common/autotest_common.sh@10 -- # set +x 00:22:51.356 ************************************ 00:22:51.356 START TEST bdevperf_config 00:22:51.356 ************************************ 00:22:51.356 02:23:39 bdevperf_config -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:22:51.613 * Looking for test storage... 00:22:51.613 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:22:51.613 02:23:39 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:22:51.614 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:22:51.614 00:22:51.614 02:23:39 bdevperf_config -- 
bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:22:51.614 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:22:51.614 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:22:51.614 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:51.614 02:23:39 bdevperf_config -- bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:54.894 02:23:42 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-15 02:23:39.564739] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:54.894 [2024-05-15 02:23:39.564895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:54.894 Using job config with 4 jobs 00:22:54.894 EAL: TSC is not safe to use in SMP mode 00:22:54.894 EAL: TSC is not invariant 00:22:54.894 [2024-05-15 02:23:40.010614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.894 [2024-05-15 02:23:40.091421] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:22:54.894 [2024-05-15 02:23:40.093701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.894 cpumask for '\''job0'\'' is too big 00:22:54.894 cpumask for '\''job1'\'' is too big 00:22:54.894 cpumask for '\''job2'\'' is too big 00:22:54.894 cpumask for '\''job3'\'' is too big 00:22:54.894 Running I/O for 2 seconds... 00:22:54.894 00:22:54.894 Latency(us) 00:22:54.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401447.22 392.04 0.00 0.00 637.46 168.72 1334.12 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401431.97 392.02 0.00 0.00 637.35 173.59 1123.47 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401465.41 392.06 0.00 0.00 637.17 170.67 1139.08 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401445.95 392.04 0.00 0.00 637.08 161.89 1131.27 00:22:54.894 =================================================================================================================== 00:22:54.894 Total : 1605790.55 1568.15 0.00 0.00 637.26 161.89 1334.12' 00:22:54.894 02:23:42 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-15 02:23:39.564739] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:54.894 [2024-05-15 02:23:39.564895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:54.894 Using job config with 4 jobs 00:22:54.894 EAL: TSC is not safe to use in SMP mode 00:22:54.894 EAL: TSC is not invariant 00:22:54.894 [2024-05-15 02:23:40.010614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.894 [2024-05-15 02:23:40.091421] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:54.894 [2024-05-15 02:23:40.093701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.894 cpumask for '\''job0'\'' is too big 00:22:54.894 cpumask for '\''job1'\'' is too big 00:22:54.894 cpumask for '\''job2'\'' is too big 00:22:54.894 cpumask for '\''job3'\'' is too big 00:22:54.894 Running I/O for 2 seconds... 
00:22:54.894 00:22:54.894 Latency(us) 00:22:54.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401447.22 392.04 0.00 0.00 637.46 168.72 1334.12 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401431.97 392.02 0.00 0.00 637.35 173.59 1123.47 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401465.41 392.06 0.00 0.00 637.17 170.67 1139.08 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401445.95 392.04 0.00 0.00 637.08 161.89 1131.27 00:22:54.894 =================================================================================================================== 00:22:54.894 Total : 1605790.55 1568.15 0.00 0.00 637.26 161.89 1334.12' 00:22:54.894 02:23:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:22:54.894 02:23:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 02:23:39.564739] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:54.894 [2024-05-15 02:23:39.564895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:54.894 Using job config with 4 jobs 00:22:54.894 EAL: TSC is not safe to use in SMP mode 00:22:54.894 EAL: TSC is not invariant 00:22:54.894 [2024-05-15 02:23:40.010614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.894 [2024-05-15 02:23:40.091421] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:54.894 [2024-05-15 02:23:40.093701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.894 cpumask for '\''job0'\'' is too big 00:22:54.894 cpumask for '\''job1'\'' is too big 00:22:54.894 cpumask for '\''job2'\'' is too big 00:22:54.894 cpumask for '\''job3'\'' is too big 00:22:54.894 Running I/O for 2 seconds... 
00:22:54.894 00:22:54.894 Latency(us) 00:22:54.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401447.22 392.04 0.00 0.00 637.46 168.72 1334.12 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401431.97 392.02 0.00 0.00 637.35 173.59 1123.47 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401465.41 392.06 0.00 0.00 637.17 170.67 1139.08 00:22:54.894 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:54.894 Malloc0 : 2.00 401445.95 392.04 0.00 0.00 637.08 161.89 1131.27 00:22:54.894 =================================================================================================================== 00:22:54.894 Total : 1605790.55 1568.15 0.00 0.00 637.26 161.89 1334.12' 00:22:54.894 02:23:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:22:54.894 02:23:42 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:22:54.894 02:23:42 bdevperf_config -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:54.894 [2024-05-15 02:23:42.293695] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:54.894 [2024-05-15 02:23:42.293874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:54.894 EAL: TSC is not safe to use in SMP mode 00:22:54.894 EAL: TSC is not invariant 00:22:54.894 [2024-05-15 02:23:42.836859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.152 [2024-05-15 02:23:42.929947] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:55.152 [2024-05-15 02:23:42.932638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.152 cpumask for 'job0' is too big 00:22:55.152 cpumask for 'job1' is too big 00:22:55.152 cpumask for 'job2' is too big 00:22:55.152 cpumask for 'job3' is too big 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:22:57.676 Running I/O for 2 seconds... 
00:22:57.676 00:22:57.676 Latency(us) 00:22:57.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.676 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:57.676 Malloc0 : 2.00 351882.47 343.64 0.00 0.00 727.23 245.76 1747.62 00:22:57.676 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:57.676 Malloc0 : 2.00 351912.63 343.66 0.00 0.00 726.98 236.01 1607.19 00:22:57.676 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:57.676 Malloc0 : 2.00 351959.30 343.71 0.00 0.00 726.71 236.98 1490.16 00:22:57.676 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:57.676 Malloc0 : 2.00 352030.39 343.78 0.00 0.00 726.39 101.91 1521.37 00:22:57.676 =================================================================================================================== 00:22:57.676 Total : 1407784.78 1374.79 0.00 0.00 726.83 101.91 1747.62' 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:22:57.676 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:22:57.676 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:22:57.676 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:22:57.676 02:23:45 bdevperf_config -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:00.250 02:23:47 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-15 02:23:45.149280] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:00.250 [2024-05-15 02:23:45.149486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:00.250 Using job config with 3 jobs 00:23:00.250 EAL: TSC is not safe to use in SMP mode 00:23:00.250 EAL: TSC is not invariant 00:23:00.250 [2024-05-15 02:23:45.593179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.250 [2024-05-15 02:23:45.687131] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:00.250 [2024-05-15 02:23:45.689821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.250 cpumask for '\''job0'\'' is too big 00:23:00.250 cpumask for '\''job1'\'' is too big 00:23:00.250 cpumask for '\''job2'\'' is too big 00:23:00.250 Running I/O for 2 seconds... 00:23:00.250 00:23:00.250 Latency(us) 00:23:00.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.250 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.250 Malloc0 : 2.00 444458.66 434.04 0.00 0.00 575.70 273.07 1794.43 00:23:00.250 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.250 Malloc0 : 2.00 444477.09 434.06 0.00 0.00 575.51 226.25 1778.83 00:23:00.250 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.250 Malloc0 : 2.00 444461.76 434.04 0.00 0.00 575.41 130.68 1778.83 00:23:00.250 =================================================================================================================== 00:23:00.250 Total : 1333397.51 1302.15 0.00 0.00 575.54 130.68 1794.43' 00:23:00.250 02:23:47 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-15 02:23:45.149280] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:00.250 [2024-05-15 02:23:45.149486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:00.250 Using job config with 3 jobs 00:23:00.250 EAL: TSC is not safe to use in SMP mode 00:23:00.250 EAL: TSC is not invariant 00:23:00.250 [2024-05-15 02:23:45.593179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.250 [2024-05-15 02:23:45.687131] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:00.250 [2024-05-15 02:23:45.689821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.250 cpumask for '\''job0'\'' is too big 00:23:00.250 cpumask for '\''job1'\'' is too big 00:23:00.250 cpumask for '\''job2'\'' is too big 00:23:00.250 Running I/O for 2 seconds... 
00:23:00.250 00:23:00.250 Latency(us) 00:23:00.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.250 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.250 Malloc0 : 2.00 444458.66 434.04 0.00 0.00 575.70 273.07 1794.43 00:23:00.250 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.250 Malloc0 : 2.00 444477.09 434.06 0.00 0.00 575.51 226.25 1778.83 00:23:00.250 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.250 Malloc0 : 2.00 444461.76 434.04 0.00 0.00 575.41 130.68 1778.83 00:23:00.250 =================================================================================================================== 00:23:00.250 Total : 1333397.51 1302.15 0.00 0.00 575.54 130.68 1794.43' 00:23:00.250 02:23:47 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 02:23:45.149280] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:00.250 [2024-05-15 02:23:45.149486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:00.250 Using job config with 3 jobs 00:23:00.250 EAL: TSC is not safe to use in SMP mode 00:23:00.250 EAL: TSC is not invariant 00:23:00.250 [2024-05-15 02:23:45.593179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.250 [2024-05-15 02:23:45.687131] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:00.250 [2024-05-15 02:23:45.689821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.250 cpumask for '\''job0'\'' is too big 00:23:00.250 cpumask for '\''job1'\'' is too big 00:23:00.250 cpumask for '\''job2'\'' is too big 00:23:00.250 Running I/O for 2 seconds... 
00:23:00.250 00:23:00.250 Latency(us) 00:23:00.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.251 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.251 Malloc0 : 2.00 444458.66 434.04 0.00 0.00 575.70 273.07 1794.43 00:23:00.251 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.251 Malloc0 : 2.00 444477.09 434.06 0.00 0.00 575.51 226.25 1778.83 00:23:00.251 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:00.251 Malloc0 : 2.00 444461.76 434.04 0.00 0.00 575.41 130.68 1778.83 00:23:00.251 =================================================================================================================== 00:23:00.251 Total : 1333397.51 1302.15 0.00 0.00 575.54 130.68 1794.43' 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:00.251 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:00.251 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:00.251 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 
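For reference, the pass/fail check that get_num_jobs performs throughout this test (visible in the grep lines of the trace above) reduces to a short shell pattern; the sketch below is a condensed illustration of what the trace shows, not the actual helper body — the names bdevperf_output and get_num_jobs are taken from the trace, the exact function implementation may differ:

    # extract the advertised job count from the captured bdevperf output and compare it
    num_jobs=$(echo "$bdevperf_output" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+')
    [[ $num_jobs == 4 ]]   # e.g. the 4-job run above is expected to report exactly 4 jobs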
00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:00.251 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:00.251 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:00.251 02:23:47 bdevperf_config -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:02.777 02:23:50 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-15 02:23:47.913325] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:02.777 [2024-05-15 02:23:47.913540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:02.777 Using job config with 4 jobs 00:23:02.777 EAL: TSC is not safe to use in SMP mode 00:23:02.777 EAL: TSC is not invariant 00:23:02.777 [2024-05-15 02:23:48.368698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.777 [2024-05-15 02:23:48.446138] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:02.777 [2024-05-15 02:23:48.448278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.777 cpumask for '\''job0'\'' is too big 00:23:02.777 cpumask for '\''job1'\'' is too big 00:23:02.777 cpumask for '\''job2'\'' is too big 00:23:02.777 cpumask for '\''job3'\'' is too big 00:23:02.777 Running I/O for 2 seconds... 
00:23:02.777 00:23:02.777 Latency(us) 00:23:02.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167845.47 163.91 0.00 0.00 1524.87 448.61 2808.68 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167837.84 163.90 0.00 0.00 1524.75 438.86 2777.47 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167828.79 163.90 0.00 0.00 1524.32 415.45 2324.96 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167819.82 163.89 0.00 0.00 1524.19 364.74 2324.96 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167812.28 163.88 0.00 0.00 1523.87 407.65 1903.66 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167877.48 163.94 0.00 0.00 1523.10 364.74 1880.25 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167868.27 163.93 0.00 0.00 1522.76 382.29 1755.42 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167858.97 163.92 0.00 0.00 1522.70 308.17 1763.23 00:23:02.777 =================================================================================================================== 00:23:02.777 Total : 1342748.91 1311.28 0.00 0.00 1523.82 308.17 2808.68' 00:23:02.777 02:23:50 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-15 02:23:47.913325] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:02.777 [2024-05-15 02:23:47.913540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:02.777 Using job config with 4 jobs 00:23:02.777 EAL: TSC is not safe to use in SMP mode 00:23:02.777 EAL: TSC is not invariant 00:23:02.777 [2024-05-15 02:23:48.368698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.777 [2024-05-15 02:23:48.446138] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:02.777 [2024-05-15 02:23:48.448278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.777 cpumask for '\''job0'\'' is too big 00:23:02.777 cpumask for '\''job1'\'' is too big 00:23:02.777 cpumask for '\''job2'\'' is too big 00:23:02.777 cpumask for '\''job3'\'' is too big 00:23:02.777 Running I/O for 2 seconds... 
00:23:02.777 00:23:02.777 Latency(us) 00:23:02.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167845.47 163.91 0.00 0.00 1524.87 448.61 2808.68 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167837.84 163.90 0.00 0.00 1524.75 438.86 2777.47 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167828.79 163.90 0.00 0.00 1524.32 415.45 2324.96 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167819.82 163.89 0.00 0.00 1524.19 364.74 2324.96 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167812.28 163.88 0.00 0.00 1523.87 407.65 1903.66 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167877.48 163.94 0.00 0.00 1523.10 364.74 1880.25 00:23:02.777 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc0 : 2.00 167868.27 163.93 0.00 0.00 1522.76 382.29 1755.42 00:23:02.777 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.777 Malloc1 : 2.00 167858.97 163.92 0.00 0.00 1522.70 308.17 1763.23 00:23:02.777 =================================================================================================================== 00:23:02.777 Total : 1342748.91 1311.28 0.00 0.00 1523.82 308.17 2808.68' 00:23:02.777 02:23:50 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-15 02:23:47.913325] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:02.777 [2024-05-15 02:23:47.913540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:02.777 Using job config with 4 jobs 00:23:02.777 EAL: TSC is not safe to use in SMP mode 00:23:02.777 EAL: TSC is not invariant 00:23:02.777 [2024-05-15 02:23:48.368698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.777 [2024-05-15 02:23:48.446138] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:02.777 [2024-05-15 02:23:48.448278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.777 cpumask for '\''job0'\'' is too big 00:23:02.777 cpumask for '\''job1'\'' is too big 00:23:02.777 cpumask for '\''job2'\'' is too big 00:23:02.777 cpumask for '\''job3'\'' is too big 00:23:02.778 Running I/O for 2 seconds... 
00:23:02.778 00:23:02.778 Latency(us) 00:23:02.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.778 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc0 : 2.00 167845.47 163.91 0.00 0.00 1524.87 448.61 2808.68 00:23:02.778 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc1 : 2.00 167837.84 163.90 0.00 0.00 1524.75 438.86 2777.47 00:23:02.778 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc0 : 2.00 167828.79 163.90 0.00 0.00 1524.32 415.45 2324.96 00:23:02.778 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc1 : 2.00 167819.82 163.89 0.00 0.00 1524.19 364.74 2324.96 00:23:02.778 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc0 : 2.00 167812.28 163.88 0.00 0.00 1523.87 407.65 1903.66 00:23:02.778 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc1 : 2.00 167877.48 163.94 0.00 0.00 1523.10 364.74 1880.25 00:23:02.778 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc0 : 2.00 167868.27 163.93 0.00 0.00 1522.76 382.29 1755.42 00:23:02.778 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:02.778 Malloc1 : 2.00 167858.97 163.92 0.00 0.00 1522.70 308.17 1763.23 00:23:02.778 =================================================================================================================== 00:23:02.778 Total : 1342748.91 1311.28 0.00 0.00 1523.82 308.17 2808.68' 00:23:02.778 02:23:50 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:02.778 02:23:50 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:02.778 02:23:50 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:23:02.778 02:23:50 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:23:02.778 02:23:50 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:02.778 02:23:50 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:02.778 00:23:02.778 real 0m11.291s 00:23:02.778 user 0m9.020s 00:23:02.778 sys 0m2.348s 00:23:02.778 02:23:50 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:02.778 ************************************ 00:23:02.778 END TEST bdevperf_config 00:23:02.778 ************************************ 00:23:02.778 02:23:50 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:23:02.778 02:23:50 -- spdk/autotest.sh@188 -- # uname -s 00:23:02.778 02:23:50 -- spdk/autotest.sh@188 -- # [[ FreeBSD == Linux ]] 00:23:02.778 02:23:50 -- spdk/autotest.sh@194 -- # uname -s 00:23:02.778 02:23:50 -- spdk/autotest.sh@194 -- # [[ FreeBSD == Linux ]] 00:23:02.778 02:23:50 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:23:02.778 02:23:50 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:23:02.778 02:23:50 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:02.778 02:23:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:02.778 02:23:50 -- common/autotest_common.sh@10 -- # set +x 00:23:02.778 ************************************ 00:23:02.778 START TEST blockdev_nvme 00:23:02.778 
************************************ 00:23:02.778 02:23:50 blockdev_nvme -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:23:03.035 * Looking for test storage... 00:23:03.035 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:03.035 02:23:50 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66323 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:03.035 02:23:50 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66323 00:23:03.035 02:23:50 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 66323 ']' 00:23:03.035 02:23:50 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.035 02:23:50 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:03.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.035 02:23:50 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:03.035 02:23:50 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:03.035 02:23:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:03.035 [2024-05-15 02:23:50.883539] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:03.035 [2024-05-15 02:23:50.883804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:03.600 EAL: TSC is not safe to use in SMP mode 00:23:03.600 EAL: TSC is not invariant 00:23:03.600 [2024-05-15 02:23:51.392668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.600 [2024-05-15 02:23:51.473649] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:03.600 [2024-05-15 02:23:51.475809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.165 02:23:51 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.165 02:23:51 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:23:04.165 02:23:51 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:23:04.165 02:23:51 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:23:04.165 02:23:51 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:23:04.165 02:23:51 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:23:04.165 02:23:51 blockdev_nvme -- bdev/blockdev.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:04.165 02:23:51 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:23:04.165 02:23:51 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.165 02:23:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:04.165 [2024-05-15 02:23:51.955739] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.165 02:23:52 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2ba48d99-1262-11ef-99fd-bfc7c66e2865"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2ba48d99-1262-11ef-99fd-bfc7c66e2865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:23:04.165 02:23:52 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66323 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 66323 ']' 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 66323 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@954 -- # ps -c -o command 66323 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@954 -- # tail -1 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:23:04.165 killing process with pid 66323 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66323' 00:23:04.165 02:23:52 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 66323 00:23:04.165 
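For reference, the Nvme0 controller that this test attaches via the inline JSON passed to load_subsystem_config can also be set up with standalone RPC calls against a running spdk_tgt; a rough equivalent using the same controller name and PCIe address shown above would be (illustration only, these commands are not taken from this run):

    # attach the emulated controller as "Nvme0" and list the resulting Nvme0n1 bdev
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1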
02:23:52 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 66323 00:23:04.423 02:23:52 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:04.423 02:23:52 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:23:04.423 02:23:52 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:23:04.423 02:23:52 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:04.423 02:23:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:04.423 ************************************ 00:23:04.423 START TEST bdev_hello_world 00:23:04.423 ************************************ 00:23:04.423 02:23:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:23:04.423 [2024-05-15 02:23:52.356715] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:04.423 [2024-05-15 02:23:52.356932] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:04.988 EAL: TSC is not safe to use in SMP mode 00:23:04.988 EAL: TSC is not invariant 00:23:04.988 [2024-05-15 02:23:52.831638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.988 [2024-05-15 02:23:52.919104] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:04.988 [2024-05-15 02:23:52.921445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.988 [2024-05-15 02:23:52.978545] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:05.265 [2024-05-15 02:23:53.048846] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:05.265 [2024-05-15 02:23:53.048907] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:23:05.265 [2024-05-15 02:23:53.048929] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:05.265 [2024-05-15 02:23:53.049708] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:05.265 [2024-05-15 02:23:53.050066] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:05.265 [2024-05-15 02:23:53.050090] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:05.265 [2024-05-15 02:23:53.050314] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:23:05.265 00:23:05.265 [2024-05-15 02:23:53.050332] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:05.265 00:23:05.265 real 0m0.851s 00:23:05.265 user 0m0.321s 00:23:05.265 sys 0m0.529s 00:23:05.265 02:23:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:05.265 02:23:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:05.265 ************************************ 00:23:05.265 END TEST bdev_hello_world 00:23:05.265 ************************************ 00:23:05.265 02:23:53 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:23:05.265 02:23:53 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:05.265 02:23:53 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:05.265 02:23:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:05.265 ************************************ 00:23:05.265 START TEST bdev_bounds 00:23:05.265 ************************************ 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66390 00:23:05.265 Process bdevio pid: 66390 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66390' 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66390 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 66390 ']' 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.265 02:23:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:05.265 [2024-05-15 02:23:53.252084] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:05.265 [2024-05-15 02:23:53.252487] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:05.828 EAL: TSC is not safe to use in SMP mode 00:23:05.828 EAL: TSC is not invariant 00:23:05.828 [2024-05-15 02:23:53.716550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:05.828 [2024-05-15 02:23:53.809808] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:05.828 [2024-05-15 02:23:53.809866] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:23:05.828 [2024-05-15 02:23:53.809878] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 
00:23:05.828 [2024-05-15 02:23:53.814119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.828 [2024-05-15 02:23:53.814042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.828 [2024-05-15 02:23:53.814109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.084 [2024-05-15 02:23:53.873229] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:06.342 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:06.342 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:23:06.342 02:23:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:06.599 I/O targets: 00:23:06.599 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:23:06.599 00:23:06.599 00:23:06.599 CUnit - A unit testing framework for C - Version 2.1-3 00:23:06.599 http://cunit.sourceforge.net/ 00:23:06.599 00:23:06.599 00:23:06.599 Suite: bdevio tests on: Nvme0n1 00:23:06.599 Test: blockdev write read block ...passed 00:23:06.599 Test: blockdev write zeroes read block ...passed 00:23:06.599 Test: blockdev write zeroes read no split ...passed 00:23:06.599 Test: blockdev write zeroes read split ...passed 00:23:06.599 Test: blockdev write zeroes read split partial ...passed 00:23:06.599 Test: blockdev reset ...[2024-05-15 02:23:54.473144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:23:06.599 [2024-05-15 02:23:54.474608] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:06.599 passed 00:23:06.599 Test: blockdev write read 8 blocks ...passed 00:23:06.599 Test: blockdev write read size > 128k ...passed 00:23:06.599 Test: blockdev write read invalid size ...passed 00:23:06.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:06.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:06.599 Test: blockdev write read max offset ...passed 00:23:06.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:06.599 Test: blockdev writev readv 8 blocks ...passed 00:23:06.599 Test: blockdev writev readv 30 x 1block ...passed 00:23:06.599 Test: blockdev writev readv block ...passed 00:23:06.599 Test: blockdev writev readv size > 128k ...passed 00:23:06.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:06.599 Test: blockdev comparev and writev ...[2024-05-15 02:23:54.478809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7947000 len:0x1000 00:23:06.599 [2024-05-15 02:23:54.478855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:23:06.599 passed 00:23:06.599 Test: blockdev nvme passthru rw ...passed 00:23:06.599 Test: blockdev nvme passthru vendor specific ...passed 00:23:06.599 Test: blockdev nvme admin passthru ...[2024-05-15 02:23:54.479324] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:23:06.599 [2024-05-15 02:23:54.479340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:23:06.599 passed 00:23:06.599 Test: blockdev copy ...passed 00:23:06.599 00:23:06.599 Run Summary: Type Total Ran Passed Failed 
Inactive 00:23:06.599 suites 1 1 n/a 0 0 00:23:06.599 tests 23 23 23 0 0 00:23:06.599 asserts 152 152 152 0 n/a 00:23:06.599 00:23:06.599 Elapsed time = 0.023 seconds 00:23:06.599 0 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66390 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 66390 ']' 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 66390 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps -c -o command 66390 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # tail -1 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=bdevio 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' bdevio = sudo ']' 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66390' 00:23:06.599 killing process with pid 66390 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 66390 00:23:06.599 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 66390 00:23:06.856 02:23:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:23:06.856 00:23:06.856 real 0m1.424s 00:23:06.856 user 0m2.993s 00:23:06.856 sys 0m0.603s 00:23:06.856 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:06.856 02:23:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:06.856 ************************************ 00:23:06.856 END TEST bdev_bounds 00:23:06.856 ************************************ 00:23:06.856 02:23:54 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:23:06.856 02:23:54 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:06.856 02:23:54 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:06.856 02:23:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:06.856 ************************************ 00:23:06.856 START TEST bdev_nbd 00:23:06.856 ************************************ 00:23:06.856 02:23:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:23:06.856 02:23:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:23:06.856 02:23:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:23:06.856 02:23:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:23:06.856 00:23:06.856 real 0m0.004s 00:23:06.856 user 0m0.007s 00:23:06.856 sys 0m0.001s 00:23:06.856 02:23:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:06.856 ************************************ 00:23:06.856 END TEST bdev_nbd 00:23:06.856 ************************************ 00:23:06.856 02:23:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:06.856 02:23:54 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:23:06.856 02:23:54 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 
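bdev_nbd above starts and ends immediately because NBD is Linux-only: the guard in blockdev.sh runs uname -s, evaluates [[ FreeBSD == Linux ]] and returns 0, exactly as logged. A simplified sketch of that guard (the real nbd_function_test performs the Linux-only NBD setup and I/O after it):

nbd_function_test() {
  if [[ $(uname -s) != Linux ]]; then
    # NBD is not available outside Linux, so the test is a no-op here
    return 0
  fi
  # ... Linux-only: set up /dev/nbd devices, run I/O against them, tear down ...
}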
00:23:06.856 skipping fio tests on NVMe due to multi-ns failures. 00:23:06.856 02:23:54 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:23:06.856 02:23:54 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:06.856 02:23:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:06.856 02:23:54 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:23:06.856 02:23:54 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:06.856 02:23:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:06.856 ************************************ 00:23:06.856 START TEST bdev_verify 00:23:06.856 ************************************ 00:23:06.856 02:23:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:06.856 [2024-05-15 02:23:54.758660] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:06.856 [2024-05-15 02:23:54.758849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:07.422 EAL: TSC is not safe to use in SMP mode 00:23:07.422 EAL: TSC is not invariant 00:23:07.422 [2024-05-15 02:23:55.215448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:07.422 [2024-05-15 02:23:55.307753] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:07.422 [2024-05-15 02:23:55.307825] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:23:07.422 [2024-05-15 02:23:55.311171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.422 [2024-05-15 02:23:55.311162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.422 [2024-05-15 02:23:55.369781] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:07.422 Running I/O for 5 seconds... 
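The verify run above is pinned with -m 0x3 (and EAL -c 0x3): one bit per core, so cores 0 and 1 are selected, which is why exactly two "Reactor started" lines appear. A one-line check of that mask arithmetic:

printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # -> 0x3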
00:23:12.683 00:23:12.683 Latency(us) 00:23:12.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.683 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:12.683 Verification LBA range: start 0x0 length 0xa0000 00:23:12.683 Nvme0n1 : 5.00 21139.71 82.58 0.00 0.00 6045.64 651.46 12732.68 00:23:12.683 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:12.683 Verification LBA range: start 0xa0000 length 0xa0000 00:23:12.683 Nvme0n1 : 5.00 20885.30 81.58 0.00 0.00 6120.62 522.73 12420.60 00:23:12.683 =================================================================================================================== 00:23:12.683 Total : 42025.01 164.16 0.00 0.00 6082.90 522.73 12732.68 00:23:13.267 00:23:13.268 real 0m6.382s 00:23:13.268 user 0m11.574s 00:23:13.268 sys 0m0.510s 00:23:13.268 02:24:01 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:13.268 ************************************ 00:23:13.268 END TEST bdev_verify 00:23:13.268 ************************************ 00:23:13.268 02:24:01 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:13.268 02:24:01 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:13.268 02:24:01 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:23:13.268 02:24:01 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:13.268 02:24:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:13.268 ************************************ 00:23:13.268 START TEST bdev_verify_big_io 00:23:13.268 ************************************ 00:23:13.268 02:24:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:13.268 [2024-05-15 02:24:01.188801] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:13.268 [2024-05-15 02:24:01.189061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:13.835 EAL: TSC is not safe to use in SMP mode 00:23:13.835 EAL: TSC is not invariant 00:23:13.835 [2024-05-15 02:24:01.686920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:13.835 [2024-05-15 02:24:01.780105] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:13.835 [2024-05-15 02:24:01.780180] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:23:13.835 [2024-05-15 02:24:01.783574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.835 [2024-05-15 02:24:01.783567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.835 [2024-05-15 02:24:01.842199] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:14.092 Running I/O for 5 seconds... 
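A quick sanity check, not part of the test output, of how the MiB/s column in the bdev_verify summary above follows from IOPS at the 4096-byte I/O size:

echo '21139.71 * 4096 / 1048576' | bc -l   # -> 82.58... MiB/s for the core-0 job
echo '42025.01 * 4096 / 1048576' | bc -l   # -> 164.16... MiB/s total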
00:23:19.613 00:23:19.613 Latency(us) 00:23:19.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.613 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:19.613 Verification LBA range: start 0x0 length 0xa000 00:23:19.613 Nvme0n1 : 5.01 7743.45 483.97 0.00 0.00 16439.39 205.77 46436.82 00:23:19.613 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:19.613 Verification LBA range: start 0xa000 length 0xa000 00:23:19.613 Nvme0n1 : 5.01 7580.77 473.80 0.00 0.00 16794.55 249.66 35701.43 00:23:19.613 =================================================================================================================== 00:23:19.613 Total : 15324.22 957.76 0.00 0.00 16615.05 205.77 46436.82 00:23:22.891 00:23:22.891 real 0m9.444s 00:23:22.891 user 0m17.610s 00:23:22.891 sys 0m0.552s 00:23:22.891 02:24:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:22.891 02:24:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:22.891 ************************************ 00:23:22.891 END TEST bdev_verify_big_io 00:23:22.891 ************************************ 00:23:22.891 02:24:10 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:22.891 02:24:10 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:23:22.891 02:24:10 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:22.891 02:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:22.891 ************************************ 00:23:22.891 START TEST bdev_write_zeroes 00:23:22.891 ************************************ 00:23:22.891 02:24:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:22.891 [2024-05-15 02:24:10.676443] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:22.891 [2024-05-15 02:24:10.676658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:23.149 EAL: TSC is not safe to use in SMP mode 00:23:23.149 EAL: TSC is not invariant 00:23:23.149 [2024-05-15 02:24:11.134958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.406 [2024-05-15 02:24:11.230524] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:23.406 [2024-05-15 02:24:11.233264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.406 [2024-05-15 02:24:11.291867] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:23.406 Running I/O for 1 seconds... 
00:23:24.339 00:23:24.339 Latency(us) 00:23:24.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.339 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:24.339 Nvme0n1 : 1.00 60596.99 236.71 0.00 0.00 2110.85 427.15 14729.96 00:23:24.339 =================================================================================================================== 00:23:24.339 Total : 60596.99 236.71 0.00 0.00 2110.85 427.15 14729.96 00:23:24.597 00:23:24.597 real 0m1.874s 00:23:24.597 user 0m1.367s 00:23:24.597 sys 0m0.504s 00:23:24.597 02:24:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:24.597 02:24:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:24.597 ************************************ 00:23:24.597 END TEST bdev_write_zeroes 00:23:24.597 ************************************ 00:23:24.597 02:24:12 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:24.597 02:24:12 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:23:24.597 02:24:12 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:24.597 02:24:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:24.597 ************************************ 00:23:24.597 START TEST bdev_json_nonenclosed 00:23:24.597 ************************************ 00:23:24.597 02:24:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:24.597 [2024-05-15 02:24:12.598306] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:24.597 [2024-05-15 02:24:12.598601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:25.163 EAL: TSC is not safe to use in SMP mode 00:23:25.163 EAL: TSC is not invariant 00:23:25.163 [2024-05-15 02:24:13.093892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.163 [2024-05-15 02:24:13.186514] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:25.420 [2024-05-15 02:24:13.189379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.420 [2024-05-15 02:24:13.189433] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
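bdev_json_nonenclosed above is a negative test: bdevperf is pointed at test/bdev/nonenclosed.json and is expected to fail with the "not enclosed in {}" error just logged. The real file's contents are not shown in this log; illustratively, a config whose top level is not a JSON object would trip that check:

cat > nonenclosed.json <<'EOF'
"subsystems": [
  { "subsystem": "bdev", "config": [] }
]
EOF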
00:23:25.420 [2024-05-15 02:24:13.189445] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:25.420 [2024-05-15 02:24:13.189456] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:25.420 00:23:25.420 real 0m0.708s 00:23:25.420 user 0m0.162s 00:23:25.420 sys 0m0.544s 00:23:25.420 ************************************ 00:23:25.420 END TEST bdev_json_nonenclosed 00:23:25.420 ************************************ 00:23:25.420 02:24:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:25.420 02:24:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:25.420 02:24:13 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:25.420 02:24:13 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:23:25.420 02:24:13 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:25.420 02:24:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:25.420 ************************************ 00:23:25.420 START TEST bdev_json_nonarray 00:23:25.420 ************************************ 00:23:25.420 02:24:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:25.420 [2024-05-15 02:24:13.348690] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:25.420 [2024-05-15 02:24:13.348920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:25.984 EAL: TSC is not safe to use in SMP mode 00:23:25.984 EAL: TSC is not invariant 00:23:25.984 [2024-05-15 02:24:13.822579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.984 [2024-05-15 02:24:13.915381] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:25.984 [2024-05-15 02:24:13.918022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.984 [2024-05-15 02:24:13.918072] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
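bdev_json_nonarray is the matching negative test for the "'subsystems' should be an array" error above. Again, the real test/bdev/nonarray.json is not shown here; illustratively, a config where "subsystems" is an object instead of an array triggers it:

cat > nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF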
00:23:25.984 [2024-05-15 02:24:13.918085] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:25.984 [2024-05-15 02:24:13.918095] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:26.242 00:23:26.242 real 0m0.685s 00:23:26.242 user 0m0.164s 00:23:26.242 sys 0m0.520s 00:23:26.242 02:24:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:26.242 02:24:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:26.242 ************************************ 00:23:26.242 END TEST bdev_json_nonarray 00:23:26.242 ************************************ 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:23:26.242 02:24:14 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:23:26.242 00:23:26.242 real 0m23.368s 00:23:26.242 user 0m35.825s 00:23:26.242 sys 0m4.781s 00:23:26.242 02:24:14 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:26.242 02:24:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.242 ************************************ 00:23:26.242 END TEST blockdev_nvme 00:23:26.242 ************************************ 00:23:26.242 02:24:14 -- spdk/autotest.sh@209 -- # uname -s 00:23:26.242 02:24:14 -- spdk/autotest.sh@209 -- # [[ FreeBSD == Linux ]] 00:23:26.242 02:24:14 -- spdk/autotest.sh@212 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:23:26.242 02:24:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:26.242 02:24:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.242 02:24:14 -- common/autotest_common.sh@10 -- # set +x 00:23:26.242 ************************************ 00:23:26.242 START TEST nvme 00:23:26.242 ************************************ 00:23:26.242 02:24:14 nvme -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:23:26.499 * Looking for test storage... 
00:23:26.499 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:23:26.499 02:24:14 nvme -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:26.499 hw.nic_uio.bdfs="0:16:0" 00:23:26.499 02:24:14 nvme -- nvme/nvme.sh@79 -- # uname 00:23:26.499 02:24:14 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:23:26.499 02:24:14 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:23:26.499 02:24:14 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:23:26.499 02:24:14 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.499 02:24:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.499 ************************************ 00:23:26.499 START TEST nvme_reset 00:23:26.499 ************************************ 00:23:26.499 02:24:14 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:23:27.067 EAL: TSC is not safe to use in SMP mode 00:23:27.067 EAL: TSC is not invariant 00:23:27.067 [2024-05-15 02:24:14.953201] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:27.067 Initializing NVMe Controllers 00:23:27.067 Skipping QEMU NVMe SSD at 0000:00:10.0 00:23:27.067 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:23:27.067 00:23:27.067 real 0m0.516s 00:23:27.067 user 0m0.005s 00:23:27.067 sys 0m0.511s 00:23:27.067 02:24:15 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:27.067 02:24:15 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:23:27.067 ************************************ 00:23:27.067 END TEST nvme_reset 00:23:27.067 ************************************ 00:23:27.067 02:24:15 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:23:27.067 02:24:15 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:27.067 02:24:15 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:27.067 02:24:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:27.067 ************************************ 00:23:27.067 START TEST nvme_identify 00:23:27.067 ************************************ 00:23:27.067 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:23:27.067 02:24:15 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:23:27.067 02:24:15 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:23:27.067 02:24:15 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:23:27.067 02:24:15 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:23:27.067 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:27.067 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:23:27.067 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:27.067 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:27.067 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:27.327 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:23:27.327 02:24:15 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:23:27.327 02:24:15 nvme.nvme_identify -- 
nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:23:27.585 EAL: TSC is not safe to use in SMP mode 00:23:27.585 EAL: TSC is not invariant 00:23:27.585 [2024-05-15 02:24:15.606391] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:27.843 ===================================================== 00:23:27.843 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:27.843 ===================================================== 00:23:27.843 Controller Capabilities/Features 00:23:27.843 ================================ 00:23:27.843 Vendor ID: 1b36 00:23:27.843 Subsystem Vendor ID: 1af4 00:23:27.843 Serial Number: 12340 00:23:27.843 Model Number: QEMU NVMe Ctrl 00:23:27.843 Firmware Version: 8.0.0 00:23:27.843 Recommended Arb Burst: 6 00:23:27.843 IEEE OUI Identifier: 00 54 52 00:23:27.843 Multi-path I/O 00:23:27.843 May have multiple subsystem ports: No 00:23:27.843 May have multiple controllers: No 00:23:27.843 Associated with SR-IOV VF: No 00:23:27.843 Max Data Transfer Size: 524288 00:23:27.843 Max Number of Namespaces: 256 00:23:27.843 Max Number of I/O Queues: 64 00:23:27.843 NVMe Specification Version (VS): 1.4 00:23:27.843 NVMe Specification Version (Identify): 1.4 00:23:27.843 Maximum Queue Entries: 2048 00:23:27.843 Contiguous Queues Required: Yes 00:23:27.843 Arbitration Mechanisms Supported 00:23:27.843 Weighted Round Robin: Not Supported 00:23:27.843 Vendor Specific: Not Supported 00:23:27.843 Reset Timeout: 7500 ms 00:23:27.843 Doorbell Stride: 4 bytes 00:23:27.843 NVM Subsystem Reset: Not Supported 00:23:27.843 Command Sets Supported 00:23:27.843 NVM Command Set: Supported 00:23:27.843 Boot Partition: Not Supported 00:23:27.843 Memory Page Size Minimum: 4096 bytes 00:23:27.843 Memory Page Size Maximum: 65536 bytes 00:23:27.843 Persistent Memory Region: Not Supported 00:23:27.843 Optional Asynchronous Events Supported 00:23:27.843 Namespace Attribute Notices: Supported 00:23:27.843 Firmware Activation Notices: Not Supported 00:23:27.843 ANA Change Notices: Not Supported 00:23:27.843 PLE Aggregate Log Change Notices: Not Supported 00:23:27.843 LBA Status Info Alert Notices: Not Supported 00:23:27.843 EGE Aggregate Log Change Notices: Not Supported 00:23:27.843 Normal NVM Subsystem Shutdown event: Not Supported 00:23:27.843 Zone Descriptor Change Notices: Not Supported 00:23:27.843 Discovery Log Change Notices: Not Supported 00:23:27.843 Controller Attributes 00:23:27.843 128-bit Host Identifier: Not Supported 00:23:27.843 Non-Operational Permissive Mode: Not Supported 00:23:27.843 NVM Sets: Not Supported 00:23:27.843 Read Recovery Levels: Not Supported 00:23:27.843 Endurance Groups: Not Supported 00:23:27.843 Predictable Latency Mode: Not Supported 00:23:27.843 Traffic Based Keep ALive: Not Supported 00:23:27.843 Namespace Granularity: Not Supported 00:23:27.843 SQ Associations: Not Supported 00:23:27.843 UUID List: Not Supported 00:23:27.843 Multi-Domain Subsystem: Not Supported 00:23:27.843 Fixed Capacity Management: Not Supported 00:23:27.843 Variable Capacity Management: Not Supported 00:23:27.843 Delete Endurance Group: Not Supported 00:23:27.843 Delete NVM Set: Not Supported 00:23:27.843 Extended LBA Formats Supported: Supported 00:23:27.843 Flexible Data Placement Supported: Not Supported 00:23:27.843 00:23:27.843 Controller Memory Buffer Support 00:23:27.843 ================================ 00:23:27.843 Supported: No 00:23:27.843 00:23:27.843 Persistent Memory Region Support 00:23:27.843 
================================ 00:23:27.843 Supported: No 00:23:27.843 00:23:27.843 Admin Command Set Attributes 00:23:27.843 ============================ 00:23:27.843 Security Send/Receive: Not Supported 00:23:27.843 Format NVM: Supported 00:23:27.843 Firmware Activate/Download: Not Supported 00:23:27.843 Namespace Management: Supported 00:23:27.843 Device Self-Test: Not Supported 00:23:27.843 Directives: Supported 00:23:27.843 NVMe-MI: Not Supported 00:23:27.843 Virtualization Management: Not Supported 00:23:27.843 Doorbell Buffer Config: Supported 00:23:27.843 Get LBA Status Capability: Not Supported 00:23:27.843 Command & Feature Lockdown Capability: Not Supported 00:23:27.843 Abort Command Limit: 4 00:23:27.843 Async Event Request Limit: 4 00:23:27.843 Number of Firmware Slots: N/A 00:23:27.843 Firmware Slot 1 Read-Only: N/A 00:23:27.843 Firmware Activation Without Reset: N/A 00:23:27.843 Multiple Update Detection Support: N/A 00:23:27.843 Firmware Update Granularity: No Information Provided 00:23:27.843 Per-Namespace SMART Log: Yes 00:23:27.843 Asymmetric Namespace Access Log Page: Not Supported 00:23:27.843 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:23:27.843 Command Effects Log Page: Supported 00:23:27.843 Get Log Page Extended Data: Supported 00:23:27.843 Telemetry Log Pages: Not Supported 00:23:27.843 Persistent Event Log Pages: Not Supported 00:23:27.843 Supported Log Pages Log Page: May Support 00:23:27.843 Commands Supported & Effects Log Page: Not Supported 00:23:27.843 Feature Identifiers & Effects Log Page:May Support 00:23:27.843 NVMe-MI Commands & Effects Log Page: May Support 00:23:27.843 Data Area 4 for Telemetry Log: Not Supported 00:23:27.843 Error Log Page Entries Supported: 1 00:23:27.843 Keep Alive: Not Supported 00:23:27.843 00:23:27.843 NVM Command Set Attributes 00:23:27.843 ========================== 00:23:27.843 Submission Queue Entry Size 00:23:27.843 Max: 64 00:23:27.843 Min: 64 00:23:27.843 Completion Queue Entry Size 00:23:27.843 Max: 16 00:23:27.843 Min: 16 00:23:27.843 Number of Namespaces: 256 00:23:27.843 Compare Command: Supported 00:23:27.843 Write Uncorrectable Command: Not Supported 00:23:27.843 Dataset Management Command: Supported 00:23:27.843 Write Zeroes Command: Supported 00:23:27.843 Set Features Save Field: Supported 00:23:27.843 Reservations: Not Supported 00:23:27.843 Timestamp: Supported 00:23:27.843 Copy: Supported 00:23:27.843 Volatile Write Cache: Present 00:23:27.843 Atomic Write Unit (Normal): 1 00:23:27.843 Atomic Write Unit (PFail): 1 00:23:27.843 Atomic Compare & Write Unit: 1 00:23:27.843 Fused Compare & Write: Not Supported 00:23:27.843 Scatter-Gather List 00:23:27.843 SGL Command Set: Supported 00:23:27.843 SGL Keyed: Not Supported 00:23:27.843 SGL Bit Bucket Descriptor: Not Supported 00:23:27.843 SGL Metadata Pointer: Not Supported 00:23:27.843 Oversized SGL: Not Supported 00:23:27.843 SGL Metadata Address: Not Supported 00:23:27.843 SGL Offset: Not Supported 00:23:27.843 Transport SGL Data Block: Not Supported 00:23:27.843 Replay Protected Memory Block: Not Supported 00:23:27.843 00:23:27.843 Firmware Slot Information 00:23:27.843 ========================= 00:23:27.843 Active slot: 1 00:23:27.843 Slot 1 Firmware Revision: 1.0 00:23:27.843 00:23:27.843 00:23:27.843 Commands Supported and Effects 00:23:27.843 ============================== 00:23:27.843 Admin Commands 00:23:27.843 -------------- 00:23:27.843 Delete I/O Submission Queue (00h): Supported 00:23:27.843 Create I/O Submission Queue (01h): Supported 00:23:27.843 
Get Log Page (02h): Supported 00:23:27.843 Delete I/O Completion Queue (04h): Supported 00:23:27.843 Create I/O Completion Queue (05h): Supported 00:23:27.843 Identify (06h): Supported 00:23:27.843 Abort (08h): Supported 00:23:27.843 Set Features (09h): Supported 00:23:27.843 Get Features (0Ah): Supported 00:23:27.843 Asynchronous Event Request (0Ch): Supported 00:23:27.843 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:27.843 Directive Send (19h): Supported 00:23:27.843 Directive Receive (1Ah): Supported 00:23:27.843 Virtualization Management (1Ch): Supported 00:23:27.843 Doorbell Buffer Config (7Ch): Supported 00:23:27.843 Format NVM (80h): Supported LBA-Change 00:23:27.843 I/O Commands 00:23:27.843 ------------ 00:23:27.843 Flush (00h): Supported LBA-Change 00:23:27.843 Write (01h): Supported LBA-Change 00:23:27.843 Read (02h): Supported 00:23:27.843 Compare (05h): Supported 00:23:27.843 Write Zeroes (08h): Supported LBA-Change 00:23:27.843 Dataset Management (09h): Supported LBA-Change 00:23:27.843 Unknown (0Ch): Supported 00:23:27.843 Unknown (12h): Supported 00:23:27.843 Copy (19h): Supported LBA-Change 00:23:27.843 Unknown (1Dh): Supported LBA-Change 00:23:27.843 00:23:27.843 Error Log 00:23:27.843 ========= 00:23:27.843 00:23:27.843 Arbitration 00:23:27.843 =========== 00:23:27.843 Arbitration Burst: no limit 00:23:27.843 00:23:27.843 Power Management 00:23:27.843 ================ 00:23:27.843 Number of Power States: 1 00:23:27.843 Current Power State: Power State #0 00:23:27.843 Power State #0: 00:23:27.843 Max Power: 25.00 W 00:23:27.843 Non-Operational State: Operational 00:23:27.843 Entry Latency: 16 microseconds 00:23:27.843 Exit Latency: 4 microseconds 00:23:27.843 Relative Read Throughput: 0 00:23:27.843 Relative Read Latency: 0 00:23:27.843 Relative Write Throughput: 0 00:23:27.843 Relative Write Latency: 0 00:23:27.843 Idle Power: Not Reported 00:23:27.843 Active Power: Not Reported 00:23:27.843 Non-Operational Permissive Mode: Not Supported 00:23:27.843 00:23:27.843 Health Information 00:23:27.843 ================== 00:23:27.843 Critical Warnings: 00:23:27.843 Available Spare Space: OK 00:23:27.843 Temperature: OK 00:23:27.843 Device Reliability: OK 00:23:27.843 Read Only: No 00:23:27.843 Volatile Memory Backup: OK 00:23:27.843 Current Temperature: 323 Kelvin (50 Celsius) 00:23:27.843 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:27.843 Available Spare: 0% 00:23:27.843 Available Spare Threshold: 0% 00:23:27.843 Life Percentage Used: 0% 00:23:27.843 Data Units Read: 11523 00:23:27.843 Data Units Written: 11507 00:23:27.843 Host Read Commands: 287220 00:23:27.843 Host Write Commands: 287069 00:23:27.844 Controller Busy Time: 0 minutes 00:23:27.844 Power Cycles: 0 00:23:27.844 Power On Hours: 0 hours 00:23:27.844 Unsafe Shutdowns: 0 00:23:27.844 Unrecoverable Media Errors: 0 00:23:27.844 Lifetime Error Log Entries: 0 00:23:27.844 Warning Temperature Time: 0 minutes 00:23:27.844 Critical Temperature Time: 0 minutes 00:23:27.844 00:23:27.844 Number of Queues 00:23:27.844 ================ 00:23:27.844 Number of I/O Submission Queues: 64 00:23:27.844 Number of I/O Completion Queues: 64 00:23:27.844 00:23:27.844 ZNS Specific Controller Data 00:23:27.844 ============================ 00:23:27.844 Zone Append Size Limit: 0 00:23:27.844 00:23:27.844 00:23:27.844 Active Namespaces 00:23:27.844 ================= 00:23:27.844 Namespace ID:1 00:23:27.844 Error Recovery Timeout: Unlimited 00:23:27.844 Command Set Identifier: NVM (00h) 00:23:27.844 Deallocate: 
Supported 00:23:27.844 Deallocated/Unwritten Error: Supported 00:23:27.844 Deallocated Read Value: All 0x00 00:23:27.844 Deallocate in Write Zeroes: Not Supported 00:23:27.844 Deallocated Guard Field: 0xFFFF 00:23:27.844 Flush: Supported 00:23:27.844 Reservation: Not Supported 00:23:27.844 Namespace Sharing Capabilities: Private 00:23:27.844 Size (in LBAs): 1310720 (5GiB) 00:23:27.844 Capacity (in LBAs): 1310720 (5GiB) 00:23:27.844 Utilization (in LBAs): 1310720 (5GiB) 00:23:27.844 Thin Provisioning: Not Supported 00:23:27.844 Per-NS Atomic Units: No 00:23:27.844 Maximum Single Source Range Length: 128 00:23:27.844 Maximum Copy Length: 128 00:23:27.844 Maximum Source Range Count: 128 00:23:27.844 NGUID/EUI64 Never Reused: No 00:23:27.844 Namespace Write Protected: No 00:23:27.844 Number of LBA Formats: 8 00:23:27.844 Current LBA Format: LBA Format #04 00:23:27.844 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:27.844 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:27.844 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:27.844 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:27.844 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:27.844 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:27.844 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:27.844 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:27.844 00:23:27.844 02:24:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:23:27.844 02:24:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:28.410 EAL: TSC is not safe to use in SMP mode 00:23:28.410 EAL: TSC is not invariant 00:23:28.410 [2024-05-15 02:24:16.153030] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:28.410 ===================================================== 00:23:28.410 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:28.410 ===================================================== 00:23:28.410 Controller Capabilities/Features 00:23:28.410 ================================ 00:23:28.410 Vendor ID: 1b36 00:23:28.410 Subsystem Vendor ID: 1af4 00:23:28.410 Serial Number: 12340 00:23:28.410 Model Number: QEMU NVMe Ctrl 00:23:28.410 Firmware Version: 8.0.0 00:23:28.410 Recommended Arb Burst: 6 00:23:28.410 IEEE OUI Identifier: 00 54 52 00:23:28.410 Multi-path I/O 00:23:28.410 May have multiple subsystem ports: No 00:23:28.410 May have multiple controllers: No 00:23:28.410 Associated with SR-IOV VF: No 00:23:28.410 Max Data Transfer Size: 524288 00:23:28.410 Max Number of Namespaces: 256 00:23:28.410 Max Number of I/O Queues: 64 00:23:28.410 NVMe Specification Version (VS): 1.4 00:23:28.410 NVMe Specification Version (Identify): 1.4 00:23:28.410 Maximum Queue Entries: 2048 00:23:28.410 Contiguous Queues Required: Yes 00:23:28.410 Arbitration Mechanisms Supported 00:23:28.410 Weighted Round Robin: Not Supported 00:23:28.410 Vendor Specific: Not Supported 00:23:28.410 Reset Timeout: 7500 ms 00:23:28.410 Doorbell Stride: 4 bytes 00:23:28.410 NVM Subsystem Reset: Not Supported 00:23:28.410 Command Sets Supported 00:23:28.410 NVM Command Set: Supported 00:23:28.410 Boot Partition: Not Supported 00:23:28.410 Memory Page Size Minimum: 4096 bytes 00:23:28.410 Memory Page Size Maximum: 65536 bytes 00:23:28.410 Persistent Memory Region: Not Supported 00:23:28.410 Optional Asynchronous Events Supported 00:23:28.410 Namespace Attribute Notices: Supported 00:23:28.410 Firmware 
Activation Notices: Not Supported 00:23:28.410 ANA Change Notices: Not Supported 00:23:28.410 PLE Aggregate Log Change Notices: Not Supported 00:23:28.410 LBA Status Info Alert Notices: Not Supported 00:23:28.410 EGE Aggregate Log Change Notices: Not Supported 00:23:28.410 Normal NVM Subsystem Shutdown event: Not Supported 00:23:28.410 Zone Descriptor Change Notices: Not Supported 00:23:28.410 Discovery Log Change Notices: Not Supported 00:23:28.410 Controller Attributes 00:23:28.410 128-bit Host Identifier: Not Supported 00:23:28.410 Non-Operational Permissive Mode: Not Supported 00:23:28.410 NVM Sets: Not Supported 00:23:28.410 Read Recovery Levels: Not Supported 00:23:28.411 Endurance Groups: Not Supported 00:23:28.411 Predictable Latency Mode: Not Supported 00:23:28.411 Traffic Based Keep ALive: Not Supported 00:23:28.411 Namespace Granularity: Not Supported 00:23:28.411 SQ Associations: Not Supported 00:23:28.411 UUID List: Not Supported 00:23:28.411 Multi-Domain Subsystem: Not Supported 00:23:28.411 Fixed Capacity Management: Not Supported 00:23:28.411 Variable Capacity Management: Not Supported 00:23:28.411 Delete Endurance Group: Not Supported 00:23:28.411 Delete NVM Set: Not Supported 00:23:28.411 Extended LBA Formats Supported: Supported 00:23:28.411 Flexible Data Placement Supported: Not Supported 00:23:28.411 00:23:28.411 Controller Memory Buffer Support 00:23:28.411 ================================ 00:23:28.411 Supported: No 00:23:28.411 00:23:28.411 Persistent Memory Region Support 00:23:28.411 ================================ 00:23:28.411 Supported: No 00:23:28.411 00:23:28.411 Admin Command Set Attributes 00:23:28.411 ============================ 00:23:28.411 Security Send/Receive: Not Supported 00:23:28.411 Format NVM: Supported 00:23:28.411 Firmware Activate/Download: Not Supported 00:23:28.411 Namespace Management: Supported 00:23:28.411 Device Self-Test: Not Supported 00:23:28.411 Directives: Supported 00:23:28.411 NVMe-MI: Not Supported 00:23:28.411 Virtualization Management: Not Supported 00:23:28.411 Doorbell Buffer Config: Supported 00:23:28.411 Get LBA Status Capability: Not Supported 00:23:28.411 Command & Feature Lockdown Capability: Not Supported 00:23:28.411 Abort Command Limit: 4 00:23:28.411 Async Event Request Limit: 4 00:23:28.411 Number of Firmware Slots: N/A 00:23:28.411 Firmware Slot 1 Read-Only: N/A 00:23:28.411 Firmware Activation Without Reset: N/A 00:23:28.411 Multiple Update Detection Support: N/A 00:23:28.411 Firmware Update Granularity: No Information Provided 00:23:28.411 Per-Namespace SMART Log: Yes 00:23:28.411 Asymmetric Namespace Access Log Page: Not Supported 00:23:28.411 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:23:28.411 Command Effects Log Page: Supported 00:23:28.411 Get Log Page Extended Data: Supported 00:23:28.411 Telemetry Log Pages: Not Supported 00:23:28.411 Persistent Event Log Pages: Not Supported 00:23:28.411 Supported Log Pages Log Page: May Support 00:23:28.411 Commands Supported & Effects Log Page: Not Supported 00:23:28.411 Feature Identifiers & Effects Log Page:May Support 00:23:28.411 NVMe-MI Commands & Effects Log Page: May Support 00:23:28.411 Data Area 4 for Telemetry Log: Not Supported 00:23:28.411 Error Log Page Entries Supported: 1 00:23:28.411 Keep Alive: Not Supported 00:23:28.411 00:23:28.411 NVM Command Set Attributes 00:23:28.411 ========================== 00:23:28.411 Submission Queue Entry Size 00:23:28.411 Max: 64 00:23:28.411 Min: 64 00:23:28.411 Completion Queue Entry Size 00:23:28.411 Max: 16 
00:23:28.411 Min: 16 00:23:28.411 Number of Namespaces: 256 00:23:28.411 Compare Command: Supported 00:23:28.411 Write Uncorrectable Command: Not Supported 00:23:28.411 Dataset Management Command: Supported 00:23:28.411 Write Zeroes Command: Supported 00:23:28.411 Set Features Save Field: Supported 00:23:28.411 Reservations: Not Supported 00:23:28.411 Timestamp: Supported 00:23:28.411 Copy: Supported 00:23:28.411 Volatile Write Cache: Present 00:23:28.411 Atomic Write Unit (Normal): 1 00:23:28.411 Atomic Write Unit (PFail): 1 00:23:28.411 Atomic Compare & Write Unit: 1 00:23:28.411 Fused Compare & Write: Not Supported 00:23:28.411 Scatter-Gather List 00:23:28.411 SGL Command Set: Supported 00:23:28.411 SGL Keyed: Not Supported 00:23:28.411 SGL Bit Bucket Descriptor: Not Supported 00:23:28.411 SGL Metadata Pointer: Not Supported 00:23:28.411 Oversized SGL: Not Supported 00:23:28.411 SGL Metadata Address: Not Supported 00:23:28.411 SGL Offset: Not Supported 00:23:28.411 Transport SGL Data Block: Not Supported 00:23:28.411 Replay Protected Memory Block: Not Supported 00:23:28.411 00:23:28.411 Firmware Slot Information 00:23:28.411 ========================= 00:23:28.411 Active slot: 1 00:23:28.411 Slot 1 Firmware Revision: 1.0 00:23:28.411 00:23:28.411 00:23:28.411 Commands Supported and Effects 00:23:28.411 ============================== 00:23:28.411 Admin Commands 00:23:28.411 -------------- 00:23:28.411 Delete I/O Submission Queue (00h): Supported 00:23:28.411 Create I/O Submission Queue (01h): Supported 00:23:28.411 Get Log Page (02h): Supported 00:23:28.411 Delete I/O Completion Queue (04h): Supported 00:23:28.411 Create I/O Completion Queue (05h): Supported 00:23:28.411 Identify (06h): Supported 00:23:28.411 Abort (08h): Supported 00:23:28.411 Set Features (09h): Supported 00:23:28.411 Get Features (0Ah): Supported 00:23:28.411 Asynchronous Event Request (0Ch): Supported 00:23:28.411 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:28.411 Directive Send (19h): Supported 00:23:28.411 Directive Receive (1Ah): Supported 00:23:28.411 Virtualization Management (1Ch): Supported 00:23:28.411 Doorbell Buffer Config (7Ch): Supported 00:23:28.411 Format NVM (80h): Supported LBA-Change 00:23:28.411 I/O Commands 00:23:28.411 ------------ 00:23:28.411 Flush (00h): Supported LBA-Change 00:23:28.411 Write (01h): Supported LBA-Change 00:23:28.411 Read (02h): Supported 00:23:28.411 Compare (05h): Supported 00:23:28.411 Write Zeroes (08h): Supported LBA-Change 00:23:28.411 Dataset Management (09h): Supported LBA-Change 00:23:28.411 Unknown (0Ch): Supported 00:23:28.411 Unknown (12h): Supported 00:23:28.411 Copy (19h): Supported LBA-Change 00:23:28.411 Unknown (1Dh): Supported LBA-Change 00:23:28.411 00:23:28.411 Error Log 00:23:28.411 ========= 00:23:28.411 00:23:28.411 Arbitration 00:23:28.411 =========== 00:23:28.411 Arbitration Burst: no limit 00:23:28.411 00:23:28.411 Power Management 00:23:28.411 ================ 00:23:28.411 Number of Power States: 1 00:23:28.411 Current Power State: Power State #0 00:23:28.411 Power State #0: 00:23:28.411 Max Power: 25.00 W 00:23:28.411 Non-Operational State: Operational 00:23:28.411 Entry Latency: 16 microseconds 00:23:28.411 Exit Latency: 4 microseconds 00:23:28.411 Relative Read Throughput: 0 00:23:28.411 Relative Read Latency: 0 00:23:28.411 Relative Write Throughput: 0 00:23:28.411 Relative Write Latency: 0 00:23:28.411 Idle Power: Not Reported 00:23:28.411 Active Power: Not Reported 00:23:28.411 Non-Operational Permissive Mode: Not Supported 
00:23:28.411 00:23:28.411 Health Information 00:23:28.411 ================== 00:23:28.411 Critical Warnings: 00:23:28.411 Available Spare Space: OK 00:23:28.411 Temperature: OK 00:23:28.411 Device Reliability: OK 00:23:28.411 Read Only: No 00:23:28.411 Volatile Memory Backup: OK 00:23:28.411 Current Temperature: 323 Kelvin (50 Celsius) 00:23:28.411 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:28.411 Available Spare: 0% 00:23:28.411 Available Spare Threshold: 0% 00:23:28.411 Life Percentage Used: 0% 00:23:28.411 Data Units Read: 11523 00:23:28.411 Data Units Written: 11507 00:23:28.411 Host Read Commands: 287220 00:23:28.411 Host Write Commands: 287069 00:23:28.411 Controller Busy Time: 0 minutes 00:23:28.411 Power Cycles: 0 00:23:28.411 Power On Hours: 0 hours 00:23:28.411 Unsafe Shutdowns: 0 00:23:28.411 Unrecoverable Media Errors: 0 00:23:28.411 Lifetime Error Log Entries: 0 00:23:28.412 Warning Temperature Time: 0 minutes 00:23:28.412 Critical Temperature Time: 0 minutes 00:23:28.412 00:23:28.412 Number of Queues 00:23:28.412 ================ 00:23:28.412 Number of I/O Submission Queues: 64 00:23:28.412 Number of I/O Completion Queues: 64 00:23:28.412 00:23:28.412 ZNS Specific Controller Data 00:23:28.412 ============================ 00:23:28.412 Zone Append Size Limit: 0 00:23:28.412 00:23:28.412 00:23:28.412 Active Namespaces 00:23:28.412 ================= 00:23:28.412 Namespace ID:1 00:23:28.412 Error Recovery Timeout: Unlimited 00:23:28.412 Command Set Identifier: NVM (00h) 00:23:28.412 Deallocate: Supported 00:23:28.412 Deallocated/Unwritten Error: Supported 00:23:28.412 Deallocated Read Value: All 0x00 00:23:28.412 Deallocate in Write Zeroes: Not Supported 00:23:28.412 Deallocated Guard Field: 0xFFFF 00:23:28.412 Flush: Supported 00:23:28.412 Reservation: Not Supported 00:23:28.412 Namespace Sharing Capabilities: Private 00:23:28.412 Size (in LBAs): 1310720 (5GiB) 00:23:28.412 Capacity (in LBAs): 1310720 (5GiB) 00:23:28.412 Utilization (in LBAs): 1310720 (5GiB) 00:23:28.412 Thin Provisioning: Not Supported 00:23:28.412 Per-NS Atomic Units: No 00:23:28.412 Maximum Single Source Range Length: 128 00:23:28.412 Maximum Copy Length: 128 00:23:28.412 Maximum Source Range Count: 128 00:23:28.412 NGUID/EUI64 Never Reused: No 00:23:28.412 Namespace Write Protected: No 00:23:28.412 Number of LBA Formats: 8 00:23:28.412 Current LBA Format: LBA Format #04 00:23:28.412 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:28.412 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:28.412 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:28.412 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:28.412 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:28.412 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:28.412 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:28.412 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:28.412 00:23:28.412 00:23:28.412 real 0m1.150s 00:23:28.412 user 0m0.036s 00:23:28.412 sys 0m1.129s 00:23:28.412 02:24:16 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:28.412 02:24:16 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.412 ************************************ 00:23:28.412 END TEST nvme_identify 00:23:28.412 ************************************ 00:23:28.412 02:24:16 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:23:28.412 02:24:16 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:28.412 02:24:16 nvme -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:23:28.412 02:24:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:28.412 ************************************ 00:23:28.412 START TEST nvme_perf 00:23:28.412 ************************************ 00:23:28.412 02:24:16 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:23:28.412 02:24:16 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:23:28.977 EAL: TSC is not safe to use in SMP mode 00:23:28.977 EAL: TSC is not invariant 00:23:28.977 [2024-05-15 02:24:16.708475] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:29.939 Initializing NVMe Controllers 00:23:29.939 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:29.939 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:29.939 Initialization complete. Launching workers. 00:23:29.939 ======================================================== 00:23:29.939 Latency(us) 00:23:29.939 Device Information : IOPS MiB/s Average min max 00:23:29.939 PCIE (0000:00:10.0) NSID 1 from core 0: 90122.85 1056.13 1420.49 206.00 4657.60 00:23:29.939 ======================================================== 00:23:29.939 Total : 90122.85 1056.13 1420.49 206.00 4657.60 00:23:29.939 00:23:29.939 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:29.939 ================================================================================= 00:23:29.939 1.00000% : 1154.679us 00:23:29.939 10.00000% : 1256.104us 00:23:29.939 25.00000% : 1318.519us 00:23:29.939 50.00000% : 1412.141us 00:23:29.939 75.00000% : 1505.764us 00:23:29.939 90.00000% : 1583.783us 00:23:29.939 95.00000% : 1630.594us 00:23:29.939 98.00000% : 1708.613us 00:23:29.939 99.00000% : 1817.839us 00:23:29.939 99.50000% : 2122.113us 00:23:29.939 99.90000% : 4150.603us 00:23:29.939 99.99000% : 4618.716us 00:23:29.939 99.99900% : 4681.131us 00:23:29.939 99.99990% : 4681.131us 00:23:29.939 99.99999% : 4681.131us 00:23:29.939 00:23:29.939 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:29.939 ============================================================================== 00:23:29.939 Range in us Cumulative IO count 00:23:29.939 205.775 - 206.750: 0.0011% ( 1) 00:23:29.939 218.453 - 219.428: 0.0022% ( 1) 00:23:29.939 220.403 - 221.379: 0.0044% ( 2) 00:23:29.939 234.057 - 235.032: 0.0055% ( 1) 00:23:29.939 236.982 - 237.958: 0.0067% ( 1) 00:23:29.939 245.759 - 246.735: 0.0078% ( 1) 00:23:29.939 246.735 - 247.710: 0.0089% ( 1) 00:23:29.939 248.685 - 249.660: 0.0100% ( 1) 00:23:29.939 249.660 - 251.611: 0.0122% ( 2) 00:23:29.939 251.611 - 253.561: 0.0133% ( 1) 00:23:29.939 296.472 - 298.422: 0.0144% ( 1) 00:23:29.939 298.422 - 300.373: 0.0155% ( 1) 00:23:29.939 300.373 - 302.323: 0.0177% ( 2) 00:23:29.939 302.323 - 304.274: 0.0200% ( 2) 00:23:29.939 304.274 - 306.224: 0.0222% ( 2) 00:23:29.939 306.224 - 308.174: 0.0244% ( 2) 00:23:29.939 308.174 - 310.125: 0.0255% ( 1) 00:23:29.939 366.689 - 368.639: 0.0288% ( 3) 00:23:29.939 370.590 - 372.540: 0.0299% ( 1) 00:23:29.939 550.033 - 553.934: 0.0311% ( 1) 00:23:29.939 553.934 - 557.835: 0.0355% ( 4) 00:23:29.939 557.835 - 561.736: 0.0388% ( 3) 00:23:29.939 561.736 - 565.637: 0.0433% ( 4) 00:23:29.939 565.637 - 569.538: 0.0466% ( 3) 00:23:29.939 569.538 - 573.439: 0.0510% ( 4) 00:23:29.939 573.439 - 577.340: 0.0555% ( 4) 00:23:29.939 577.340 - 581.240: 0.0588% ( 3) 00:23:29.939 581.240 - 585.141: 0.0621% ( 3) 00:23:29.939 725.575 - 729.476: 0.0654% 
( 3) 00:23:29.939 729.476 - 733.377: 0.0677% ( 2) 00:23:29.939 733.377 - 737.278: 0.0699% ( 2) 00:23:29.939 737.278 - 741.179: 0.0710% ( 1) 00:23:29.939 752.882 - 756.783: 0.0743% ( 3) 00:23:29.939 756.783 - 760.684: 0.0787% ( 4) 00:23:29.939 760.684 - 764.585: 0.0821% ( 3) 00:23:29.939 764.585 - 768.486: 0.0865% ( 4) 00:23:29.939 768.486 - 772.387: 0.0909% ( 4) 00:23:29.939 772.387 - 776.288: 0.0954% ( 4) 00:23:29.939 776.288 - 780.189: 0.0987% ( 3) 00:23:29.939 780.189 - 784.090: 0.1031% ( 4) 00:23:29.939 784.090 - 787.990: 0.1076% ( 4) 00:23:29.939 787.990 - 791.891: 0.1120% ( 4) 00:23:29.939 791.891 - 795.792: 0.1164% ( 4) 00:23:29.939 795.792 - 799.693: 0.1209% ( 4) 00:23:29.939 799.693 - 803.594: 0.1242% ( 3) 00:23:29.939 803.594 - 807.495: 0.1286% ( 4) 00:23:29.939 807.495 - 811.396: 0.1331% ( 4) 00:23:29.939 811.396 - 815.297: 0.1375% ( 4) 00:23:29.939 815.297 - 819.198: 0.1408% ( 3) 00:23:29.939 819.198 - 823.099: 0.1453% ( 4) 00:23:29.939 823.099 - 827.000: 0.1464% ( 1) 00:23:29.939 1022.047 - 1029.849: 0.1519% ( 5) 00:23:29.939 1029.849 - 1037.651: 0.1586% ( 6) 00:23:29.939 1037.651 - 1045.453: 0.1763% ( 16) 00:23:29.939 1045.453 - 1053.255: 0.1941% ( 16) 00:23:29.939 1053.255 - 1061.056: 0.2118% ( 16) 00:23:29.939 1061.056 - 1068.858: 0.2407% ( 26) 00:23:29.939 1068.858 - 1076.660: 0.2784% ( 34) 00:23:29.939 1076.660 - 1084.462: 0.3371% ( 53) 00:23:29.939 1084.462 - 1092.264: 0.3871% ( 45) 00:23:29.939 1092.264 - 1100.066: 0.4447% ( 52) 00:23:29.939 1100.066 - 1107.868: 0.5102% ( 59) 00:23:29.939 1107.868 - 1115.670: 0.5800% ( 63) 00:23:29.939 1115.670 - 1123.472: 0.6665% ( 78) 00:23:29.939 1123.472 - 1131.273: 0.7608% ( 85) 00:23:29.939 1131.273 - 1139.075: 0.8639% ( 93) 00:23:29.939 1139.075 - 1146.877: 0.9893% ( 113) 00:23:29.939 1146.877 - 1154.679: 1.1523% ( 147) 00:23:29.939 1154.679 - 1162.481: 1.3453% ( 174) 00:23:29.939 1162.481 - 1170.283: 1.5815% ( 213) 00:23:29.939 1170.283 - 1178.085: 1.9031% ( 290) 00:23:29.939 1178.085 - 1185.887: 2.2735% ( 334) 00:23:29.939 1185.887 - 1193.689: 2.7149% ( 398) 00:23:29.939 1193.689 - 1201.490: 3.2761% ( 506) 00:23:29.939 1201.490 - 1209.292: 3.9437% ( 602) 00:23:29.939 1209.292 - 1217.094: 4.7267% ( 706) 00:23:29.939 1217.094 - 1224.896: 5.6339% ( 818) 00:23:29.939 1224.896 - 1232.698: 6.6509% ( 917) 00:23:29.939 1232.698 - 1240.500: 7.7710% ( 1010) 00:23:29.939 1240.500 - 1248.302: 9.0209% ( 1127) 00:23:29.939 1248.302 - 1256.104: 10.3374% ( 1187) 00:23:29.939 1256.104 - 1263.905: 11.8135% ( 1331) 00:23:29.939 1263.905 - 1271.707: 13.4382% ( 1465) 00:23:29.939 1271.707 - 1279.509: 15.1728% ( 1564) 00:23:29.939 1279.509 - 1287.311: 17.0670% ( 1708) 00:23:29.939 1287.311 - 1295.113: 19.0323% ( 1772) 00:23:29.939 1295.113 - 1302.915: 21.0818% ( 1848) 00:23:29.939 1302.915 - 1310.717: 23.2777% ( 1980) 00:23:29.939 1310.717 - 1318.519: 25.4847% ( 1990) 00:23:29.939 1318.519 - 1326.321: 27.7571% ( 2049) 00:23:29.939 1326.321 - 1334.122: 30.0362% ( 2055) 00:23:29.939 1334.122 - 1341.924: 32.3152% ( 2055) 00:23:29.939 1341.924 - 1349.726: 34.5577% ( 2022) 00:23:29.939 1349.726 - 1357.528: 36.7714% ( 1996) 00:23:29.939 1357.528 - 1365.330: 39.0028% ( 2012) 00:23:29.939 1365.330 - 1373.132: 41.2386% ( 2016) 00:23:29.939 1373.132 - 1380.934: 43.4356% ( 1981) 00:23:29.939 1380.934 - 1388.736: 45.6171% ( 1967) 00:23:29.939 1388.736 - 1396.538: 47.7686% ( 1940) 00:23:29.939 1396.538 - 1404.339: 49.9201% ( 1940) 00:23:29.939 1404.339 - 1412.141: 52.0994% ( 1965) 00:23:29.939 1412.141 - 1419.943: 54.1966% ( 1891) 00:23:29.939 1419.943 - 
1427.745: 56.2938% ( 1891) 00:23:29.939 1427.745 - 1435.547: 58.3699% ( 1872) 00:23:29.939 1435.547 - 1443.349: 60.4217% ( 1850) 00:23:29.939 1443.349 - 1451.151: 62.4601% ( 1838) 00:23:29.939 1451.151 - 1458.953: 64.4774% ( 1819) 00:23:29.939 1458.953 - 1466.755: 66.4837% ( 1809) 00:23:29.939 1466.755 - 1474.556: 68.4567% ( 1779) 00:23:29.939 1474.556 - 1482.358: 70.3919% ( 1745) 00:23:29.939 1482.358 - 1490.160: 72.2684% ( 1692) 00:23:29.939 1490.160 - 1497.962: 74.1017% ( 1653) 00:23:29.939 1497.962 - 1505.764: 75.9227% ( 1642) 00:23:29.939 1505.764 - 1513.566: 77.6728% ( 1578) 00:23:29.939 1513.566 - 1521.368: 79.3818% ( 1541) 00:23:29.939 1521.368 - 1529.170: 81.0698% ( 1522) 00:23:29.939 1529.170 - 1536.971: 82.7067% ( 1476) 00:23:29.939 1536.971 - 1544.773: 84.3060% ( 1442) 00:23:29.939 1544.773 - 1552.575: 85.7732% ( 1323) 00:23:29.939 1552.575 - 1560.377: 87.1728% ( 1262) 00:23:29.939 1560.377 - 1568.179: 88.4460% ( 1148) 00:23:29.939 1568.179 - 1575.981: 89.5861% ( 1028) 00:23:29.939 1575.981 - 1583.783: 90.6497% ( 959) 00:23:29.940 1583.783 - 1591.585: 91.6157% ( 871) 00:23:29.940 1591.585 - 1599.387: 92.4818% ( 781) 00:23:29.940 1599.387 - 1607.188: 93.2848% ( 724) 00:23:29.940 1607.188 - 1614.990: 93.9968% ( 642) 00:23:29.940 1614.990 - 1622.792: 94.6511% ( 590) 00:23:29.940 1622.792 - 1630.594: 95.2045% ( 499) 00:23:29.940 1630.594 - 1638.396: 95.6991% ( 446) 00:23:29.940 1638.396 - 1646.198: 96.1527% ( 409) 00:23:29.940 1646.198 - 1654.000: 96.5520% ( 360) 00:23:29.940 1654.000 - 1661.802: 96.8714% ( 288) 00:23:29.940 1661.802 - 1669.604: 97.1431% ( 245) 00:23:29.940 1669.604 - 1677.405: 97.3738% ( 208) 00:23:29.940 1677.405 - 1685.207: 97.5712% ( 178) 00:23:29.940 1685.207 - 1693.009: 97.7498% ( 161) 00:23:29.940 1693.009 - 1700.811: 97.9150% ( 149) 00:23:29.940 1700.811 - 1708.613: 98.0481% ( 120) 00:23:29.940 1708.613 - 1716.415: 98.1668% ( 107) 00:23:29.940 1716.415 - 1724.217: 98.2710% ( 94) 00:23:29.940 1724.217 - 1732.019: 98.3575% ( 78) 00:23:29.940 1732.019 - 1739.821: 98.4407% ( 75) 00:23:29.940 1739.821 - 1747.622: 98.5172% ( 69) 00:23:29.940 1747.622 - 1755.424: 98.5738% ( 51) 00:23:29.940 1755.424 - 1763.226: 98.6337% ( 54) 00:23:29.940 1763.226 - 1771.028: 98.6891% ( 50) 00:23:29.940 1771.028 - 1778.830: 98.7412% ( 47) 00:23:29.940 1778.830 - 1786.632: 98.7867% ( 41) 00:23:29.940 1786.632 - 1794.434: 98.8377% ( 46) 00:23:29.940 1794.434 - 1802.236: 98.8954% ( 52) 00:23:29.940 1802.236 - 1810.037: 98.9508% ( 50) 00:23:29.940 1810.037 - 1817.839: 99.0019% ( 46) 00:23:29.940 1817.839 - 1825.641: 99.0429% ( 37) 00:23:29.940 1825.641 - 1833.443: 99.0850% ( 38) 00:23:29.940 1833.443 - 1841.245: 99.1216% ( 33) 00:23:29.940 1841.245 - 1849.047: 99.1560% ( 31) 00:23:29.940 1849.047 - 1856.849: 99.1804% ( 22) 00:23:29.940 1856.849 - 1864.651: 99.2037% ( 21) 00:23:29.940 1864.651 - 1872.453: 99.2292% ( 23) 00:23:29.940 1872.453 - 1880.254: 99.2525% ( 21) 00:23:29.940 1880.254 - 1888.056: 99.2758% ( 21) 00:23:29.940 1888.056 - 1895.858: 99.2991% ( 21) 00:23:29.940 1895.858 - 1903.660: 99.3224% ( 21) 00:23:29.940 1903.660 - 1911.462: 99.3423% ( 18) 00:23:29.940 1911.462 - 1919.264: 99.3534% ( 10) 00:23:29.940 1919.264 - 1927.066: 99.3634% ( 9) 00:23:29.940 1927.066 - 1934.868: 99.3734% ( 9) 00:23:29.940 1934.868 - 1942.670: 99.3834% ( 9) 00:23:29.940 1942.670 - 1950.471: 99.3911% ( 7) 00:23:29.940 1950.471 - 1958.273: 99.3967% ( 5) 00:23:29.940 1958.273 - 1966.075: 99.4056% ( 8) 00:23:29.940 1966.075 - 1973.877: 99.4166% ( 10) 00:23:29.940 1973.877 - 1981.679: 99.4244% ( 7) 
00:23:29.940 1981.679 - 1989.481: 99.4300% ( 5) 00:23:29.940 1989.481 - 1997.283: 99.4377% ( 7) 00:23:29.940 1997.283 - 2012.887: 99.4510% ( 12) 00:23:29.940 2012.887 - 2028.490: 99.4632% ( 11) 00:23:29.940 2028.490 - 2044.094: 99.4754% ( 11) 00:23:29.940 2044.094 - 2059.698: 99.4865% ( 10) 00:23:29.940 2059.698 - 2075.302: 99.4876% ( 1) 00:23:29.940 2090.905 - 2106.509: 99.4898% ( 2) 00:23:29.940 2106.509 - 2122.113: 99.5065% ( 15) 00:23:29.940 2122.113 - 2137.717: 99.5231% ( 15) 00:23:29.940 2137.717 - 2153.320: 99.5397% ( 15) 00:23:29.940 2153.320 - 2168.924: 99.5553% ( 14) 00:23:29.940 2168.924 - 2184.528: 99.5653% ( 9) 00:23:29.940 2340.566 - 2356.169: 99.5741% ( 8) 00:23:29.940 2465.396 - 2481.000: 99.5752% ( 1) 00:23:29.940 2481.000 - 2496.603: 99.5808% ( 5) 00:23:29.940 2496.603 - 2512.207: 99.5874% ( 6) 00:23:29.940 2512.207 - 2527.811: 99.5952% ( 7) 00:23:29.940 2527.811 - 2543.415: 99.6030% ( 7) 00:23:29.940 2543.415 - 2559.018: 99.6096% ( 6) 00:23:29.940 2559.018 - 2574.622: 99.6229% ( 12) 00:23:29.940 2574.622 - 2590.226: 99.6351% ( 11) 00:23:29.940 2590.226 - 2605.830: 99.6495% ( 13) 00:23:29.940 2605.830 - 2621.434: 99.6640% ( 13) 00:23:29.940 2621.434 - 2637.037: 99.6784% ( 13) 00:23:29.940 2637.037 - 2652.641: 99.6906% ( 11) 00:23:29.940 2652.641 - 2668.245: 99.6972% ( 6) 00:23:29.940 2668.245 - 2683.849: 99.7050% ( 7) 00:23:29.940 2683.849 - 2699.452: 99.7128% ( 7) 00:23:29.940 2699.452 - 2715.056: 99.7161% ( 3) 00:23:29.940 3542.056 - 3557.660: 99.7172% ( 1) 00:23:29.940 3557.660 - 3573.264: 99.7194% ( 2) 00:23:29.940 3573.264 - 3588.867: 99.7238% ( 4) 00:23:29.940 3588.867 - 3604.471: 99.7272% ( 3) 00:23:29.940 3604.471 - 3620.075: 99.7316% ( 4) 00:23:29.940 3620.075 - 3635.679: 99.7338% ( 2) 00:23:29.940 3635.679 - 3651.282: 99.7383% ( 4) 00:23:29.940 3651.282 - 3666.886: 99.7416% ( 3) 00:23:29.940 3666.886 - 3682.490: 99.7449% ( 3) 00:23:29.940 3682.490 - 3698.094: 99.7494% ( 4) 00:23:29.940 3698.094 - 3713.698: 99.7538% ( 4) 00:23:29.940 3713.698 - 3729.301: 99.7571% ( 3) 00:23:29.940 3729.301 - 3744.905: 99.7604% ( 3) 00:23:29.940 3744.905 - 3760.509: 99.7649% ( 4) 00:23:29.940 3760.509 - 3776.113: 99.7682% ( 3) 00:23:29.940 3776.113 - 3791.716: 99.7704% ( 2) 00:23:29.940 3791.716 - 3807.320: 99.7738% ( 3) 00:23:29.940 3807.320 - 3822.924: 99.7782% ( 4) 00:23:29.940 3822.924 - 3838.528: 99.7815% ( 3) 00:23:29.940 3838.528 - 3854.132: 99.7860% ( 4) 00:23:29.940 3854.132 - 3869.735: 99.7893% ( 3) 00:23:29.940 3869.735 - 3885.339: 99.7937% ( 4) 00:23:29.940 3885.339 - 3900.943: 99.7970% ( 3) 00:23:29.940 3900.943 - 3916.547: 99.8037% ( 6) 00:23:29.940 3916.547 - 3932.150: 99.8092% ( 5) 00:23:29.940 3932.150 - 3947.754: 99.8170% ( 7) 00:23:29.940 3947.754 - 3963.358: 99.8237% ( 6) 00:23:29.940 3963.358 - 3978.962: 99.8303% ( 6) 00:23:29.940 3978.962 - 3994.565: 99.8359% ( 5) 00:23:29.940 3994.565 - 4025.773: 99.8503% ( 13) 00:23:29.940 4025.773 - 4056.981: 99.8636% ( 12) 00:23:29.940 4056.981 - 4088.188: 99.8769% ( 12) 00:23:29.940 4088.188 - 4119.396: 99.8913% ( 13) 00:23:29.940 4119.396 - 4150.603: 99.9024% ( 10) 00:23:29.940 4150.603 - 4181.811: 99.9079% ( 5) 00:23:29.940 4181.811 - 4213.018: 99.9135% ( 5) 00:23:29.940 4213.018 - 4244.226: 99.9201% ( 6) 00:23:29.940 4244.226 - 4275.433: 99.9257% ( 5) 00:23:29.940 4275.433 - 4306.641: 99.9312% ( 5) 00:23:29.940 4306.641 - 4337.848: 99.9379% ( 6) 00:23:29.940 4337.848 - 4369.056: 99.9434% ( 5) 00:23:29.940 4369.056 - 4400.264: 99.9501% ( 6) 00:23:29.940 4400.264 - 4431.471: 99.9556% ( 5) 00:23:29.940 4431.471 - 
4462.679: 99.9623% ( 6) 00:23:29.940 4462.679 - 4493.886: 99.9678% ( 5) 00:23:29.940 4493.886 - 4525.094: 99.9745% ( 6) 00:23:29.940 4525.094 - 4556.301: 99.9800% ( 5) 00:23:29.940 4556.301 - 4587.509: 99.9867% ( 6) 00:23:29.940 4587.509 - 4618.716: 99.9922% ( 5) 00:23:29.940 4618.716 - 4649.924: 99.9978% ( 5) 00:23:29.940 4649.924 - 4681.131: 100.0000% ( 2) 00:23:29.940 00:23:29.940 02:24:17 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:23:30.506 EAL: TSC is not safe to use in SMP mode 00:23:30.506 EAL: TSC is not invariant 00:23:30.506 [2024-05-15 02:24:18.294377] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:31.438 Initializing NVMe Controllers 00:23:31.438 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:31.438 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:31.438 Initialization complete. Launching workers. 00:23:31.438 ======================================================== 00:23:31.438 Latency(us) 00:23:31.438 Device Information : IOPS MiB/s Average min max 00:23:31.438 PCIE (0000:00:10.0) NSID 1 from core 0: 65636.55 769.18 1950.27 234.82 4195.52 00:23:31.438 ======================================================== 00:23:31.438 Total : 65636.55 769.18 1950.27 234.82 4195.52 00:23:31.438 00:23:31.438 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:31.438 ================================================================================= 00:23:31.438 1.00000% : 1357.528us 00:23:31.438 10.00000% : 1622.792us 00:23:31.438 25.00000% : 1739.821us 00:23:31.438 50.00000% : 1895.858us 00:23:31.438 75.00000% : 2137.717us 00:23:31.438 90.00000% : 2371.773us 00:23:31.438 95.00000% : 2496.603us 00:23:31.438 98.00000% : 2761.868us 00:23:31.438 99.00000% : 2933.509us 00:23:31.438 99.50000% : 3027.132us 00:23:31.438 99.90000% : 3432.830us 00:23:31.438 99.99000% : 4181.811us 00:23:31.438 99.99900% : 4213.018us 00:23:31.438 99.99990% : 4213.018us 00:23:31.438 99.99999% : 4213.018us 00:23:31.438 00:23:31.438 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:31.438 ============================================================================== 00:23:31.438 Range in us Cumulative IO count 00:23:31.438 234.057 - 235.032: 0.0030% ( 2) 00:23:31.438 236.982 - 237.958: 0.0046% ( 1) 00:23:31.438 269.165 - 271.116: 0.0061% ( 1) 00:23:31.438 271.116 - 273.066: 0.0076% ( 1) 00:23:31.438 304.274 - 306.224: 0.0107% ( 2) 00:23:31.438 306.224 - 308.174: 0.0152% ( 3) 00:23:31.438 308.174 - 310.125: 0.0183% ( 2) 00:23:31.438 310.125 - 312.075: 0.0198% ( 1) 00:23:31.438 312.075 - 314.026: 0.0213% ( 1) 00:23:31.438 314.026 - 315.976: 0.0274% ( 4) 00:23:31.438 315.976 - 317.927: 0.0289% ( 1) 00:23:31.438 317.927 - 319.877: 0.0304% ( 1) 00:23:31.438 319.877 - 321.828: 0.0320% ( 1) 00:23:31.438 321.828 - 323.778: 0.0335% ( 1) 00:23:31.438 323.778 - 325.729: 0.0350% ( 1) 00:23:31.438 325.729 - 327.679: 0.0396% ( 3) 00:23:31.438 329.630 - 331.580: 0.0426% ( 2) 00:23:31.438 331.580 - 333.531: 0.0441% ( 1) 00:23:31.438 397.896 - 399.847: 0.0472% ( 2) 00:23:31.438 399.847 - 401.797: 0.0502% ( 2) 00:23:31.439 401.797 - 403.748: 0.0548% ( 3) 00:23:31.439 403.748 - 405.698: 0.0563% ( 1) 00:23:31.439 407.649 - 409.599: 0.0593% ( 2) 00:23:31.439 409.599 - 411.549: 0.0609% ( 1) 00:23:31.439 411.549 - 413.500: 0.0624% ( 1) 00:23:31.439 413.500 - 415.450: 0.0639% ( 1) 00:23:31.439 415.450 - 417.401: 0.0654% ( 1) 00:23:31.439 454.460 
- 456.410: 0.0685% ( 2) 00:23:31.439 456.410 - 458.361: 0.0700% ( 1) 00:23:31.439 538.330 - 542.231: 0.0776% ( 5) 00:23:31.439 542.231 - 546.132: 0.0852% ( 5) 00:23:31.439 546.132 - 550.033: 0.0913% ( 4) 00:23:31.439 550.033 - 553.934: 0.0974% ( 4) 00:23:31.439 553.934 - 557.835: 0.1050% ( 5) 00:23:31.439 557.835 - 561.736: 0.1096% ( 3) 00:23:31.439 608.547 - 612.448: 0.1111% ( 1) 00:23:31.439 612.448 - 616.349: 0.1141% ( 2) 00:23:31.439 709.972 - 713.873: 0.1187% ( 3) 00:23:31.439 713.873 - 717.773: 0.1202% ( 1) 00:23:31.439 815.297 - 819.198: 0.1217% ( 1) 00:23:31.439 819.198 - 823.099: 0.1248% ( 2) 00:23:31.439 823.099 - 827.000: 0.1293% ( 3) 00:23:31.439 827.000 - 830.901: 0.1430% ( 9) 00:23:31.439 830.901 - 834.802: 0.1461% ( 2) 00:23:31.439 834.802 - 838.703: 0.1476% ( 1) 00:23:31.439 838.703 - 842.604: 0.1507% ( 2) 00:23:31.439 846.505 - 850.406: 0.1643% ( 9) 00:23:31.439 850.406 - 854.306: 0.1704% ( 4) 00:23:31.439 854.306 - 858.207: 0.1887% ( 12) 00:23:31.439 858.207 - 862.108: 0.2024% ( 9) 00:23:31.439 862.108 - 866.009: 0.2130% ( 7) 00:23:31.439 866.009 - 869.910: 0.2161% ( 2) 00:23:31.439 869.910 - 873.811: 0.2191% ( 2) 00:23:31.439 873.811 - 877.712: 0.2222% ( 2) 00:23:31.439 877.712 - 881.613: 0.2252% ( 2) 00:23:31.439 881.613 - 885.514: 0.2283% ( 2) 00:23:31.439 885.514 - 889.415: 0.2313% ( 2) 00:23:32.006 889.415 - 893.316: 0.2343% ( 2) 00:23:32.006 893.316 - 897.217: 0.2374% ( 2) 00:23:32.006 897.217 - 901.118: 0.2404% ( 2) 00:23:32.006 951.830 - 955.731: 0.2420% ( 1) 00:23:32.006 959.632 - 963.533: 0.2511% ( 6) 00:23:32.006 963.533 - 967.434: 0.2557% ( 3) 00:23:32.006 967.434 - 971.335: 0.2602% ( 3) 00:23:32.006 971.335 - 975.236: 0.2633% ( 2) 00:23:32.006 975.236 - 979.137: 0.2663% ( 2) 00:23:32.006 979.137 - 983.038: 0.2693% ( 2) 00:23:32.006 983.038 - 986.939: 0.2709% ( 1) 00:23:32.006 986.939 - 990.839: 0.2739% ( 2) 00:23:32.006 990.839 - 994.740: 0.2770% ( 2) 00:23:32.006 994.740 - 998.641: 0.2815% ( 3) 00:23:32.006 998.641 - 1006.443: 0.2876% ( 4) 00:23:32.006 1006.443 - 1014.245: 0.2937% ( 4) 00:23:32.006 1014.245 - 1022.047: 0.2998% ( 4) 00:23:32.006 1022.047 - 1029.849: 0.3013% ( 1) 00:23:32.006 1068.858 - 1076.660: 0.3043% ( 2) 00:23:32.006 1076.660 - 1084.462: 0.3120% ( 5) 00:23:32.006 1084.462 - 1092.264: 0.3180% ( 4) 00:23:32.006 1092.264 - 1100.066: 0.3211% ( 2) 00:23:32.006 1100.066 - 1107.868: 0.3257% ( 3) 00:23:32.006 1107.868 - 1115.670: 0.3333% ( 5) 00:23:32.006 1115.670 - 1123.472: 0.3409% ( 5) 00:23:32.006 1123.472 - 1131.273: 0.3454% ( 3) 00:23:32.006 1131.273 - 1139.075: 0.3485% ( 2) 00:23:32.006 1139.075 - 1146.877: 0.3546% ( 4) 00:23:32.006 1146.877 - 1154.679: 0.3759% ( 14) 00:23:32.006 1154.679 - 1162.481: 0.3835% ( 5) 00:23:32.006 1162.481 - 1170.283: 0.3941% ( 7) 00:23:32.006 1170.283 - 1178.085: 0.4033% ( 6) 00:23:32.006 1178.085 - 1185.887: 0.4124% ( 6) 00:23:32.006 1185.887 - 1193.689: 0.4276% ( 10) 00:23:32.006 1193.689 - 1201.490: 0.4413% ( 9) 00:23:32.006 1201.490 - 1209.292: 0.4687% ( 18) 00:23:32.006 1209.292 - 1217.094: 0.4778% ( 6) 00:23:32.006 1217.094 - 1224.896: 0.4915% ( 9) 00:23:32.006 1224.896 - 1232.698: 0.5067% ( 10) 00:23:32.006 1232.698 - 1240.500: 0.5417% ( 23) 00:23:32.006 1240.500 - 1248.302: 0.5828% ( 27) 00:23:32.006 1248.302 - 1256.104: 0.6102% ( 18) 00:23:32.006 1256.104 - 1263.905: 0.6376% ( 18) 00:23:32.006 1263.905 - 1271.707: 0.6635% ( 17) 00:23:32.006 1271.707 - 1279.509: 0.6924% ( 19) 00:23:32.006 1279.509 - 1287.311: 0.7183% ( 17) 00:23:32.006 1287.311 - 1295.113: 0.7426% ( 16) 00:23:32.006 1295.113 - 
1302.915: 0.7654% ( 15) 00:23:32.006 1302.915 - 1310.717: 0.7928% ( 18) 00:23:32.006 1310.717 - 1318.519: 0.8354% ( 28) 00:23:32.006 1318.519 - 1326.321: 0.8826% ( 31) 00:23:32.006 1326.321 - 1334.122: 0.9222% ( 26) 00:23:32.006 1334.122 - 1341.924: 0.9511% ( 19) 00:23:32.006 1341.924 - 1349.726: 0.9846% ( 22) 00:23:32.006 1349.726 - 1357.528: 1.0196% ( 23) 00:23:32.006 1357.528 - 1365.330: 1.0530% ( 22) 00:23:32.006 1365.330 - 1373.132: 1.0850% ( 21) 00:23:32.006 1373.132 - 1380.934: 1.1215% ( 24) 00:23:32.006 1380.934 - 1388.736: 1.1763% ( 36) 00:23:32.006 1388.736 - 1396.538: 1.2265% ( 33) 00:23:32.006 1396.538 - 1404.339: 1.2813% ( 36) 00:23:32.006 1404.339 - 1412.141: 1.3559% ( 49) 00:23:32.006 1412.141 - 1419.943: 1.4228% ( 44) 00:23:32.006 1419.943 - 1427.745: 1.5020% ( 52) 00:23:32.006 1427.745 - 1435.547: 1.6252% ( 81) 00:23:32.006 1435.547 - 1443.349: 1.7241% ( 65) 00:23:32.006 1443.349 - 1451.151: 1.8094% ( 56) 00:23:32.006 1451.151 - 1458.953: 1.8976% ( 58) 00:23:32.006 1458.953 - 1466.755: 1.9981% ( 66) 00:23:32.006 1466.755 - 1474.556: 2.1152% ( 77) 00:23:32.006 1474.556 - 1482.358: 2.2994% ( 121) 00:23:32.006 1482.358 - 1490.160: 2.4972% ( 130) 00:23:32.006 1490.160 - 1497.962: 2.6813% ( 121) 00:23:32.006 1497.962 - 1505.764: 2.9172% ( 155) 00:23:32.006 1505.764 - 1513.566: 3.1713% ( 167) 00:23:32.006 1513.566 - 1521.368: 3.4194% ( 163) 00:23:32.006 1521.368 - 1529.170: 3.7465% ( 215) 00:23:32.006 1529.170 - 1536.971: 4.0235% ( 182) 00:23:32.006 1536.971 - 1544.773: 4.3735% ( 230) 00:23:32.006 1544.773 - 1552.575: 4.8559% ( 317) 00:23:32.006 1552.575 - 1560.377: 5.2942% ( 288) 00:23:32.006 1560.377 - 1568.179: 5.7339% ( 289) 00:23:32.006 1568.179 - 1575.981: 6.2909% ( 366) 00:23:32.006 1575.981 - 1583.783: 6.8555% ( 371) 00:23:32.006 1583.783 - 1591.585: 7.4535% ( 393) 00:23:32.006 1591.585 - 1599.387: 8.0866% ( 416) 00:23:32.006 1599.387 - 1607.188: 8.8840% ( 524) 00:23:32.006 1607.188 - 1614.990: 9.6905% ( 530) 00:23:32.006 1614.990 - 1622.792: 10.4240% ( 482) 00:23:32.006 1622.792 - 1630.594: 11.1772% ( 495) 00:23:32.006 1630.594 - 1638.396: 11.9305% ( 495) 00:23:32.006 1638.396 - 1646.198: 12.7948% ( 568) 00:23:32.006 1646.198 - 1654.000: 13.6029% ( 531) 00:23:32.006 1654.000 - 1661.802: 14.4611% ( 564) 00:23:32.006 1661.802 - 1669.604: 15.3803% ( 604) 00:23:32.006 1669.604 - 1677.405: 16.4227% ( 685) 00:23:32.006 1677.405 - 1685.207: 17.4575% ( 680) 00:23:32.006 1685.207 - 1693.009: 18.5760% ( 735) 00:23:32.006 1693.009 - 1700.811: 19.6366% ( 697) 00:23:32.006 1700.811 - 1708.613: 20.7262% ( 716) 00:23:32.006 1708.613 - 1716.415: 21.8294% ( 725) 00:23:32.006 1716.415 - 1724.217: 22.9571% ( 741) 00:23:32.006 1724.217 - 1732.019: 24.1745% ( 800) 00:23:32.006 1732.019 - 1739.821: 25.3264% ( 757) 00:23:32.006 1739.821 - 1747.622: 26.3916% ( 700) 00:23:32.006 1747.622 - 1755.424: 27.4614% ( 703) 00:23:32.006 1755.424 - 1763.226: 28.5632% ( 724) 00:23:32.006 1763.226 - 1771.028: 29.7136% ( 756) 00:23:32.006 1771.028 - 1778.830: 30.9340% ( 802) 00:23:32.006 1778.830 - 1786.632: 32.1651% ( 809) 00:23:32.006 1786.632 - 1794.434: 33.4130% ( 820) 00:23:32.006 1794.434 - 1802.236: 34.7780% ( 897) 00:23:32.006 1802.236 - 1810.037: 36.0852% ( 859) 00:23:32.006 1810.037 - 1817.839: 37.3969% ( 862) 00:23:32.006 1817.839 - 1825.641: 38.7847% ( 912) 00:23:32.006 1825.641 - 1833.443: 40.0645% ( 841) 00:23:32.006 1833.443 - 1841.245: 41.4113% ( 885) 00:23:32.006 1841.245 - 1849.047: 42.7337% ( 869) 00:23:32.006 1849.047 - 1856.849: 43.9724% ( 814) 00:23:32.006 1856.849 - 1864.651: 45.2704% 
( 853) 00:23:32.006 1864.651 - 1872.453: 46.5061% ( 812) 00:23:32.006 1872.453 - 1880.254: 47.8224% ( 865) 00:23:32.006 1880.254 - 1888.056: 49.1204% ( 853) 00:23:32.006 1888.056 - 1895.858: 50.4093% ( 847) 00:23:32.006 1895.858 - 1903.660: 51.6754% ( 832) 00:23:32.006 1903.660 - 1911.462: 52.8685% ( 784) 00:23:32.006 1911.462 - 1919.264: 54.0478% ( 775) 00:23:32.006 1919.264 - 1927.066: 55.1420% ( 719) 00:23:32.006 1927.066 - 1934.868: 56.2331% ( 717) 00:23:32.006 1934.868 - 1942.670: 57.3744% ( 750) 00:23:32.006 1942.670 - 1950.471: 58.4244% ( 690) 00:23:32.006 1950.471 - 1958.273: 59.4409% ( 668) 00:23:32.006 1958.273 - 1966.075: 60.4559% ( 667) 00:23:32.006 1966.075 - 1973.877: 61.4024% ( 622) 00:23:32.006 1973.877 - 1981.679: 62.3368% ( 614) 00:23:32.006 1981.679 - 1989.481: 63.2377% ( 592) 00:23:32.006 1989.481 - 1997.283: 64.1096% ( 573) 00:23:32.006 1997.283 - 2012.887: 65.8003% ( 1111) 00:23:32.006 2012.887 - 2028.490: 67.2505% ( 953) 00:23:32.006 2028.490 - 2044.094: 68.4238% ( 771) 00:23:32.006 2044.094 - 2059.698: 69.6914% ( 833) 00:23:32.006 2059.698 - 2075.302: 70.9225% ( 809) 00:23:32.006 2075.302 - 2090.905: 72.0851% ( 764) 00:23:32.006 2090.905 - 2106.509: 73.3025% ( 800) 00:23:32.006 2106.509 - 2122.113: 74.4651% ( 764) 00:23:32.006 2122.113 - 2137.717: 75.4908% ( 674) 00:23:32.006 2137.717 - 2153.320: 76.4555% ( 634) 00:23:32.006 2153.320 - 2168.924: 77.4553% ( 657) 00:23:32.006 2168.924 - 2184.528: 78.5282% ( 705) 00:23:32.006 2184.528 - 2200.132: 79.6467% ( 735) 00:23:32.006 2200.132 - 2215.736: 80.7088% ( 698) 00:23:32.006 2215.736 - 2231.339: 81.7482% ( 683) 00:23:32.006 2231.339 - 2246.943: 82.7571% ( 663) 00:23:32.006 2246.943 - 2262.547: 83.8954% ( 748) 00:23:32.006 2262.547 - 2278.151: 84.9728% ( 708) 00:23:32.006 2278.151 - 2293.754: 86.0836% ( 730) 00:23:32.006 2293.754 - 2309.358: 87.0804% ( 655) 00:23:32.007 2309.358 - 2324.962: 88.1076% ( 675) 00:23:32.007 2324.962 - 2340.566: 89.0860% ( 643) 00:23:32.007 2340.566 - 2356.169: 89.9915% ( 595) 00:23:32.007 2356.169 - 2371.773: 90.8787% ( 583) 00:23:32.007 2371.773 - 2387.377: 91.6517% ( 508) 00:23:32.007 2387.377 - 2402.981: 92.3593% ( 465) 00:23:32.007 2402.981 - 2418.585: 93.0198% ( 434) 00:23:32.007 2418.585 - 2434.188: 93.5782% ( 367) 00:23:32.007 2434.188 - 2449.792: 94.0256% ( 294) 00:23:32.007 2449.792 - 2465.396: 94.3924% ( 241) 00:23:32.007 2465.396 - 2481.000: 94.7317% ( 223) 00:23:32.007 2481.000 - 2496.603: 95.0300% ( 196) 00:23:32.007 2496.603 - 2512.207: 95.2704% ( 158) 00:23:32.007 2512.207 - 2527.811: 95.4926% ( 146) 00:23:32.007 2527.811 - 2543.415: 95.7300% ( 156) 00:23:32.007 2543.415 - 2559.018: 95.9506% ( 145) 00:23:32.007 2559.018 - 2574.622: 96.1819% ( 152) 00:23:32.007 2574.622 - 2590.226: 96.4239% ( 159) 00:23:32.007 2590.226 - 2605.830: 96.6796% ( 168) 00:23:32.007 2605.830 - 2621.434: 96.8576% ( 117) 00:23:32.007 2621.434 - 2637.037: 97.0615% ( 134) 00:23:32.007 2637.037 - 2652.641: 97.2380% ( 116) 00:23:32.007 2652.641 - 2668.245: 97.3811% ( 94) 00:23:32.007 2668.245 - 2683.849: 97.5135% ( 87) 00:23:32.007 2683.849 - 2699.452: 97.6459% ( 87) 00:23:32.007 2699.452 - 2715.056: 97.7493% ( 68) 00:23:32.007 2715.056 - 2730.660: 97.8528% ( 68) 00:23:32.007 2730.660 - 2746.264: 97.9624% ( 72) 00:23:32.007 2746.264 - 2761.868: 98.0522% ( 59) 00:23:32.007 2761.868 - 2777.471: 98.1359% ( 55) 00:23:32.007 2777.471 - 2793.075: 98.2317% ( 63) 00:23:32.007 2793.075 - 2808.679: 98.3261% ( 62) 00:23:32.007 2808.679 - 2824.283: 98.4022% ( 50) 00:23:32.007 2824.283 - 2839.886: 98.4950% ( 61) 
00:23:32.007 2839.886 - 2855.490: 98.6228% ( 84) 00:23:32.007 2855.490 - 2871.094: 98.7309% ( 71) 00:23:32.007 2871.094 - 2886.698: 98.8328% ( 67) 00:23:32.007 2886.698 - 2902.301: 98.9180% ( 56) 00:23:32.007 2902.301 - 2917.905: 98.9759% ( 38) 00:23:32.007 2917.905 - 2933.509: 99.0656% ( 59) 00:23:32.007 2933.509 - 2949.113: 99.1326% ( 44) 00:23:32.007 2949.113 - 2964.717: 99.2148% ( 54) 00:23:32.007 2964.717 - 2980.320: 99.2909% ( 50) 00:23:32.007 2980.320 - 2995.924: 99.3715% ( 53) 00:23:32.007 2995.924 - 3011.528: 99.4400% ( 45) 00:23:32.007 3011.528 - 3027.132: 99.5039% ( 42) 00:23:32.007 3027.132 - 3042.735: 99.5572% ( 35) 00:23:32.007 3042.735 - 3058.339: 99.5998% ( 28) 00:23:32.007 3058.339 - 3073.943: 99.6348% ( 23) 00:23:32.007 3073.943 - 3089.547: 99.6698% ( 23) 00:23:32.007 3089.547 - 3105.150: 99.7017% ( 21) 00:23:32.007 3105.150 - 3120.754: 99.7215% ( 13) 00:23:32.007 3120.754 - 3136.358: 99.7367% ( 10) 00:23:32.007 3136.358 - 3151.962: 99.7489% ( 8) 00:23:32.007 3151.962 - 3167.566: 99.7580% ( 6) 00:23:32.007 3167.566 - 3183.169: 99.7687% ( 7) 00:23:32.007 3183.169 - 3198.773: 99.7778% ( 6) 00:23:32.007 3198.773 - 3214.377: 99.7854% ( 5) 00:23:32.007 3214.377 - 3229.981: 99.7900% ( 3) 00:23:32.007 3229.981 - 3245.584: 99.7946% ( 3) 00:23:32.007 3245.584 - 3261.188: 99.8007% ( 4) 00:23:32.007 3261.188 - 3276.792: 99.8052% ( 3) 00:23:32.007 3276.792 - 3292.396: 99.8113% ( 4) 00:23:32.007 3292.396 - 3308.000: 99.8235% ( 8) 00:23:32.007 3308.000 - 3323.603: 99.8372% ( 9) 00:23:32.007 3323.603 - 3339.207: 99.8493% ( 8) 00:23:32.007 3339.207 - 3354.811: 99.8630% ( 9) 00:23:32.007 3354.811 - 3370.415: 99.8767% ( 9) 00:23:32.007 3370.415 - 3386.018: 99.8828% ( 4) 00:23:32.007 3386.018 - 3401.622: 99.8889% ( 4) 00:23:32.007 3401.622 - 3417.226: 99.8965% ( 5) 00:23:32.007 3417.226 - 3432.830: 99.9026% ( 4) 00:23:32.007 3432.830 - 3448.433: 99.9072% ( 3) 00:23:32.007 3448.433 - 3464.037: 99.9117% ( 3) 00:23:32.007 3464.037 - 3479.641: 99.9133% ( 1) 00:23:32.007 3542.056 - 3557.660: 99.9148% ( 1) 00:23:32.007 3557.660 - 3573.264: 99.9224% ( 5) 00:23:32.007 3588.867 - 3604.471: 99.9239% ( 1) 00:23:32.007 3604.471 - 3620.075: 99.9315% ( 5) 00:23:32.007 4088.188 - 4119.396: 99.9498% ( 12) 00:23:32.007 4119.396 - 4150.603: 99.9711% ( 14) 00:23:32.007 4150.603 - 4181.811: 99.9909% ( 13) 00:23:32.007 4181.811 - 4213.018: 100.0000% ( 6) 00:23:32.007 00:23:32.007 02:24:19 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:23:32.007 00:23:32.007 real 0m3.621s 00:23:32.007 user 0m2.541s 00:23:32.007 sys 0m1.077s 00:23:32.007 02:24:19 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:32.007 02:24:19 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:23:32.007 ************************************ 00:23:32.007 END TEST nvme_perf 00:23:32.007 ************************************ 00:23:32.007 02:24:19 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:23:32.007 02:24:19 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:32.007 02:24:19 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:32.007 02:24:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:32.007 ************************************ 00:23:32.007 START TEST nvme_hello_world 00:23:32.007 ************************************ 00:23:32.007 02:24:19 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:23:32.572 EAL: TSC 
is not safe to use in SMP mode 00:23:32.572 EAL: TSC is not invariant 00:23:32.572 [2024-05-15 02:24:20.417827] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:32.572 Initializing NVMe Controllers 00:23:32.572 Attaching to 0000:00:10.0 00:23:32.572 Attached to 0000:00:10.0 00:23:32.572 Namespace ID: 1 size: 5GB 00:23:32.572 Initialization complete. 00:23:32.572 INFO: using host memory buffer for IO 00:23:32.572 Hello world! 00:23:32.572 00:23:32.572 real 0m0.541s 00:23:32.572 user 0m0.018s 00:23:32.572 sys 0m0.522s 00:23:32.572 ************************************ 00:23:32.572 END TEST nvme_hello_world 00:23:32.572 ************************************ 00:23:32.572 02:24:20 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:32.572 02:24:20 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:32.572 02:24:20 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:23:32.572 02:24:20 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:32.572 02:24:20 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:32.572 02:24:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:32.572 ************************************ 00:23:32.572 START TEST nvme_sgl 00:23:32.572 ************************************ 00:23:32.573 02:24:20 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:23:33.138 EAL: TSC is not safe to use in SMP mode 00:23:33.138 EAL: TSC is not invariant 00:23:33.138 [2024-05-15 02:24:20.982865] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:33.138 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:23:33.138 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:23:33.138 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:23:33.138 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:23:33.138 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:23:33.138 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:23:33.138 NVMe Readv/Writev Request test 00:23:33.138 Attaching to 0000:00:10.0 00:23:33.138 Attached to 0000:00:10.0 00:23:33.138 0000:00:10.0: build_io_request_2 test passed 00:23:33.138 0000:00:10.0: build_io_request_4 test passed 00:23:33.138 0000:00:10.0: build_io_request_5 test passed 00:23:33.138 0000:00:10.0: build_io_request_6 test passed 00:23:33.138 0000:00:10.0: build_io_request_7 test passed 00:23:33.138 0000:00:10.0: build_io_request_10 test passed 00:23:33.138 Cleaning up... 
00:23:33.138 00:23:33.138 real 0m0.549s 00:23:33.138 user 0m0.015s 00:23:33.138 sys 0m0.533s 00:23:33.138 02:24:21 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:33.138 ************************************ 00:23:33.138 END TEST nvme_sgl 00:23:33.138 ************************************ 00:23:33.138 02:24:21 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:23:33.138 02:24:21 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:23:33.138 02:24:21 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:33.138 02:24:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:33.138 02:24:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:33.138 ************************************ 00:23:33.138 START TEST nvme_e2edp 00:23:33.138 ************************************ 00:23:33.138 02:24:21 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:23:33.707 EAL: TSC is not safe to use in SMP mode 00:23:33.707 EAL: TSC is not invariant 00:23:33.707 [2024-05-15 02:24:21.583599] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:33.707 NVMe Write/Read with End-to-End data protection test 00:23:33.707 Attaching to 0000:00:10.0 00:23:33.707 Attached to 0000:00:10.0 00:23:33.707 Cleaning up... 00:23:33.707 00:23:33.707 real 0m0.539s 00:23:33.707 user 0m0.025s 00:23:33.707 sys 0m0.513s 00:23:33.707 02:24:21 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:33.707 ************************************ 00:23:33.707 END TEST nvme_e2edp 00:23:33.707 ************************************ 00:23:33.707 02:24:21 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:23:33.707 02:24:21 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:23:33.707 02:24:21 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:33.707 02:24:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:33.707 02:24:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:33.707 ************************************ 00:23:33.707 START TEST nvme_reserve 00:23:33.707 ************************************ 00:23:33.707 02:24:21 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:23:34.275 EAL: TSC is not safe to use in SMP mode 00:23:34.275 EAL: TSC is not invariant 00:23:34.275 [2024-05-15 02:24:22.160080] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:34.275 ===================================================== 00:23:34.275 NVMe Controller at PCI bus 0, device 16, function 0 00:23:34.275 ===================================================== 00:23:34.275 Reservations: Not Supported 00:23:34.275 Reservation test passed 00:23:34.275 00:23:34.275 real 0m0.550s 00:23:34.275 user 0m0.003s 00:23:34.275 sys 0m0.546s 00:23:34.275 02:24:22 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:34.275 02:24:22 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:23:34.275 ************************************ 00:23:34.275 END TEST nvme_reserve 00:23:34.275 ************************************ 00:23:34.275 02:24:22 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:23:34.275 02:24:22 nvme -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:34.275 02:24:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:34.275 02:24:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:34.275 ************************************ 00:23:34.275 START TEST nvme_err_injection 00:23:34.275 ************************************ 00:23:34.275 02:24:22 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:23:34.854 EAL: TSC is not safe to use in SMP mode 00:23:34.854 EAL: TSC is not invariant 00:23:34.854 [2024-05-15 02:24:22.746583] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:34.854 NVMe Error Injection test 00:23:34.854 Attaching to 0000:00:10.0 00:23:34.854 Attached to 0000:00:10.0 00:23:34.854 0000:00:10.0: get features failed as expected 00:23:34.854 0000:00:10.0: get features successfully as expected 00:23:34.854 0000:00:10.0: read failed as expected 00:23:34.854 0000:00:10.0: read successfully as expected 00:23:34.854 Cleaning up... 00:23:34.854 00:23:34.854 real 0m0.548s 00:23:34.854 user 0m0.018s 00:23:34.854 sys 0m0.529s 00:23:34.854 02:24:22 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:34.854 02:24:22 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:23:34.854 ************************************ 00:23:34.854 END TEST nvme_err_injection 00:23:34.854 ************************************ 00:23:34.854 02:24:22 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:23:34.854 02:24:22 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:23:34.854 02:24:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:34.854 02:24:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:34.854 ************************************ 00:23:34.854 START TEST nvme_overhead 00:23:34.854 ************************************ 00:23:34.854 02:24:22 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:23:35.421 EAL: TSC is not safe to use in SMP mode 00:23:35.421 EAL: TSC is not invariant 00:23:35.421 [2024-05-15 02:24:23.377841] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:36.797 Initializing NVMe Controllers 00:23:36.797 Attaching to 0000:00:10.0 00:23:36.797 Attached to 0000:00:10.0 00:23:36.797 Initialization complete. Launching workers. 
00:23:36.797 submit (in ns) avg, min, max = 10359.6, 7972.4, 88681.7 00:23:36.797 complete (in ns) avg, min, max = 7365.7, 5726.7, 143840.6 00:23:36.797 00:23:36.797 Submit histogram 00:23:36.797 ================ 00:23:36.797 Range in us Cumulative Count 00:23:36.797 7.924 - 7.985: 0.0083% ( 1) 00:23:36.797 7.985 - 8.046: 0.0167% ( 1) 00:23:36.797 8.107 - 8.168: 0.0250% ( 1) 00:23:36.797 8.168 - 8.229: 0.0584% ( 4) 00:23:36.797 8.229 - 8.290: 0.2168% ( 19) 00:23:36.797 8.290 - 8.350: 0.6170% ( 48) 00:23:36.797 8.350 - 8.411: 1.2591% ( 77) 00:23:36.797 8.411 - 8.472: 1.8094% ( 66) 00:23:36.797 8.472 - 8.533: 2.6182% ( 97) 00:23:36.797 8.533 - 8.594: 3.8022% ( 142) 00:23:36.797 8.594 - 8.655: 5.1613% ( 163) 00:23:36.797 8.655 - 8.716: 6.0702% ( 109) 00:23:36.797 8.716 - 8.777: 6.5788% ( 61) 00:23:36.797 8.777 - 8.838: 6.8457% ( 32) 00:23:36.797 8.838 - 8.899: 7.0541% ( 25) 00:23:36.797 8.899 - 8.960: 7.5961% ( 65) 00:23:36.797 8.960 - 9.021: 8.2381% ( 77) 00:23:36.797 9.021 - 9.082: 9.5889% ( 162) 00:23:36.797 9.082 - 9.143: 12.5657% ( 357) 00:23:36.797 9.143 - 9.204: 18.7276% ( 739) 00:23:36.797 9.204 - 9.265: 27.2326% ( 1020) 00:23:36.797 9.265 - 9.326: 33.9531% ( 806) 00:23:36.797 9.326 - 9.387: 37.5302% ( 429) 00:23:36.797 9.387 - 9.448: 39.5898% ( 247) 00:23:36.797 9.448 - 9.509: 40.8989% ( 157) 00:23:36.797 9.509 - 9.570: 41.9828% ( 130) 00:23:36.797 9.570 - 9.630: 43.8506% ( 224) 00:23:36.797 9.630 - 9.691: 48.2198% ( 524) 00:23:36.797 9.691 - 9.752: 54.8236% ( 792) 00:23:36.797 9.752 - 9.813: 61.1023% ( 753) 00:23:36.797 9.813 - 9.874: 65.3214% ( 506) 00:23:36.797 9.874 - 9.935: 67.9480% ( 315) 00:23:36.797 9.935 - 9.996: 70.0992% ( 258) 00:23:36.797 9.996 - 10.057: 72.2588% ( 259) 00:23:36.797 10.057 - 10.118: 74.0182% ( 211) 00:23:36.797 10.118 - 10.179: 75.4440% ( 171) 00:23:36.797 10.179 - 10.240: 76.1277% ( 82) 00:23:36.797 10.240 - 10.301: 76.7031% ( 69) 00:23:36.797 10.301 - 10.362: 77.1283% ( 51) 00:23:36.797 10.362 - 10.423: 77.4702% ( 41) 00:23:36.797 10.423 - 10.484: 77.6286% ( 19) 00:23:36.797 10.484 - 10.545: 77.7620% ( 16) 00:23:36.797 10.545 - 10.606: 77.8537% ( 11) 00:23:36.797 10.606 - 10.667: 77.9621% ( 13) 00:23:36.797 10.667 - 10.728: 78.2373% ( 33) 00:23:36.797 10.728 - 10.789: 78.4958% ( 31) 00:23:36.797 10.789 - 10.849: 78.7793% ( 34) 00:23:36.797 10.849 - 10.910: 79.0961% ( 38) 00:23:36.797 10.910 - 10.971: 79.2712% ( 21) 00:23:36.797 10.971 - 11.032: 79.4297% ( 19) 00:23:36.797 11.032 - 11.093: 79.5964% ( 20) 00:23:36.797 11.093 - 11.154: 79.6882% ( 11) 00:23:36.797 11.154 - 11.215: 79.7632% ( 9) 00:23:36.797 11.215 - 11.276: 79.8549% ( 11) 00:23:36.797 11.276 - 11.337: 79.8883% ( 4) 00:23:36.797 11.337 - 11.398: 79.9550% ( 8) 00:23:36.797 11.398 - 11.459: 80.0050% ( 6) 00:23:36.797 11.459 - 11.520: 80.0133% ( 1) 00:23:36.797 11.520 - 11.581: 80.0717% ( 7) 00:23:36.797 11.581 - 11.642: 80.1051% ( 4) 00:23:36.797 11.642 - 11.703: 80.1468% ( 5) 00:23:36.797 11.703 - 11.764: 80.1718% ( 3) 00:23:36.797 11.764 - 11.825: 80.2135% ( 5) 00:23:36.797 11.825 - 11.886: 80.2385% ( 3) 00:23:36.797 11.886 - 11.947: 80.2468% ( 1) 00:23:36.797 12.008 - 12.069: 80.2551% ( 1) 00:23:36.797 12.069 - 12.129: 80.2635% ( 1) 00:23:36.797 12.190 - 12.251: 80.2802% ( 2) 00:23:36.797 12.434 - 12.495: 80.2968% ( 2) 00:23:36.797 12.495 - 12.556: 80.3219% ( 3) 00:23:36.797 12.556 - 12.617: 80.3635% ( 5) 00:23:36.797 12.617 - 12.678: 80.5053% ( 17) 00:23:36.797 12.678 - 12.739: 81.2724% ( 92) 00:23:36.797 12.739 - 12.800: 83.0901% ( 218) 00:23:36.797 12.800 - 12.861: 85.9585% ( 344) 
00:23:36.797 12.861 - 12.922: 89.2354% ( 393) 00:23:36.797 12.922 - 12.983: 91.3950% ( 259) 00:23:36.797 12.983 - 13.044: 92.7291% ( 160) 00:23:36.797 13.044 - 13.105: 93.4962% ( 92) 00:23:36.797 13.105 - 13.166: 94.0882% ( 71) 00:23:36.797 13.166 - 13.227: 94.4634% ( 45) 00:23:36.797 13.227 - 13.288: 94.9220% ( 55) 00:23:36.797 13.288 - 13.349: 95.2973% ( 45) 00:23:36.797 13.349 - 13.409: 95.7475% ( 54) 00:23:36.797 13.409 - 13.470: 96.0644% ( 38) 00:23:36.797 13.470 - 13.531: 96.3479% ( 34) 00:23:36.797 13.531 - 13.592: 96.4980% ( 18) 00:23:36.797 13.592 - 13.653: 96.7064% ( 25) 00:23:36.797 13.653 - 13.714: 96.8899% ( 22) 00:23:36.797 13.714 - 13.775: 97.0650% ( 21) 00:23:36.797 13.775 - 13.836: 97.1817% ( 14) 00:23:36.797 13.836 - 13.897: 97.2901% ( 13) 00:23:36.797 13.897 - 13.958: 97.3651% ( 9) 00:23:36.797 13.958 - 14.019: 97.4402% ( 9) 00:23:36.797 14.019 - 14.080: 97.4735% ( 4) 00:23:36.797 14.080 - 14.141: 97.4819% ( 1) 00:23:36.797 14.141 - 14.202: 97.5069% ( 3) 00:23:36.797 14.202 - 14.263: 97.5319% ( 3) 00:23:36.798 14.263 - 14.324: 97.5486% ( 2) 00:23:36.798 14.324 - 14.385: 97.5652% ( 2) 00:23:36.798 14.385 - 14.446: 97.5986% ( 4) 00:23:36.798 14.446 - 14.507: 97.6236% ( 3) 00:23:36.798 14.507 - 14.568: 97.6486% ( 3) 00:23:36.798 14.568 - 14.629: 97.6653% ( 2) 00:23:36.798 14.629 - 14.689: 97.6820% ( 2) 00:23:36.798 14.689 - 14.750: 97.7070% ( 3) 00:23:36.798 14.750 - 14.811: 97.7403% ( 4) 00:23:36.798 14.811 - 14.872: 97.7570% ( 2) 00:23:36.798 14.872 - 14.933: 97.7820% ( 3) 00:23:36.798 14.933 - 14.994: 97.7904% ( 1) 00:23:36.798 14.994 - 15.055: 97.8071% ( 2) 00:23:36.798 15.055 - 15.116: 97.8321% ( 3) 00:23:36.798 15.116 - 15.177: 97.8571% ( 3) 00:23:36.798 15.177 - 15.238: 97.8988% ( 5) 00:23:36.798 15.238 - 15.299: 97.9321% ( 4) 00:23:36.798 15.299 - 15.360: 97.9988% ( 8) 00:23:36.798 15.360 - 15.421: 98.0405% ( 5) 00:23:36.798 15.421 - 15.482: 98.0572% ( 2) 00:23:36.798 15.482 - 15.543: 98.0822% ( 3) 00:23:36.798 15.543 - 15.604: 98.1156% ( 4) 00:23:36.798 15.604 - 15.726: 98.1406% ( 3) 00:23:36.798 15.726 - 15.848: 98.1989% ( 7) 00:23:36.798 15.848 - 15.969: 98.2156% ( 2) 00:23:36.798 15.969 - 16.091: 98.2406% ( 3) 00:23:36.798 16.091 - 16.213: 98.2490% ( 1) 00:23:36.798 16.335 - 16.457: 98.2740% ( 3) 00:23:36.798 16.457 - 16.579: 98.3157% ( 5) 00:23:36.798 16.579 - 16.701: 98.3407% ( 3) 00:23:36.798 16.701 - 16.823: 98.3490% ( 1) 00:23:36.798 16.823 - 16.945: 98.3741% ( 3) 00:23:36.798 16.945 - 17.067: 98.3907% ( 2) 00:23:36.798 17.067 - 17.189: 98.3991% ( 1) 00:23:36.798 17.189 - 17.310: 98.4074% ( 1) 00:23:36.798 17.310 - 17.432: 98.4241% ( 2) 00:23:36.798 17.432 - 17.554: 98.4408% ( 2) 00:23:36.798 17.676 - 17.798: 98.4491% ( 1) 00:23:36.798 17.798 - 17.920: 98.4574% ( 1) 00:23:36.798 17.920 - 18.042: 98.4908% ( 4) 00:23:36.798 18.042 - 18.164: 98.5241% ( 4) 00:23:36.798 18.164 - 18.286: 98.5492% ( 3) 00:23:36.798 18.286 - 18.408: 98.5825% ( 4) 00:23:36.798 18.408 - 18.529: 98.5992% ( 2) 00:23:36.798 18.529 - 18.651: 98.6242% ( 3) 00:23:36.798 18.651 - 18.773: 98.6409% ( 2) 00:23:36.798 18.773 - 18.895: 98.6492% ( 1) 00:23:36.798 18.895 - 19.017: 98.6576% ( 1) 00:23:36.798 19.017 - 19.139: 98.6659% ( 1) 00:23:36.798 19.139 - 19.261: 98.6742% ( 1) 00:23:36.798 19.261 - 19.383: 98.6992% ( 3) 00:23:36.798 19.383 - 19.505: 98.7243% ( 3) 00:23:36.798 19.505 - 19.627: 98.7826% ( 7) 00:23:36.798 19.627 - 19.749: 98.8743% ( 11) 00:23:36.798 19.749 - 19.870: 98.9577% ( 10) 00:23:36.798 19.870 - 19.992: 99.0328% ( 9) 00:23:36.798 19.992 - 20.114: 99.0578% ( 3) 00:23:36.798 
20.114 - 20.236: 99.0745% ( 2) 00:23:36.798 20.236 - 20.358: 99.1162% ( 5) 00:23:36.798 20.358 - 20.480: 99.1745% ( 7) 00:23:36.798 20.480 - 20.602: 99.2496% ( 9) 00:23:36.798 20.602 - 20.724: 99.3913% ( 17) 00:23:36.798 20.724 - 20.846: 99.4497% ( 7) 00:23:36.798 20.846 - 20.968: 99.4830% ( 4) 00:23:36.798 20.968 - 21.089: 99.5164% ( 4) 00:23:36.798 21.089 - 21.211: 99.5331% ( 2) 00:23:36.798 21.211 - 21.333: 99.5664% ( 4) 00:23:36.798 21.333 - 21.455: 99.5831% ( 2) 00:23:36.798 21.455 - 21.577: 99.6081% ( 3) 00:23:36.798 21.699 - 21.821: 99.6248% ( 2) 00:23:36.798 21.943 - 22.065: 99.6331% ( 1) 00:23:36.798 22.065 - 22.187: 99.6665% ( 4) 00:23:36.798 22.187 - 22.309: 99.6831% ( 2) 00:23:36.798 22.918 - 23.040: 99.6915% ( 1) 00:23:36.798 23.040 - 23.162: 99.6998% ( 1) 00:23:36.798 23.649 - 23.771: 99.7082% ( 1) 00:23:36.798 24.625 - 24.747: 99.7165% ( 1) 00:23:36.798 25.112 - 25.234: 99.7248% ( 1) 00:23:36.798 25.234 - 25.356: 99.7332% ( 1) 00:23:36.798 25.600 - 25.722: 99.7582% ( 3) 00:23:36.798 25.722 - 25.844: 99.7832% ( 3) 00:23:36.798 25.844 - 25.966: 99.7999% ( 2) 00:23:36.798 25.966 - 26.088: 99.8249% ( 3) 00:23:36.798 26.088 - 26.209: 99.8583% ( 4) 00:23:36.798 26.209 - 26.331: 99.9083% ( 6) 00:23:36.798 26.331 - 26.453: 99.9500% ( 5) 00:23:36.798 26.575 - 26.697: 99.9583% ( 1) 00:23:36.798 26.819 - 26.941: 99.9666% ( 1) 00:23:36.798 27.307 - 27.429: 99.9750% ( 1) 00:23:36.798 30.476 - 30.598: 99.9833% ( 1) 00:23:36.798 32.183 - 32.427: 99.9917% ( 1) 00:23:36.798 88.259 - 88.746: 100.0000% ( 1) 00:23:36.798 00:23:36.798 Complete histogram 00:23:36.798 ================== 00:23:36.798 Range in us Cumulative Count 00:23:36.798 5.699 - 5.730: 0.0167% ( 2) 00:23:36.798 5.730 - 5.760: 0.1668% ( 18) 00:23:36.798 5.760 - 5.790: 1.0840% ( 110) 00:23:36.798 5.790 - 5.821: 2.3597% ( 153) 00:23:36.798 5.821 - 5.851: 2.9517% ( 71) 00:23:36.798 5.851 - 5.882: 3.1185% ( 20) 00:23:36.798 5.882 - 5.912: 3.2102% ( 11) 00:23:36.798 5.912 - 5.943: 3.2602% ( 6) 00:23:36.798 5.943 - 5.973: 3.2936% ( 4) 00:23:36.798 5.973 - 6.004: 3.3103% ( 2) 00:23:36.798 6.034 - 6.065: 3.3186% ( 1) 00:23:36.798 6.065 - 6.095: 3.3353% ( 2) 00:23:36.798 6.095 - 6.126: 3.5020% ( 20) 00:23:36.798 6.126 - 6.156: 3.9273% ( 51) 00:23:36.798 6.156 - 6.187: 4.2108% ( 34) 00:23:36.798 6.187 - 6.217: 4.5026% ( 35) 00:23:36.798 6.217 - 6.248: 5.0529% ( 66) 00:23:36.798 6.248 - 6.278: 7.9713% ( 350) 00:23:36.798 6.278 - 6.309: 15.8092% ( 940) 00:23:36.798 6.309 - 6.339: 24.2975% ( 1018) 00:23:36.798 6.339 - 6.370: 29.6256% ( 639) 00:23:36.798 6.370 - 6.400: 31.9019% ( 273) 00:23:36.798 6.400 - 6.430: 33.0693% ( 140) 00:23:36.798 6.430 - 6.461: 33.7780% ( 85) 00:23:36.798 6.461 - 6.491: 34.1449% ( 44) 00:23:36.798 6.491 - 6.522: 34.3867% ( 29) 00:23:36.798 6.522 - 6.552: 34.5035% ( 14) 00:23:36.798 6.552 - 6.583: 34.6452% ( 17) 00:23:36.798 6.583 - 6.613: 34.7870% ( 17) 00:23:36.798 6.613 - 6.644: 34.8787% ( 11) 00:23:36.798 6.644 - 6.674: 36.2545% ( 165) 00:23:36.798 6.674 - 6.705: 41.8411% ( 670) 00:23:36.798 6.705 - 6.735: 49.8124% ( 956) 00:23:36.798 6.735 - 6.766: 56.9582% ( 857) 00:23:36.798 6.766 - 6.796: 60.8188% ( 463) 00:23:36.798 6.796 - 6.827: 62.9867% ( 260) 00:23:36.798 6.827 - 6.857: 64.1958% ( 145) 00:23:36.798 6.857 - 6.888: 64.8211% ( 75) 00:23:36.798 6.888 - 6.918: 65.6466% ( 99) 00:23:36.798 6.918 - 6.949: 67.0308% ( 166) 00:23:36.798 6.949 - 6.979: 68.9152% ( 226) 00:23:36.798 6.979 - 7.010: 70.2827% ( 164) 00:23:36.798 7.010 - 7.040: 71.1332% ( 102) 00:23:36.798 7.040 - 7.070: 71.8336% ( 84) 00:23:36.798 7.070 
- 7.101: 72.3505% ( 62) 00:23:36.798 7.101 - 7.131: 73.4012% ( 126) 00:23:36.798 7.131 - 7.162: 74.1016% ( 84) 00:23:36.798 7.162 - 7.192: 74.7353% ( 76) 00:23:36.798 7.192 - 7.223: 75.1855% ( 54) 00:23:36.798 7.223 - 7.253: 75.4774% ( 35) 00:23:36.798 7.253 - 7.284: 75.7358% ( 31) 00:23:36.798 7.284 - 7.314: 75.8776% ( 17) 00:23:36.798 7.314 - 7.345: 76.0527% ( 21) 00:23:36.798 7.345 - 7.375: 76.3279% ( 33) 00:23:36.798 7.375 - 7.406: 76.6447% ( 38) 00:23:36.798 7.406 - 7.436: 76.9115% ( 32) 00:23:36.798 7.436 - 7.467: 77.2701% ( 43) 00:23:36.798 7.467 - 7.497: 77.6620% ( 47) 00:23:36.798 7.497 - 7.528: 78.0289% ( 44) 00:23:36.798 7.528 - 7.558: 78.2040% ( 21) 00:23:36.798 7.558 - 7.589: 78.3207% ( 14) 00:23:36.798 7.589 - 7.619: 78.4207% ( 12) 00:23:36.798 7.619 - 7.650: 78.4958% ( 9) 00:23:36.798 7.650 - 7.680: 78.5625% ( 8) 00:23:36.798 7.680 - 7.710: 78.6626% ( 12) 00:23:36.798 7.710 - 7.741: 78.8377% ( 21) 00:23:36.798 7.741 - 7.771: 78.9544% ( 14) 00:23:36.798 7.771 - 7.802: 79.0211% ( 8) 00:23:36.798 7.802 - 7.863: 79.1462% ( 15) 00:23:36.798 7.863 - 7.924: 79.2379% ( 11) 00:23:36.798 7.924 - 7.985: 79.2796% ( 5) 00:23:36.798 7.985 - 8.046: 79.3213% ( 5) 00:23:36.798 8.046 - 8.107: 79.3796% ( 7) 00:23:36.798 8.107 - 8.168: 79.3963% ( 2) 00:23:36.798 8.168 - 8.229: 79.4297% ( 4) 00:23:36.798 8.229 - 8.290: 79.4797% ( 6) 00:23:36.798 8.290 - 8.350: 79.4880% ( 1) 00:23:36.798 8.350 - 8.411: 79.5297% ( 5) 00:23:36.798 8.411 - 8.472: 79.5381% ( 1) 00:23:36.798 8.472 - 8.533: 79.5714% ( 4) 00:23:36.798 8.533 - 8.594: 79.6048% ( 4) 00:23:36.798 8.594 - 8.655: 79.6548% ( 6) 00:23:36.798 8.655 - 8.716: 79.6798% ( 3) 00:23:36.798 8.716 - 8.777: 79.7132% ( 4) 00:23:36.798 8.777 - 8.838: 79.7215% ( 1) 00:23:36.798 8.838 - 8.899: 79.7382% ( 2) 00:23:36.798 8.899 - 8.960: 79.7549% ( 2) 00:23:36.798 8.960 - 9.021: 79.9383% ( 22) 00:23:36.798 9.021 - 9.082: 83.4237% ( 418) 00:23:36.798 9.082 - 9.143: 90.3360% ( 829) 00:23:36.799 9.143 - 9.204: 93.5462% ( 385) 00:23:36.799 9.204 - 9.265: 95.0221% ( 177) 00:23:36.799 9.265 - 9.326: 95.7892% ( 92) 00:23:36.799 9.326 - 9.387: 96.0477% ( 31) 00:23:36.799 9.387 - 9.448: 96.1978% ( 18) 00:23:36.799 9.448 - 9.509: 96.2812% ( 10) 00:23:36.799 9.509 - 9.570: 96.2895% ( 1) 00:23:36.799 9.570 - 9.630: 96.3145% ( 3) 00:23:36.799 9.630 - 9.691: 96.3395% ( 3) 00:23:36.799 9.691 - 9.752: 96.3729% ( 4) 00:23:36.799 9.752 - 9.813: 96.3896% ( 2) 00:23:36.799 9.813 - 9.874: 96.4146% ( 3) 00:23:36.799 9.874 - 9.935: 96.4229% ( 1) 00:23:36.799 9.935 - 9.996: 96.4396% ( 2) 00:23:36.799 9.996 - 10.057: 96.4646% ( 3) 00:23:36.799 10.057 - 10.118: 96.4896% ( 3) 00:23:36.799 10.118 - 10.179: 96.5647% ( 9) 00:23:36.799 10.179 - 10.240: 96.7231% ( 19) 00:23:36.799 10.240 - 10.301: 96.8065% ( 10) 00:23:36.799 10.301 - 10.362: 96.8565% ( 6) 00:23:36.799 10.362 - 10.423: 96.9232% ( 8) 00:23:36.799 10.484 - 10.545: 96.9399% ( 2) 00:23:36.799 10.545 - 10.606: 96.9899% ( 6) 00:23:36.799 10.606 - 10.667: 97.0149% ( 3) 00:23:36.799 10.667 - 10.728: 97.0483% ( 4) 00:23:36.799 10.728 - 10.789: 97.0900% ( 5) 00:23:36.799 10.789 - 10.849: 97.1066% ( 2) 00:23:36.799 10.849 - 10.910: 97.1400% ( 4) 00:23:36.799 10.910 - 10.971: 97.1817% ( 5) 00:23:36.799 10.971 - 11.032: 97.2317% ( 6) 00:23:36.799 11.032 - 11.093: 97.2401% ( 1) 00:23:36.799 11.093 - 11.154: 97.2734% ( 4) 00:23:36.799 11.154 - 11.215: 97.2984% ( 3) 00:23:36.799 11.215 - 11.276: 97.3401% ( 5) 00:23:36.799 11.276 - 11.337: 97.3568% ( 2) 00:23:36.799 11.337 - 11.398: 97.3901% ( 4) 00:23:36.799 11.398 - 11.459: 97.4068% ( 2) 
00:23:36.799 11.459 - 11.520: 97.4318% ( 3) 00:23:36.799 11.520 - 11.581: 97.4485% ( 2) 00:23:36.799 11.581 - 11.642: 97.4652% ( 2) 00:23:36.799 11.642 - 11.703: 97.4819% ( 2) 00:23:36.799 11.703 - 11.764: 97.4985% ( 2) 00:23:36.799 11.825 - 11.886: 97.5319% ( 4) 00:23:36.799 11.886 - 11.947: 97.5402% ( 1) 00:23:36.799 11.947 - 12.008: 97.5652% ( 3) 00:23:36.799 12.008 - 12.069: 97.5819% ( 2) 00:23:36.799 12.069 - 12.129: 97.6069% ( 3) 00:23:36.799 12.129 - 12.190: 97.6236% ( 2) 00:23:36.799 12.251 - 12.312: 97.6320% ( 1) 00:23:36.799 12.373 - 12.434: 97.6486% ( 2) 00:23:36.799 12.434 - 12.495: 97.6653% ( 2) 00:23:36.799 12.495 - 12.556: 97.6820% ( 2) 00:23:36.799 12.556 - 12.617: 97.6903% ( 1) 00:23:36.799 12.617 - 12.678: 97.7070% ( 2) 00:23:36.799 12.678 - 12.739: 97.7237% ( 2) 00:23:36.799 12.739 - 12.800: 97.7487% ( 3) 00:23:36.799 12.800 - 12.861: 97.7737% ( 3) 00:23:36.799 12.861 - 12.922: 97.7904% ( 2) 00:23:36.799 12.922 - 12.983: 97.8154% ( 3) 00:23:36.799 12.983 - 13.044: 97.8321% ( 2) 00:23:36.799 13.044 - 13.105: 97.8654% ( 4) 00:23:36.799 13.105 - 13.166: 97.8738% ( 1) 00:23:36.799 13.166 - 13.227: 97.8821% ( 1) 00:23:36.799 13.227 - 13.288: 97.8904% ( 1) 00:23:36.799 13.288 - 13.349: 97.9155% ( 3) 00:23:36.799 13.349 - 13.409: 97.9238% ( 1) 00:23:36.799 13.409 - 13.470: 97.9405% ( 2) 00:23:36.799 13.470 - 13.531: 97.9488% ( 1) 00:23:36.799 13.531 - 13.592: 97.9738% ( 3) 00:23:36.799 13.592 - 13.653: 97.9905% ( 2) 00:23:36.799 13.653 - 13.714: 97.9988% ( 1) 00:23:36.799 13.714 - 13.775: 98.0072% ( 1) 00:23:36.799 13.775 - 13.836: 98.0155% ( 1) 00:23:36.799 13.836 - 13.897: 98.0238% ( 1) 00:23:36.799 13.897 - 13.958: 98.0405% ( 2) 00:23:36.799 13.958 - 14.019: 98.0489% ( 1) 00:23:36.799 14.019 - 14.080: 98.0655% ( 2) 00:23:36.799 14.385 - 14.446: 98.0822% ( 2) 00:23:36.799 14.446 - 14.507: 98.0906% ( 1) 00:23:36.799 14.507 - 14.568: 98.0989% ( 1) 00:23:36.799 14.568 - 14.629: 98.1156% ( 2) 00:23:36.799 14.629 - 14.689: 98.1239% ( 1) 00:23:36.799 14.689 - 14.750: 98.1322% ( 1) 00:23:36.799 14.750 - 14.811: 98.1489% ( 2) 00:23:36.799 14.933 - 14.994: 98.1573% ( 1) 00:23:36.799 14.994 - 15.055: 98.1656% ( 1) 00:23:36.799 15.055 - 15.116: 98.1739% ( 1) 00:23:36.799 15.116 - 15.177: 98.1906% ( 2) 00:23:36.799 15.177 - 15.238: 98.2073% ( 2) 00:23:36.799 15.238 - 15.299: 98.2240% ( 2) 00:23:36.799 15.360 - 15.421: 98.2406% ( 2) 00:23:36.799 15.421 - 15.482: 98.2657% ( 3) 00:23:36.799 15.604 - 15.726: 98.2823% ( 2) 00:23:36.799 15.726 - 15.848: 98.2907% ( 1) 00:23:36.799 15.969 - 16.091: 98.2990% ( 1) 00:23:36.799 16.091 - 16.213: 98.3324% ( 4) 00:23:36.799 16.335 - 16.457: 98.3490% ( 2) 00:23:36.799 16.457 - 16.579: 98.3657% ( 2) 00:23:36.799 16.579 - 16.701: 98.3907% ( 3) 00:23:36.799 16.701 - 16.823: 98.4741% ( 10) 00:23:36.799 16.823 - 16.945: 98.5742% ( 12) 00:23:36.799 16.945 - 17.067: 98.6826% ( 13) 00:23:36.799 17.067 - 17.189: 98.6992% ( 2) 00:23:36.799 17.189 - 17.310: 98.7493% ( 6) 00:23:36.799 17.310 - 17.432: 98.7993% ( 6) 00:23:36.799 17.432 - 17.554: 98.8410% ( 5) 00:23:36.799 17.554 - 17.676: 98.9827% ( 17) 00:23:36.799 17.676 - 17.798: 99.1245% ( 17) 00:23:36.799 17.798 - 17.920: 99.2079% ( 10) 00:23:36.799 17.920 - 18.042: 99.2913% ( 10) 00:23:36.799 18.042 - 18.164: 99.3830% ( 11) 00:23:36.799 18.164 - 18.286: 99.4497% ( 8) 00:23:36.799 18.286 - 18.408: 99.5247% ( 9) 00:23:36.799 18.529 - 18.651: 99.5331% ( 1) 00:23:36.799 18.651 - 18.773: 99.5414% ( 1) 00:23:36.799 18.773 - 18.895: 99.5581% ( 2) 00:23:36.799 18.895 - 19.017: 99.5664% ( 1) 00:23:36.799 19.261 - 
19.383: 99.5831% ( 2) 00:23:36.799 19.627 - 19.749: 99.5914% ( 1) 00:23:36.799 19.749 - 19.870: 99.6081% ( 2) 00:23:36.799 19.992 - 20.114: 99.6248% ( 2) 00:23:36.799 20.114 - 20.236: 99.6331% ( 1) 00:23:36.799 20.236 - 20.358: 99.6415% ( 1) 00:23:36.799 20.358 - 20.480: 99.6581% ( 2) 00:23:36.799 20.724 - 20.846: 99.6665% ( 1) 00:23:36.799 20.968 - 21.089: 99.6748% ( 1) 00:23:36.799 21.089 - 21.211: 99.6831% ( 1) 00:23:36.799 22.065 - 22.187: 99.6998% ( 2) 00:23:36.799 22.309 - 22.430: 99.7082% ( 1) 00:23:36.799 22.430 - 22.552: 99.7248% ( 2) 00:23:36.799 22.552 - 22.674: 99.7332% ( 1) 00:23:36.799 22.796 - 22.918: 99.7499% ( 2) 00:23:36.799 22.918 - 23.040: 99.7749% ( 3) 00:23:36.799 23.040 - 23.162: 99.7915% ( 2) 00:23:36.799 23.162 - 23.284: 99.8166% ( 3) 00:23:36.799 23.284 - 23.406: 99.8499% ( 4) 00:23:36.799 23.406 - 23.528: 99.8666% ( 2) 00:23:36.799 23.528 - 23.649: 99.8749% ( 1) 00:23:36.799 23.649 - 23.771: 99.8833% ( 1) 00:23:36.799 23.771 - 23.893: 99.8916% ( 1) 00:23:36.799 23.893 - 24.015: 99.9083% ( 2) 00:23:36.799 24.015 - 24.137: 99.9166% ( 1) 00:23:36.799 24.137 - 24.259: 99.9250% ( 1) 00:23:36.799 24.503 - 24.625: 99.9333% ( 1) 00:23:36.799 25.966 - 26.088: 99.9416% ( 1) 00:23:36.799 26.088 - 26.209: 99.9500% ( 1) 00:23:36.799 27.672 - 27.794: 99.9583% ( 1) 00:23:36.799 28.160 - 28.282: 99.9666% ( 1) 00:23:36.799 29.379 - 29.501: 99.9750% ( 1) 00:23:36.799 44.861 - 45.105: 99.9833% ( 1) 00:23:36.799 68.266 - 68.754: 99.9917% ( 1) 00:23:36.799 143.360 - 144.335: 100.0000% ( 1) 00:23:36.799 00:23:36.799 00:23:36.799 real 0m1.576s 00:23:36.799 user 0m1.021s 00:23:36.799 sys 0m0.554s 00:23:36.799 02:24:24 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:36.799 02:24:24 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:23:36.799 ************************************ 00:23:36.799 END TEST nvme_overhead 00:23:36.799 ************************************ 00:23:36.799 02:24:24 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:23:36.799 02:24:24 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:23:36.800 02:24:24 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:36.800 02:24:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:36.800 ************************************ 00:23:36.800 START TEST nvme_arbitration 00:23:36.800 ************************************ 00:23:36.800 02:24:24 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:23:37.058 EAL: TSC is not safe to use in SMP mode 00:23:37.058 EAL: TSC is not invariant 00:23:37.058 [2024-05-15 02:24:24.953884] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:41.243 Initializing NVMe Controllers 00:23:41.243 Attaching to 0000:00:10.0 00:23:41.243 Attached to 0000:00:10.0 00:23:41.243 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:23:41.243 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:23:41.243 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:23:41.243 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:23:41.243 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:23:41.243 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:23:41.243 Initialization complete. Launching workers. 
00:23:41.243 Starting thread on core 1 with urgent priority queue 00:23:41.243 Starting thread on core 2 with urgent priority queue 00:23:41.243 Starting thread on core 3 with urgent priority queue 00:23:41.243 Starting thread on core 0 with urgent priority queue 00:23:41.243 QEMU NVMe Ctrl (12340 ) core 0: 5786.67 IO/s 17.28 secs/100000 ios 00:23:41.243 QEMU NVMe Ctrl (12340 ) core 1: 5982.33 IO/s 16.72 secs/100000 ios 00:23:41.243 QEMU NVMe Ctrl (12340 ) core 2: 6012.00 IO/s 16.63 secs/100000 ios 00:23:41.243 QEMU NVMe Ctrl (12340 ) core 3: 6063.00 IO/s 16.49 secs/100000 ios 00:23:41.243 ======================================================== 00:23:41.243 00:23:41.243 00:23:41.243 real 0m4.245s 00:23:41.243 user 0m12.732s 00:23:41.243 sys 0m0.542s 00:23:41.243 02:24:28 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:41.243 02:24:28 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:23:41.243 ************************************ 00:23:41.243 END TEST nvme_arbitration 00:23:41.243 ************************************ 00:23:41.243 02:24:28 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:23:41.243 02:24:28 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:41.243 02:24:28 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:41.243 02:24:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:41.243 ************************************ 00:23:41.243 START TEST nvme_single_aen 00:23:41.243 ************************************ 00:23:41.243 02:24:28 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:23:41.243 EAL: TSC is not safe to use in SMP mode 00:23:41.243 EAL: TSC is not invariant 00:23:41.243 [2024-05-15 02:24:29.216561] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:41.243 Asynchronous Event Request test 00:23:41.243 Attaching to 0000:00:10.0 00:23:41.243 Attached to 0000:00:10.0 00:23:41.243 Reset controller to setup AER completions for this process 00:23:41.243 Registering asynchronous event callbacks... 00:23:41.243 Getting orig temperature thresholds of all controllers 00:23:41.243 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:41.243 Setting all controllers temperature threshold low to trigger AER 00:23:41.243 Waiting for all controllers temperature threshold to be set lower 00:23:41.243 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:41.243 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:23:41.243 Waiting for all controllers to trigger AER and reset threshold 00:23:41.243 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:41.243 Cleaning up... 
00:23:41.243 00:23:41.243 real 0m0.505s 00:23:41.243 user 0m0.017s 00:23:41.243 sys 0m0.487s 00:23:41.243 02:24:29 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:41.243 02:24:29 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:23:41.243 ************************************ 00:23:41.243 END TEST nvme_single_aen 00:23:41.243 ************************************ 00:23:41.502 02:24:29 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:23:41.502 02:24:29 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:41.502 02:24:29 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:41.502 02:24:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:41.502 ************************************ 00:23:41.502 START TEST nvme_doorbell_aers 00:23:41.502 ************************************ 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:23:41.502 02:24:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:42.070 EAL: TSC is not safe to use in SMP mode 00:23:42.070 EAL: TSC is not invariant 00:23:42.070 [2024-05-15 02:24:29.866739] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:42.070 Executing: test_write_invalid_db 00:23:42.070 Waiting for AER completion... 00:23:42.070 Asynchronous Event received. 00:23:42.070 Error Informaton Log Page received. 00:23:42.070 Success: test_write_invalid_db 00:23:42.070 00:23:42.070 Executing: test_invalid_db_write_overflow_sq 00:23:42.070 Waiting for AER completion... 00:23:42.070 Asynchronous Event received. 00:23:42.070 Error Informaton Log Page received. 00:23:42.070 Success: test_invalid_db_write_overflow_sq 00:23:42.070 00:23:42.070 Executing: test_invalid_db_write_overflow_cq 00:23:42.070 Waiting for AER completion... 00:23:42.070 Asynchronous Event received. 00:23:42.070 Error Informaton Log Page received. 
00:23:42.070 Success: test_invalid_db_write_overflow_cq 00:23:42.070 00:23:42.070 00:23:42.070 real 0m0.615s 00:23:42.070 user 0m0.035s 00:23:42.070 sys 0m0.591s 00:23:42.070 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:42.070 02:24:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:23:42.070 ************************************ 00:23:42.070 END TEST nvme_doorbell_aers 00:23:42.070 ************************************ 00:23:42.070 02:24:29 nvme -- nvme/nvme.sh@97 -- # uname 00:23:42.070 02:24:29 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:23:42.070 02:24:29 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:23:42.070 02:24:29 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:42.070 02:24:29 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:42.070 02:24:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:42.070 ************************************ 00:23:42.070 START TEST bdev_nvme_reset_stuck_adm_cmd 00:23:42.070 ************************************ 00:23:42.070 02:24:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:23:42.329 * Looking for test storage... 00:23:42.329 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:23:42.329 02:24:30 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:23:42.329 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=67058 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 67058 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 67058 ']' 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:42.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:42.330 02:24:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:42.330 [2024-05-15 02:24:30.225316] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:42.330 [2024-05-15 02:24:30.225464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:42.896 EAL: TSC is not safe to use in SMP mode 00:23:42.896 EAL: TSC is not invariant 00:23:42.896 [2024-05-15 02:24:30.675128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.896 [2024-05-15 02:24:30.768106] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:42.896 [2024-05-15 02:24:30.768165] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:23:42.896 [2024-05-15 02:24:30.768176] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:23:42.896 [2024-05-15 02:24:30.768186] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 
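Note: once spdk_tgt is up, the stuck-admin-command test below drives everything through scripts/rpc.py. Collected out of the trace into one sketch (controller name, address and injection parameters are the ones used here; CMD_BASE64 is a placeholder for the base64-encoded Get Features command shown in the trace):

  # Sketch of the rpc.py sequence the test performs against /var/tmp/spdk.sock.
  RPC=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Hold the next admin command with opcode 10 (Get Features) for up to 15 s and,
  # when it is finally completed (here by the reset), fail it with sct=0/sc=1.
  "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  CMD_BASE64=...    # placeholder: the base64 64-byte admin command from the trace
  "$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_BASE64" &
  sleep 2
  "$RPC" bdev_nvme_reset_controller nvme0    # the reset completes the stuck command
  wait
  "$RPC" bdev_nvme_detach_controller nvme0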
00:23:42.896 [2024-05-15 02:24:30.772400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.896 [2024-05-15 02:24:30.772497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.896 [2024-05-15 02:24:30.772646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.896 [2024-05-15 02:24:30.772643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:43.463 [2024-05-15 02:24:31.301286] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:43.463 nvme0n1 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:43.463 true 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715739871 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=67070 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:43.463 02:24:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 [2024-05-15 02:24:33.506452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:23:45.998 [2024-05-15 
02:24:33.506614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.998 [2024-05-15 02:24:33.506629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:45.998 [2024-05-15 02:24:33.506655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.998 [2024-05-15 02:24:33.507832] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.998 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 67070 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 67070 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 67070 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.N6weXc 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.77B1CO 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 67058 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 67058 ']' 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 67058 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps -c -o command 67058 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # tail -1 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:23:45.998 killing process with pid 67058 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67058' 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 67058 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 67058 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:23:45.998 00:23:45.998 real 0m3.818s 00:23:45.998 user 0m12.831s 00:23:45.998 sys 0m0.756s 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:45.998 02:24:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 ************************************ 00:23:45.998 END TEST bdev_nvme_reset_stuck_adm_cmd 00:23:45.998 ************************************ 00:23:45.998 02:24:33 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:23:45.998 02:24:33 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:23:45.998 02:24:33 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:45.998 02:24:33 nvme -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:23:45.998 02:24:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 ************************************ 00:23:45.998 START TEST nvme_fio 00:23:45.998 ************************************ 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:23:45.998 02:24:33 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:45.998 02:24:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:46.567 EAL: TSC is not safe to use in SMP mode 00:23:46.567 EAL: TSC is not invariant 00:23:46.567 [2024-05-15 02:24:34.393672] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:46.567 02:24:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:46.567 02:24:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:47.135 EAL: TSC is not safe to use in SMP mode 00:23:47.135 EAL: TSC is not invariant 00:23:47.135 [2024-05-15 02:24:34.900633] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:47.135 02:24:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:47.135 02:24:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:47.135 02:24:34 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # shift 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:47.135 02:24:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:47.135 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:47.135 fio-3.35 00:23:47.135 Starting 1 thread 00:23:47.705 EAL: TSC is not safe to use in SMP mode 00:23:47.705 EAL: TSC is not invariant 00:23:47.705 [2024-05-15 02:24:35.529814] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:50.279 00:23:50.279 test: (groupid=0, jobs=1): err= 0: pid=102854: Wed May 15 02:24:38 2024 00:23:50.279 read: IOPS=47.2k, BW=184MiB/s (193MB/s)(369MiB/2001msec) 00:23:50.279 slat (nsec): min=430, max=19880, avg=602.04, stdev=326.18 00:23:50.279 clat (usec): min=270, max=5879, avg=1354.67, stdev=248.91 00:23:50.279 lat (usec): min=270, max=5893, avg=1355.28, stdev=248.92 00:23:50.279 clat percentiles (usec): 00:23:50.279 | 1.00th=[ 898], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:23:50.279 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1385], 00:23:50.279 | 70.00th=[ 1434], 80.00th=[ 1516], 90.00th=[ 1631], 95.00th=[ 1729], 00:23:50.279 | 99.00th=[ 2057], 99.50th=[ 2245], 99.90th=[ 2868], 99.95th=[ 5211], 00:23:50.279 | 99.99th=[ 5800] 00:23:50.279 bw ( KiB/s): min=163808, max=198820, per=98.04%, avg=185204.00, stdev=18757.83, samples=3 00:23:50.279 iops : min=40952, max=49705, avg=46301.00, stdev=4689.46, samples=3 00:23:50.279 write: IOPS=47.1k, BW=184MiB/s (193MB/s)(368MiB/2001msec); 0 zone resets 00:23:50.279 slat (nsec): min=458, max=18277, avg=785.32, stdev=384.80 00:23:50.279 clat (usec): min=244, max=5855, avg=1355.47, stdev=250.73 00:23:50.279 lat (usec): min=245, max=5859, avg=1356.26, stdev=250.74 00:23:50.279 clat percentiles (usec): 00:23:50.279 | 1.00th=[ 898], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:23:50.279 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1385], 
00:23:50.279 | 70.00th=[ 1434], 80.00th=[ 1516], 90.00th=[ 1631], 95.00th=[ 1729], 00:23:50.279 | 99.00th=[ 2057], 99.50th=[ 2245], 99.90th=[ 2900], 99.95th=[ 5342], 00:23:50.279 | 99.99th=[ 5735] 00:23:50.279 bw ( KiB/s): min=163848, max=197546, per=97.74%, avg=184077.00, stdev=17837.10, samples=3 00:23:50.279 iops : min=40962, max=49386, avg=46019.00, stdev=4459.04, samples=3 00:23:50.279 lat (usec) : 250=0.01%, 500=0.06%, 750=0.27%, 1000=2.10% 00:23:50.279 lat (msec) : 2=96.32%, 4=1.18%, 10=0.07% 00:23:50.279 cpu : usr=100.00%, sys=0.00%, ctx=23, majf=0, minf=2 00:23:50.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:50.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:50.279 issued rwts: total=94504,94217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:50.279 00:23:50.279 Run status group 0 (all jobs): 00:23:50.279 READ: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=369MiB (387MB), run=2001-2001msec 00:23:50.279 WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=368MiB (386MB), run=2001-2001msec 00:23:51.276 02:24:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:51.276 02:24:38 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:23:51.276 00:23:51.276 real 0m5.147s 00:23:51.276 user 0m2.430s 00:23:51.276 sys 0m2.628s 00:23:51.276 02:24:38 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:51.276 02:24:38 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:23:51.276 ************************************ 00:23:51.276 END TEST nvme_fio 00:23:51.276 ************************************ 00:23:51.276 00:23:51.276 real 0m24.898s 00:23:51.276 user 0m32.136s 00:23:51.276 sys 0m11.431s 00:23:51.276 02:24:39 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:51.276 ************************************ 00:23:51.276 END TEST nvme 00:23:51.276 ************************************ 00:23:51.276 02:24:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:51.276 02:24:39 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:23:51.276 02:24:39 -- spdk/autotest.sh@217 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:51.276 02:24:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:51.276 02:24:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:51.276 02:24:39 -- common/autotest_common.sh@10 -- # set +x 00:23:51.276 ************************************ 00:23:51.276 START TEST nvme_scc 00:23:51.276 ************************************ 00:23:51.276 02:24:39 nvme_scc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:51.276 * Looking for test storage... 
00:23:51.276 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:23:51.276 02:24:39 nvme_scc -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:51.276 02:24:39 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.276 02:24:39 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.276 02:24:39 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.276 02:24:39 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:23:51.276 02:24:39 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:23:51.276 02:24:39 nvme_scc -- paths/export.sh@4 -- # export PATH 00:23:51.276 02:24:39 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:23:51.276 02:24:39 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:23:51.276 02:24:39 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:51.276 02:24:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:23:51.276 02:24:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:23:51.276 02:24:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:23:51.276 00:23:51.276 real 0m0.198s 00:23:51.276 user 0m0.132s 00:23:51.276 sys 0m0.142s 00:23:51.276 02:24:39 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:51.276 02:24:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:23:51.276 ************************************ 00:23:51.276 END TEST nvme_scc 00:23:51.276 ************************************ 00:23:51.276 02:24:39 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:23:51.276 02:24:39 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:23:51.276 02:24:39 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:23:51.276 02:24:39 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:23:51.277 02:24:39 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 
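Note: the nvme_fio test above reduces to running stock fio with SPDK's external NVMe ioengine preloaded. As a single command (paths are this job's; ioengine=spdk and iodepth=128 come from example_config.fio, as echoed in the fio banner, and the PCIe address is written with dots rather than colons, matching the --filename form the harness passed):

  # Sketch: direct fio invocation with the SPDK NVMe plugin, as the harness did.
  SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096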
00:23:51.277 02:24:39 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:23:51.277 02:24:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:51.277 02:24:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:51.277 02:24:39 -- common/autotest_common.sh@10 -- # set +x 00:23:51.277 ************************************ 00:23:51.277 START TEST nvme_rpc 00:23:51.277 ************************************ 00:23:51.277 02:24:39 nvme_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:23:51.535 * Looking for test storage... 00:23:51.535 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:23:51.535 02:24:39 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:51.535 02:24:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:23:51.535 02:24:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:23:51.535 02:24:39 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67312 00:23:51.535 02:24:39 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:23:51.535 02:24:39 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:23:51.535 02:24:39 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67312 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 67312 ']' 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.535 02:24:39 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:51.535 [2024-05-15 02:24:39.502537] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:23:51.535 [2024-05-15 02:24:39.502792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:52.099 EAL: TSC is not safe to use in SMP mode 00:23:52.099 EAL: TSC is not invariant 00:23:52.099 [2024-05-15 02:24:39.960007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:52.099 [2024-05-15 02:24:40.054664] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:52.099 [2024-05-15 02:24:40.054740] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:23:52.099 [2024-05-15 02:24:40.058178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.099 [2024-05-15 02:24:40.058161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.663 02:24:40 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:52.663 02:24:40 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:23:52.663 02:24:40 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:52.920 [2024-05-15 02:24:40.812990] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:23:52.920 Nvme0n1 00:23:52.920 02:24:40 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:23:52.920 02:24:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:23:53.177 request: 00:23:53.177 { 00:23:53.177 "filename": "non_existing_file", 00:23:53.177 "bdev_name": "Nvme0n1", 00:23:53.177 "method": "bdev_nvme_apply_firmware", 00:23:53.177 "req_id": 1 00:23:53.177 } 00:23:53.177 Got JSON-RPC error response 00:23:53.177 response: 00:23:53.177 { 00:23:53.177 "code": -32603, 00:23:53.177 "message": "open file failed." 
00:23:53.177 } 00:23:53.177 02:24:41 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:23:53.177 02:24:41 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:23:53.177 02:24:41 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:23:53.458 02:24:41 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:53.458 02:24:41 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67312 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 67312 ']' 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 67312 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 67312 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@954 -- # tail -1 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:23:53.458 killing process with pid 67312 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67312' 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@965 -- # kill 67312 00:23:53.458 02:24:41 nvme_rpc -- common/autotest_common.sh@970 -- # wait 67312 00:23:53.717 00:23:53.717 real 0m2.385s 00:23:53.717 user 0m4.538s 00:23:53.717 sys 0m0.801s 00:23:53.717 02:24:41 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:53.717 02:24:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:53.717 ************************************ 00:23:53.717 END TEST nvme_rpc 00:23:53.717 ************************************ 00:23:53.717 02:24:41 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:23:53.717 02:24:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:53.717 02:24:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:53.717 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:23:53.717 ************************************ 00:23:53.717 START TEST nvme_rpc_timeouts 00:23:53.717 ************************************ 00:23:53.717 02:24:41 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:23:53.975 * Looking for test storage... 
00:23:53.975 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:23:53.975 02:24:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.975 02:24:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67349 00:23:53.975 02:24:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67349 00:23:53.975 02:24:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67377 00:23:53.975 02:24:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:23:53.975 02:24:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67377 00:23:53.975 02:24:41 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 67377 ']' 00:23:53.975 02:24:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:23:53.975 02:24:41 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.975 02:24:41 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:53.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.976 02:24:41 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.976 02:24:41 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:53.976 02:24:41 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:23:53.976 [2024-05-15 02:24:41.881230] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:23:53.976 [2024-05-15 02:24:41.881398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:54.541 EAL: TSC is not safe to use in SMP mode 00:23:54.541 EAL: TSC is not invariant 00:23:54.541 [2024-05-15 02:24:42.354105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:54.541 [2024-05-15 02:24:42.435366] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:54.541 [2024-05-15 02:24:42.435432] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
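Note: "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." above is printed by waitforlisten. Its basic shape, start the target and poll its RPC socket until it answers, is sketched below; rpc_get_methods is only assumed here as a cheap liveness probe, and the real helper in autotest_common.sh adds timeouts and pid checks.

  # Sketch of the start-and-wait pattern, not the actual waitforlisten code.
  SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x3 &
  tgt_pid=$!
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"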
00:23:54.541 [2024-05-15 02:24:42.438239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.541 [2024-05-15 02:24:42.438237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.106 02:24:42 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.106 02:24:42 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:23:55.106 Checking default timeout settings: 00:23:55.106 02:24:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:23:55.106 02:24:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:55.364 Making settings changes with rpc: 00:23:55.364 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:23:55.364 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:23:55.637 Check default vs. modified settings: 00:23:55.637 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:23:55.637 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67349 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67349 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:23:55.900 Setting action_on_timeout is changed as expected. 00:23:55.900 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67349 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67349 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:23:55.901 Setting timeout_us is changed as expected. 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67349 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67349 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:23:55.901 Setting timeout_admin_us is changed as expected. 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
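Note: the three checks above follow one save-change-save-diff pattern. Condensed into a standalone sketch (rpc.py path and option values are the ones this test uses; the full script also asserts the exact before and after values rather than just "changed"):

  # Sketch: verify that bdev_nvme timeout options land in the saved config.
  RPC=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" save_config > /tmp/settings_default
  "$RPC" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 \
      --action-on-timeout=abort
  "$RPC" save_config > /tmp/settings_modified
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
  done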
00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67349 /tmp/settings_modified_67349 00:23:55.901 02:24:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67377 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 67377 ']' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 67377 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # tail -1 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps -c -o command 67377 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:23:55.901 killing process with pid 67377 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67377' 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 67377 00:23:55.901 02:24:43 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 67377 00:23:56.159 RPC TIMEOUT SETTING TEST PASSED. 00:23:56.159 02:24:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:23:56.159 00:23:56.159 real 0m2.398s 00:23:56.159 user 0m4.565s 00:23:56.159 sys 0m0.823s 00:23:56.159 02:24:44 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:56.159 ************************************ 00:23:56.159 END TEST nvme_rpc_timeouts 00:23:56.159 ************************************ 00:23:56.159 02:24:44 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:23:56.159 02:24:44 -- spdk/autotest.sh@239 -- # uname -s 00:23:56.159 02:24:44 -- spdk/autotest.sh@239 -- # '[' FreeBSD = Linux ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]] 00:23:56.159 02:24:44 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@256 -- # timing_exit lib 00:23:56.159 02:24:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.159 02:24:44 -- common/autotest_common.sh@10 -- # set +x 00:23:56.159 02:24:44 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:23:56.159 02:24:44 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:23:56.159 02:24:44 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 
00:23:56.159 02:24:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:23:56.159 02:24:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:23:56.159 02:24:44 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:23:56.159 02:24:44 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:23:56.159 02:24:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:56.159 02:24:44 -- common/autotest_common.sh@10 -- # set +x 00:23:56.159 02:24:44 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:23:56.159 02:24:44 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:23:56.159 02:24:44 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:23:56.159 02:24:44 -- common/autotest_common.sh@10 -- # set +x 00:23:57.090 setup.sh cleanup function not yet supported on FreeBSD 00:23:57.090 02:24:44 -- common/autotest_common.sh@1447 -- # return 0 00:23:57.090 02:24:44 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:23:57.090 02:24:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.090 02:24:44 -- common/autotest_common.sh@10 -- # set +x 00:23:57.090 02:24:44 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:23:57.090 02:24:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.090 02:24:44 -- common/autotest_common.sh@10 -- # set +x 00:23:57.090 02:24:44 -- spdk/autotest.sh@383 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:57.090 02:24:44 -- spdk/autotest.sh@385 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:57.090 02:24:44 -- spdk/autotest.sh@387 -- # hash lcov 00:23:57.090 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 387: hash: lcov: not found 00:23:57.090 02:24:44 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.090 02:24:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:57.090 02:24:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.090 02:24:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.090 02:24:44 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:23:57.090 02:24:44 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:23:57.090 02:24:44 -- paths/export.sh@4 -- $ export PATH 00:23:57.090 02:24:44 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:23:57.090 02:24:44 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:23:57.090 02:24:44 -- common/autobuild_common.sh@437 -- $ date +%s 00:23:57.090 02:24:44 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715739884.XXXXXX 00:23:57.090 02:24:44 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715739884.XXXXXX.NlqCQe7X 00:23:57.090 02:24:44 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:23:57.090 02:24:44 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:23:57.090 02:24:44 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:23:57.090 02:24:44 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp' 00:23:57.090 02:24:44 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:57.090 02:24:44 -- common/autobuild_common.sh@453 -- $ get_config_params 00:23:57.090 02:24:44 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:23:57.090 02:24:44 -- common/autotest_common.sh@10 -- $ set +x 00:23:57.090 02:24:45 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:23:57.090 02:24:45 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:23:57.090 02:24:45 -- pm/common@17 -- $ local monitor 00:23:57.090 02:24:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:57.090 02:24:45 -- pm/common@25 -- $ sleep 1 00:23:57.090 02:24:45 -- pm/common@21 -- $ date +%s 00:23:57.090 02:24:45 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715739885 00:23:57.090 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715739885_collect-vmstat.pm.log 00:23:58.462 02:24:46 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:23:58.462 02:24:46 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:58.462 02:24:46 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:23:58.462 02:24:46 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:58.462 02:24:46 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:58.462 02:24:46 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:58.462 02:24:46 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:58.462 02:24:46 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:58.462 02:24:46 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:58.462 02:24:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:58.462 02:24:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:58.462 02:24:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:58.462 02:24:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:58.462 02:24:46 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:58.462 02:24:46 -- pm/common@44 -- $ pid=67598 00:23:58.462 02:24:46 -- pm/common@50 -- $ kill -TERM 67598 00:23:58.462 + [[ -n 1268 ]] 00:23:58.462 + sudo kill 1268 00:23:58.471 [Pipeline] } 00:23:58.483 [Pipeline] // timeout 00:23:58.489 [Pipeline] } 00:23:58.503 [Pipeline] // stage 00:23:58.507 [Pipeline] } 00:23:58.523 [Pipeline] // catchError 00:23:58.530 [Pipeline] stage 00:23:58.532 [Pipeline] { (Stop VM) 00:23:58.549 [Pipeline] sh 00:23:58.824 + vagrant halt 00:24:03.006 ==> default: Halting domain... 00:24:21.213 [Pipeline] sh 00:24:21.493 + vagrant destroy -f 00:24:25.680 ==> default: Removing domain... 
00:24:25.689 [Pipeline] sh 00:24:25.979 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output 00:24:25.995 [Pipeline] } 00:24:26.016 [Pipeline] // stage 00:24:26.023 [Pipeline] } 00:24:26.044 [Pipeline] // dir 00:24:26.050 [Pipeline] } 00:24:26.067 [Pipeline] // wrap 00:24:26.073 [Pipeline] } 00:24:26.089 [Pipeline] // catchError 00:24:26.098 [Pipeline] stage 00:24:26.100 [Pipeline] { (Epilogue) 00:24:26.112 [Pipeline] sh 00:24:26.392 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:26.404 [Pipeline] catchError 00:24:26.406 [Pipeline] { 00:24:26.421 [Pipeline] sh 00:24:26.702 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:26.960 Artifacts sizes are good 00:24:26.969 [Pipeline] } 00:24:26.987 [Pipeline] // catchError 00:24:26.998 [Pipeline] archiveArtifacts 00:24:27.005 Archiving artifacts 00:24:27.048 [Pipeline] cleanWs 00:24:27.059 [WS-CLEANUP] Deleting project workspace... 00:24:27.059 [WS-CLEANUP] Deferred wipeout is used... 00:24:27.066 [WS-CLEANUP] done 00:24:27.068 [Pipeline] } 00:24:27.087 [Pipeline] // stage 00:24:27.094 [Pipeline] } 00:24:27.111 [Pipeline] // node 00:24:27.117 [Pipeline] End of Pipeline 00:24:27.151 Finished: SUCCESS