00:00:00.001 Started by upstream project "autotest-per-patch" build number 122813 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.058 using credential 00000000-0000-0000-0000-000000000002 00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.100 Fetching changes from the remote Git repository 00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.198 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.198 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.715 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.726 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.738 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:04.738 > git config core.sparsecheckout # timeout=10 00:00:04.748 > git read-tree -mu HEAD # timeout=10 00:00:04.764 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:04.780 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:04.780 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:04.876 [Pipeline] Start of Pipeline 00:00:04.890 [Pipeline] library 00:00:04.892 Loading library shm_lib@master 00:00:04.892 Library shm_lib@master is cached. Copying from home. 00:00:04.908 [Pipeline] node 00:00:19.909 Still waiting to schedule task 00:00:19.910 Waiting for next available executor on ‘vagrant-vm-host’ 00:03:00.770 Running on VM-host-SM16 in /var/jenkins/workspace/freebsd-vg-autotest 00:03:00.772 [Pipeline] { 00:03:00.787 [Pipeline] catchError 00:03:00.788 [Pipeline] { 00:03:00.803 [Pipeline] wrap 00:03:00.815 [Pipeline] { 00:03:00.824 [Pipeline] stage 00:03:00.826 [Pipeline] { (Prologue) 00:03:00.854 [Pipeline] echo 00:03:00.855 Node: VM-host-SM16 00:03:00.861 [Pipeline] cleanWs 00:03:00.869 [WS-CLEANUP] Deleting project workspace... 00:03:00.869 [WS-CLEANUP] Deferred wipeout is used... 
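The checkout above is the Jenkins git plugin's standard shallow fetch of the jbp build-pool repository. A rough manual equivalent, using only the URL, proxy and revision reported in the log (credential handling and the per-command timeouts are omitted; git 2.39 as reported above is assumed):

git init jbp && cd jbp
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# route HTTPS through the same proxy the plugin configures, then shallow-fetch master
git -c http.proxy=proxy-dmz.intel.com:911 fetch --tags --force --depth=1 origin refs/heads/master
# detached checkout of the revision that the fetch brought in
git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100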
00:03:00.875 [WS-CLEANUP] done 00:03:01.062 [Pipeline] setCustomBuildProperty 00:03:01.132 [Pipeline] nodesByLabel 00:03:01.134 Found a total of 1 nodes with the 'sorcerer' label 00:03:01.144 [Pipeline] httpRequest 00:03:01.149 HttpMethod: GET 00:03:01.149 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:03:01.151 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:03:01.153 Response Code: HTTP/1.1 200 OK 00:03:01.153 Success: Status code 200 is in the accepted range: 200,404 00:03:01.154 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:03:01.292 [Pipeline] sh 00:03:01.570 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:03:01.589 [Pipeline] httpRequest 00:03:01.594 HttpMethod: GET 00:03:01.595 URL: http://10.211.164.101/packages/spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz 00:03:01.595 Sending request to url: http://10.211.164.101/packages/spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz 00:03:01.596 Response Code: HTTP/1.1 200 OK 00:03:01.597 Success: Status code 200 is in the accepted range: 200,404 00:03:01.597 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz 00:03:03.715 [Pipeline] sh 00:03:03.992 + tar --no-same-owner -xf spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz 00:03:07.289 [Pipeline] sh 00:03:07.573 + git -C spdk log --oneline -n5 00:03:07.573 52939f252 lib/blobfs: fix memory error for spdk_file_write 00:03:07.573 235c4c537 xnvme: change gitmodule-remote 00:03:07.573 bf8fa3b96 test/skipped_tests: update the list to current per-patch 00:03:07.573 e2d29d42b test/ftl: remove duplicated ftl_dirty_shutdown 00:03:07.573 7313180df test/ftl: replace FTL extended and nightly flags 00:03:07.592 [Pipeline] writeFile 00:03:07.609 [Pipeline] sh 00:03:07.929 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:07.940 [Pipeline] sh 00:03:08.219 + cat autorun-spdk.conf 00:03:08.219 SPDK_TEST_UNITTEST=1 00:03:08.219 SPDK_RUN_VALGRIND=0 00:03:08.219 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:08.219 SPDK_TEST_NVME=1 00:03:08.219 SPDK_TEST_BLOCKDEV=1 00:03:08.219 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:08.226 RUN_NIGHTLY=0 00:03:08.228 [Pipeline] } 00:03:08.246 [Pipeline] // stage 00:03:08.262 [Pipeline] stage 00:03:08.264 [Pipeline] { (Run VM) 00:03:08.279 [Pipeline] sh 00:03:08.558 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:08.558 + echo 'Start stage prepare_nvme.sh' 00:03:08.558 Start stage prepare_nvme.sh 00:03:08.558 + [[ -n 6 ]] 00:03:08.558 + disk_prefix=ex6 00:03:08.558 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]] 00:03:08.558 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]] 00:03:08.558 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf 00:03:08.558 ++ SPDK_TEST_UNITTEST=1 00:03:08.558 ++ SPDK_RUN_VALGRIND=0 00:03:08.558 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:08.558 ++ SPDK_TEST_NVME=1 00:03:08.558 ++ SPDK_TEST_BLOCKDEV=1 00:03:08.558 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:08.558 ++ RUN_NIGHTLY=0 00:03:08.558 + cd /var/jenkins/workspace/freebsd-vg-autotest 00:03:08.558 + nvme_files=() 00:03:08.558 + declare -A nvme_files 00:03:08.558 + backend_dir=/var/lib/libvirt/images/backends 00:03:08.558 + nvme_files['nvme.img']=5G 00:03:08.558 + nvme_files['nvme-cmb.img']=5G 00:03:08.558 + 
nvme_files['nvme-multi0.img']=4G 00:03:08.558 + nvme_files['nvme-multi1.img']=4G 00:03:08.558 + nvme_files['nvme-multi2.img']=4G 00:03:08.558 + nvme_files['nvme-openstack.img']=8G 00:03:08.558 + nvme_files['nvme-zns.img']=5G 00:03:08.558 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:08.558 + (( SPDK_TEST_FTL == 1 )) 00:03:08.558 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:08.558 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:08.558 + for nvme in "${!nvme_files[@]}" 00:03:08.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:03:08.558 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:08.558 + for nvme in "${!nvme_files[@]}" 00:03:08.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:03:08.558 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:08.558 + for nvme in "${!nvme_files[@]}" 00:03:08.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:03:08.558 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:08.558 + for nvme in "${!nvme_files[@]}" 00:03:08.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:03:08.558 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:08.558 + for nvme in "${!nvme_files[@]}" 00:03:08.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:03:08.558 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:08.558 + for nvme in "${!nvme_files[@]}" 00:03:08.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:03:08.558 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:08.558 + for nvme in "${!nvme_files[@]}" 00:03:08.558 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:03:09.492 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:09.492 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:03:09.492 + echo 'End stage prepare_nvme.sh' 00:03:09.492 End stage prepare_nvme.sh 00:03:09.504 [Pipeline] sh 00:03:09.782 + DISTRO=freebsd13 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:09.782 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f freebsd13 00:03:09.782 00:03:09.782 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant 00:03:09.782 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk 00:03:09.782 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest 00:03:09.782 HELP=0 00:03:09.782 DRY_RUN=0 00:03:09.782 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:03:09.782 NVME_DISKS_TYPE=nvme, 00:03:09.782 NVME_AUTO_CREATE=0 00:03:09.782 NVME_DISKS_NAMESPACES=, 00:03:09.782 NVME_CMB=, 00:03:09.782 NVME_PMR=, 00:03:09.782 NVME_ZNS=, 00:03:09.782 NVME_MS=, 00:03:09.782 
NVME_FDP=, 00:03:09.782 SPDK_VAGRANT_DISTRO=freebsd13 00:03:09.782 SPDK_VAGRANT_VMCPU=10 00:03:09.782 SPDK_VAGRANT_VMRAM=12288 00:03:09.782 SPDK_VAGRANT_PROVIDER=libvirt 00:03:09.782 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:09.782 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:09.782 SPDK_OPENSTACK_NETWORK=0 00:03:09.782 VAGRANT_PACKAGE_BOX=0 00:03:09.782 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:09.782 FORCE_DISTRO=true 00:03:09.782 VAGRANT_BOX_VERSION= 00:03:09.782 EXTRA_VAGRANTFILES= 00:03:09.782 NIC_MODEL=e1000 00:03:09.782 00:03:09.782 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt' 00:03:09.782 /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest 00:03:13.088 Bringing machine 'default' up with 'libvirt' provider... 00:03:14.024 ==> default: Creating image (snapshot of base box volume). 00:03:14.024 ==> default: Creating domain with the following settings... 00:03:14.024 ==> default: -- Name: freebsd13-13.2-RELEASE-1712646987-2220_default_1715723114_de2d61033a44b34efce1 00:03:14.024 ==> default: -- Domain type: kvm 00:03:14.024 ==> default: -- Cpus: 10 00:03:14.024 ==> default: -- Feature: acpi 00:03:14.024 ==> default: -- Feature: apic 00:03:14.024 ==> default: -- Feature: pae 00:03:14.024 ==> default: -- Memory: 12288M 00:03:14.024 ==> default: -- Memory Backing: hugepages: 00:03:14.024 ==> default: -- Management MAC: 00:03:14.024 ==> default: -- Loader: 00:03:14.024 ==> default: -- Nvram: 00:03:14.024 ==> default: -- Base box: spdk/freebsd13 00:03:14.024 ==> default: -- Storage pool: default 00:03:14.024 ==> default: -- Image: /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1712646987-2220_default_1715723114_de2d61033a44b34efce1.img (32G) 00:03:14.024 ==> default: -- Volume Cache: default 00:03:14.024 ==> default: -- Kernel: 00:03:14.024 ==> default: -- Initrd: 00:03:14.024 ==> default: -- Graphics Type: vnc 00:03:14.024 ==> default: -- Graphics Port: -1 00:03:14.024 ==> default: -- Graphics IP: 127.0.0.1 00:03:14.024 ==> default: -- Graphics Password: Not defined 00:03:14.024 ==> default: -- Video Type: cirrus 00:03:14.024 ==> default: -- Video VRAM: 9216 00:03:14.024 ==> default: -- Sound Type: 00:03:14.024 ==> default: -- Keymap: en-us 00:03:14.024 ==> default: -- TPM Path: 00:03:14.024 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:14.024 ==> default: -- Command line args: 00:03:14.024 ==> default: -> value=-device, 00:03:14.024 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:14.024 ==> default: -> value=-drive, 00:03:14.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:03:14.024 ==> default: -> value=-device, 00:03:14.024 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:14.387 ==> default: Creating shared folders metadata... 00:03:14.387 ==> default: Starting domain. 00:03:15.764 ==> default: Waiting for domain to get an IP address... 00:03:37.693 ==> default: Waiting for SSH to become available... 00:03:52.640 ==> default: Configuring and enabling network interfaces... 00:03:54.593 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:06.795 ==> default: Mounting SSHFS shared folder... 
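The -drive/-device arguments that Vagrant passes through to qemu above attach the raw backing file ex6-nvme.img as an emulated NVMe controller with a single 4096-byte-block namespace. A minimal sketch of the same attachment on a hand-started qemu (boot disk, network and accelerator options are left out; the CPU and memory counts are taken from the domain settings above):

qemu-system-x86_64 -smp 10 -m 12288 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The nvme-ns device on controller slot 0x10 is what later shows up inside the guest as nvme0 (vendor 0x1b36, device 0x0010) in the setup.sh status table.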
00:04:06.795 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output 00:04:06.795 ==> default: Checking Mount.. 00:04:07.053 ==> default: Folder Successfully Mounted! 00:04:07.053 ==> default: Running provisioner: file... 00:04:07.312 default: ~/.gitconfig => .gitconfig 00:04:07.571 00:04:07.571 SUCCESS! 00:04:07.571 00:04:07.571 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt and type "vagrant ssh" to use. 00:04:07.571 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:07.571 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt" to destroy all trace of vm. 00:04:07.571 00:04:07.581 [Pipeline] } 00:04:07.600 [Pipeline] // stage 00:04:07.608 [Pipeline] dir 00:04:07.609 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt 00:04:07.611 [Pipeline] { 00:04:07.625 [Pipeline] catchError 00:04:07.626 [Pipeline] { 00:04:07.640 [Pipeline] sh 00:04:07.918 + vagrant ssh-config --host vagrant 00:04:07.918 + sed -ne /^Host/,$p 00:04:07.918 + tee ssh_conf 00:04:12.105 Host vagrant 00:04:12.105 HostName 192.168.121.5 00:04:12.105 User vagrant 00:04:12.105 Port 22 00:04:12.105 UserKnownHostsFile /dev/null 00:04:12.105 StrictHostKeyChecking no 00:04:12.105 PasswordAuthentication no 00:04:12.105 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1712646987-2220/libvirt/freebsd13 00:04:12.105 IdentitiesOnly yes 00:04:12.105 LogLevel FATAL 00:04:12.105 ForwardAgent yes 00:04:12.105 ForwardX11 yes 00:04:12.105 00:04:12.119 [Pipeline] withEnv 00:04:12.121 [Pipeline] { 00:04:12.137 [Pipeline] sh 00:04:12.416 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:12.416 source /etc/os-release 00:04:12.416 [[ -e /image.version ]] && img=$(< /image.version) 00:04:12.416 # Minimal, systemd-like check. 00:04:12.416 if [[ -e /.dockerenv ]]; then 00:04:12.416 # Clear garbage from the node's name: 00:04:12.416 # agt-er_autotest_547-896 -> autotest_547-896 00:04:12.416 # $HOSTNAME is the actual container id 00:04:12.416 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:12.416 if mountpoint -q /etc/hostname; then 00:04:12.416 # We can assume this is a mount from a host where container is running, 00:04:12.416 # so fetch its hostname to easily identify the target swarm worker. 
00:04:12.416 container="$(< /etc/hostname) ($agent)" 00:04:12.416 else 00:04:12.416 # Fallback 00:04:12.416 container=$agent 00:04:12.416 fi 00:04:12.416 fi 00:04:12.416 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:12.416 00:04:12.428 [Pipeline] } 00:04:12.448 [Pipeline] // withEnv 00:04:12.457 [Pipeline] setCustomBuildProperty 00:04:12.471 [Pipeline] stage 00:04:12.473 [Pipeline] { (Tests) 00:04:12.492 [Pipeline] sh 00:04:12.849 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:12.865 [Pipeline] timeout 00:04:12.865 Timeout set to expire in 1 hr 0 min 00:04:12.867 [Pipeline] { 00:04:12.883 [Pipeline] sh 00:04:13.162 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:13.729 HEAD is now at 52939f252 lib/blobfs: fix memory error for spdk_file_write 00:04:13.742 [Pipeline] sh 00:04:14.020 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:14.034 [Pipeline] sh 00:04:14.313 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:14.328 [Pipeline] sh 00:04:14.607 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang ./autoruner.sh spdk_repo 00:04:14.607 ++ readlink -f spdk_repo 00:04:14.607 + DIR_ROOT=/usr/home/vagrant/spdk_repo 00:04:14.607 + [[ -n /usr/home/vagrant/spdk_repo ]] 00:04:14.607 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk 00:04:14.607 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output 00:04:14.607 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]] 00:04:14.607 + [[ ! -d /usr/home/vagrant/spdk_repo/output ]] 00:04:14.607 + [[ -d /usr/home/vagrant/spdk_repo/output ]] 00:04:14.607 + cd /usr/home/vagrant/spdk_repo 00:04:14.607 + source /etc/os-release 00:04:14.607 ++ NAME=FreeBSD 00:04:14.607 ++ VERSION=13.2-RELEASE 00:04:14.607 ++ VERSION_ID=13.2 00:04:14.607 ++ ID=freebsd 00:04:14.607 ++ ANSI_COLOR='0;31' 00:04:14.607 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE' 00:04:14.607 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2 00:04:14.607 ++ HOME_URL=https://FreeBSD.org/ 00:04:14.607 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:04:14.607 + uname -a 00:04:14.607 FreeBSD freebsd-cloud-1712646987-2220.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64 00:04:14.607 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:14.865 Contigmem (not present) 00:04:14.865 Buffer Size: not set 00:04:14.865 Num Buffers: not set 00:04:14.865 00:04:14.865 00:04:14.865 Type BDF Vendor Device Driver 00:04:14.865 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:04:14.865 + rm -f /tmp/spdk-ld-path 00:04:14.865 + source autorun-spdk.conf 00:04:14.865 ++ SPDK_TEST_UNITTEST=1 00:04:14.865 ++ SPDK_RUN_VALGRIND=0 00:04:14.865 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:14.865 ++ SPDK_TEST_NVME=1 00:04:14.865 ++ SPDK_TEST_BLOCKDEV=1 00:04:14.865 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:14.865 ++ RUN_NIGHTLY=0 00:04:14.865 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:14.865 + [[ -n '' ]] 00:04:14.865 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk 00:04:14.865 + for M in /var/spdk/build-*-manifest.txt 00:04:14.865 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:14.865 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:04:14.865 + for M in /var/spdk/build-*-manifest.txt 00:04:14.865 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:04:14.865 + cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:04:14.865 ++ uname 00:04:14.865 + [[ FreeBSD == \L\i\n\u\x ]] 00:04:14.865 + dmesg_pid=1293 00:04:14.865 + [[ FreeBSD == FreeBSD ]] 00:04:14.865 + tail -F /var/log/messages 00:04:14.865 + export LC_ALL=C LC_CTYPE=C 00:04:14.865 + LC_ALL=C 00:04:14.865 + LC_CTYPE=C 00:04:14.865 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:14.865 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:14.865 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:14.865 + [[ -x /usr/src/fio-static/fio ]] 00:04:14.865 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:14.865 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:14.865 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:14.865 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:04:14.865 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:14.865 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:14.865 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:14.865 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf 00:04:14.865 Test configuration: 00:04:14.865 SPDK_TEST_UNITTEST=1 00:04:14.865 SPDK_RUN_VALGRIND=0 00:04:14.865 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:14.865 SPDK_TEST_NVME=1 00:04:14.865 SPDK_TEST_BLOCKDEV=1 00:04:14.865 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:14.865 RUN_NIGHTLY=0 21:46:15 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:14.865 21:46:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:14.865 21:46:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.865 21:46:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.865 21:46:15 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:04:14.865 21:46:15 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:04:14.865 21:46:15 -- paths/export.sh@4 -- $ export PATH 00:04:14.865 21:46:15 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:04:14.865 21:46:15 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:04:14.865 21:46:15 -- common/autobuild_common.sh@437 -- $ date +%s 00:04:14.865 21:46:15 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715723175.XXXXXX 00:04:14.865 21:46:15 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715723175.XXXXXX.HKOu7BVm 00:04:14.865 21:46:15 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:04:14.865 21:46:15 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:04:14.865 21:46:15 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:04:14.865 21:46:15 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:14.865 21:46:15 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp --status-bugs' 00:04:14.865 21:46:15 -- common/autobuild_common.sh@453 -- $ get_config_params 00:04:14.865 21:46:15 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:04:14.865 21:46:15 -- common/autotest_common.sh@10 -- $ set +x 00:04:15.124 21:46:15 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:04:15.124 21:46:15 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:04:15.124 21:46:15 -- pm/common@17 -- $ local monitor 00:04:15.124 21:46:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.124 21:46:15 -- pm/common@25 -- $ sleep 1 00:04:15.124 21:46:15 -- pm/common@21 -- $ date +%s 00:04:15.124 21:46:15 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715723175 00:04:15.124 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715723175_collect-vmstat.pm.log 00:04:16.058 21:46:16 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:04:16.058 21:46:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:16.058 21:46:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:16.058 21:46:16 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:04:16.058 21:46:16 -- spdk/autobuild.sh@16 -- $ date -u 00:04:16.058 Tue May 14 21:46:16 UTC 2024 00:04:16.058 21:46:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:16.058 v24.05-pre-617-g52939f252 00:04:16.058 21:46:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:16.058 21:46:16 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:04:16.058 21:46:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:16.058 21:46:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:16.058 21:46:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:16.058 21:46:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:16.058 21:46:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:16.058 21:46:16 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:04:16.058 21:46:16 -- spdk/autobuild.sh@58 -- $ unittest_build 00:04:16.058 21:46:16 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:04:16.058 21:46:16 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:04:16.058 21:46:16 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:04:16.058 21:46:16 -- common/autotest_common.sh@10 -- $ set +x 00:04:16.058 ************************************ 00:04:16.058 START TEST unittest_build 00:04:16.058 ************************************ 00:04:16.058 21:46:16 unittest_build -- common/autotest_common.sh@1121 -- $ _unittest_build 00:04:16.058 21:46:16 unittest_build -- common/autobuild_common.sh@404 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:04:16.997 Notice: Vhost, rte_vhost library, virtio, and fuse 00:04:16.997 are only supported on Linux. Turning off default feature. 00:04:16.997 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:16.997 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:17.933 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:04:17.933 Using 'verbs' RDMA provider 00:04:28.165 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 
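The unittest build above comes down to one configure invocation followed by a parallel gmake; reproducing it by hand on the same FreeBSD VM would look like this (paths and flags copied from the log, -j10 matching the 10 vCPUs given to the VM):

cd /usr/home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --without-shared
# FreeBSD's native make(1) is BSD make; SPDK needs GNU make, installed as gmake
gmake -j10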
00:04:40.364 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:40.364 Creating mk/config.mk...done. 00:04:40.364 Creating mk/cc.flags.mk...done. 00:04:40.364 Type 'gmake' to build. 00:04:40.364 21:46:39 unittest_build -- common/autobuild_common.sh@405 -- $ gmake -j10 00:04:40.364 gmake[1]: Nothing to be done for 'all'. 00:04:43.653 ps: stdin: not a terminal 00:04:47.871 The Meson build system 00:04:47.871 Version: 1.3.1 00:04:47.871 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:04:47.871 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:47.871 Build type: native build 00:04:47.871 Program cat found: YES (/bin/cat) 00:04:47.871 Project name: DPDK 00:04:47.871 Project version: 23.11.0 00:04:47.871 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:04:47.871 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:04:47.871 Host machine cpu family: x86_64 00:04:47.871 Host machine cpu: x86_64 00:04:47.871 Message: ## Building in Developer Mode ## 00:04:47.871 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:04:47.871 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:47.871 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:47.871 Program python3 found: YES (/usr/local/bin/python3.9) 00:04:47.871 Program cat found: YES (/bin/cat) 00:04:47.871 Compiler for C supports arguments -march=native: YES 00:04:47.871 Checking for size of "void *" : 8 00:04:47.871 Checking for size of "void *" : 8 (cached) 00:04:47.871 Library m found: YES 00:04:47.871 Library numa found: NO 00:04:47.871 Library fdt found: NO 00:04:47.871 Library execinfo found: YES 00:04:47.871 Has header "execinfo.h" : YES 00:04:47.871 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:04:47.871 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:47.871 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:47.871 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:47.871 Run-time dependency openssl found: YES 3.0.13 00:04:47.871 Run-time dependency libpcap found: NO (tried pkgconfig) 00:04:47.871 Library pcap found: YES 00:04:47.871 Has header "pcap.h" with dependency -lpcap: YES 00:04:47.871 Compiler for C supports arguments -Wcast-qual: YES 00:04:47.871 Compiler for C supports arguments -Wdeprecated: YES 00:04:47.871 Compiler for C supports arguments -Wformat: YES 00:04:47.871 Compiler for C supports arguments -Wformat-nonliteral: YES 00:04:47.871 Compiler for C supports arguments -Wformat-security: YES 00:04:47.871 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:47.871 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:47.871 Compiler for C supports arguments -Wnested-externs: YES 00:04:47.871 Compiler for C supports arguments -Wold-style-definition: YES 00:04:47.871 Compiler for C supports arguments -Wpointer-arith: YES 00:04:47.872 Compiler for C supports arguments -Wsign-compare: YES 00:04:47.872 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:47.872 Compiler for C supports arguments -Wundef: YES 00:04:47.872 Compiler for C supports arguments -Wwrite-strings: YES 00:04:47.872 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:47.872 Compiler for C supports arguments 
-Wno-packed-not-aligned: NO 00:04:47.872 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:47.872 Compiler for C supports arguments -mavx512f: YES 00:04:47.872 Checking if "AVX512 checking" compiles: YES 00:04:47.872 Fetching value of define "__SSE4_2__" : 1 00:04:47.872 Fetching value of define "__AES__" : 1 00:04:47.872 Fetching value of define "__AVX__" : 1 00:04:47.872 Fetching value of define "__AVX2__" : 1 00:04:47.872 Fetching value of define "__AVX512BW__" : (undefined) 00:04:47.872 Fetching value of define "__AVX512CD__" : (undefined) 00:04:47.872 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:47.872 Fetching value of define "__AVX512F__" : (undefined) 00:04:47.872 Fetching value of define "__AVX512VL__" : (undefined) 00:04:47.872 Fetching value of define "__PCLMUL__" : 1 00:04:47.872 Fetching value of define "__RDRND__" : 1 00:04:47.872 Fetching value of define "__RDSEED__" : 1 00:04:47.872 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:47.872 Fetching value of define "__znver1__" : (undefined) 00:04:47.872 Fetching value of define "__znver2__" : (undefined) 00:04:47.872 Fetching value of define "__znver3__" : (undefined) 00:04:47.872 Fetching value of define "__znver4__" : (undefined) 00:04:47.872 Compiler for C supports arguments -Wno-format-truncation: NO 00:04:47.872 Message: lib/log: Defining dependency "log" 00:04:47.872 Message: lib/kvargs: Defining dependency "kvargs" 00:04:47.872 Message: lib/telemetry: Defining dependency "telemetry" 00:04:47.872 Checking if "Detect argument count for CPU_OR" compiles: YES 00:04:47.872 Checking for function "getentropy" : YES 00:04:47.872 Message: lib/eal: Defining dependency "eal" 00:04:47.872 Message: lib/ring: Defining dependency "ring" 00:04:47.872 Message: lib/rcu: Defining dependency "rcu" 00:04:47.872 Message: lib/mempool: Defining dependency "mempool" 00:04:47.872 Message: lib/mbuf: Defining dependency "mbuf" 00:04:47.872 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:47.872 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:47.872 Compiler for C supports arguments -mpclmul: YES 00:04:47.872 Compiler for C supports arguments -maes: YES 00:04:47.872 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:47.872 Compiler for C supports arguments -mavx512bw: YES 00:04:47.872 Compiler for C supports arguments -mavx512dq: YES 00:04:47.872 Compiler for C supports arguments -mavx512vl: YES 00:04:47.872 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:47.872 Compiler for C supports arguments -mavx2: YES 00:04:47.872 Compiler for C supports arguments -mavx: YES 00:04:47.872 Message: lib/net: Defining dependency "net" 00:04:47.872 Message: lib/meter: Defining dependency "meter" 00:04:47.872 Message: lib/ethdev: Defining dependency "ethdev" 00:04:47.872 Message: lib/pci: Defining dependency "pci" 00:04:47.872 Message: lib/cmdline: Defining dependency "cmdline" 00:04:47.872 Message: lib/hash: Defining dependency "hash" 00:04:47.872 Message: lib/timer: Defining dependency "timer" 00:04:47.872 Message: lib/compressdev: Defining dependency "compressdev" 00:04:47.872 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:47.872 Message: lib/dmadev: Defining dependency "dmadev" 00:04:47.872 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:47.872 Message: lib/reorder: Defining dependency "reorder" 00:04:47.872 Message: lib/security: Defining dependency "security" 00:04:47.872 Has header "linux/userfaultfd.h" : NO 00:04:47.872 Has 
header "linux/vduse.h" : NO 00:04:47.872 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:04:47.872 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:47.872 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:47.872 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:47.872 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:47.872 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:47.872 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:47.872 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:04:47.872 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:47.872 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:47.872 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:47.872 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:47.872 Configuring doxy-api-html.conf using configuration 00:04:47.872 Configuring doxy-api-man.conf using configuration 00:04:47.872 Program mandb found: NO 00:04:47.872 Program sphinx-build found: NO 00:04:47.872 Configuring rte_build_config.h using configuration 00:04:47.872 Message: 00:04:47.872 ================= 00:04:47.872 Applications Enabled 00:04:47.872 ================= 00:04:47.872 00:04:47.872 apps: 00:04:47.872 00:04:47.872 00:04:47.872 Message: 00:04:47.872 ================= 00:04:47.872 Libraries Enabled 00:04:47.872 ================= 00:04:47.872 00:04:47.872 libs: 00:04:47.872 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:47.872 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:47.872 cryptodev, dmadev, reorder, security, 00:04:47.872 00:04:47.872 Message: 00:04:47.872 =============== 00:04:47.872 Drivers Enabled 00:04:47.872 =============== 00:04:47.872 00:04:47.872 common: 00:04:47.872 00:04:47.872 bus: 00:04:47.872 pci, vdev, 00:04:47.872 mempool: 00:04:47.872 ring, 00:04:47.872 dma: 00:04:47.872 00:04:47.872 net: 00:04:47.872 00:04:47.872 crypto: 00:04:47.872 00:04:47.872 compress: 00:04:47.872 00:04:47.872 00:04:47.872 Message: 00:04:47.872 ================= 00:04:47.872 Content Skipped 00:04:47.872 ================= 00:04:47.872 00:04:47.872 apps: 00:04:47.872 dumpcap: explicitly disabled via build config 00:04:47.872 graph: explicitly disabled via build config 00:04:47.872 pdump: explicitly disabled via build config 00:04:47.872 proc-info: explicitly disabled via build config 00:04:47.872 test-acl: explicitly disabled via build config 00:04:47.872 test-bbdev: explicitly disabled via build config 00:04:47.872 test-cmdline: explicitly disabled via build config 00:04:47.872 test-compress-perf: explicitly disabled via build config 00:04:47.872 test-crypto-perf: explicitly disabled via build config 00:04:47.872 test-dma-perf: explicitly disabled via build config 00:04:47.872 test-eventdev: explicitly disabled via build config 00:04:47.872 test-fib: explicitly disabled via build config 00:04:47.872 test-flow-perf: explicitly disabled via build config 00:04:47.872 test-gpudev: explicitly disabled via build config 00:04:47.872 test-mldev: explicitly disabled via build config 00:04:47.872 test-pipeline: explicitly disabled via build config 00:04:47.872 test-pmd: explicitly disabled via build config 00:04:47.872 test-regex: explicitly disabled via build config 00:04:47.872 test-sad: explicitly disabled via build config 00:04:47.872 test-security-perf: explicitly 
disabled via build config 00:04:47.872 00:04:47.872 libs: 00:04:47.872 metrics: explicitly disabled via build config 00:04:47.872 acl: explicitly disabled via build config 00:04:47.872 bbdev: explicitly disabled via build config 00:04:47.872 bitratestats: explicitly disabled via build config 00:04:47.872 bpf: explicitly disabled via build config 00:04:47.872 cfgfile: explicitly disabled via build config 00:04:47.872 distributor: explicitly disabled via build config 00:04:47.872 efd: explicitly disabled via build config 00:04:47.872 eventdev: explicitly disabled via build config 00:04:47.872 dispatcher: explicitly disabled via build config 00:04:47.872 gpudev: explicitly disabled via build config 00:04:47.872 gro: explicitly disabled via build config 00:04:47.872 gso: explicitly disabled via build config 00:04:47.872 ip_frag: explicitly disabled via build config 00:04:47.872 jobstats: explicitly disabled via build config 00:04:47.872 latencystats: explicitly disabled via build config 00:04:47.872 lpm: explicitly disabled via build config 00:04:47.872 member: explicitly disabled via build config 00:04:47.872 pcapng: explicitly disabled via build config 00:04:47.872 power: only supported on Linux 00:04:47.872 rawdev: explicitly disabled via build config 00:04:47.872 regexdev: explicitly disabled via build config 00:04:47.872 mldev: explicitly disabled via build config 00:04:47.872 rib: explicitly disabled via build config 00:04:47.872 sched: explicitly disabled via build config 00:04:47.872 stack: explicitly disabled via build config 00:04:47.872 vhost: only supported on Linux 00:04:47.872 ipsec: explicitly disabled via build config 00:04:47.872 pdcp: explicitly disabled via build config 00:04:47.872 fib: explicitly disabled via build config 00:04:47.872 port: explicitly disabled via build config 00:04:47.872 pdump: explicitly disabled via build config 00:04:47.872 table: explicitly disabled via build config 00:04:47.872 pipeline: explicitly disabled via build config 00:04:47.872 graph: explicitly disabled via build config 00:04:47.872 node: explicitly disabled via build config 00:04:47.872 00:04:47.872 drivers: 00:04:47.872 common/cpt: not in enabled drivers build config 00:04:47.872 common/dpaax: not in enabled drivers build config 00:04:47.872 common/iavf: not in enabled drivers build config 00:04:47.872 common/idpf: not in enabled drivers build config 00:04:47.872 common/mvep: not in enabled drivers build config 00:04:47.872 common/octeontx: not in enabled drivers build config 00:04:47.872 bus/auxiliary: not in enabled drivers build config 00:04:47.872 bus/cdx: not in enabled drivers build config 00:04:47.872 bus/dpaa: not in enabled drivers build config 00:04:47.872 bus/fslmc: not in enabled drivers build config 00:04:47.872 bus/ifpga: not in enabled drivers build config 00:04:47.872 bus/platform: not in enabled drivers build config 00:04:47.872 bus/vmbus: not in enabled drivers build config 00:04:47.872 common/cnxk: not in enabled drivers build config 00:04:47.872 common/mlx5: not in enabled drivers build config 00:04:47.872 common/nfp: not in enabled drivers build config 00:04:47.872 common/qat: not in enabled drivers build config 00:04:47.872 common/sfc_efx: not in enabled drivers build config 00:04:47.872 mempool/bucket: not in enabled drivers build config 00:04:47.872 mempool/cnxk: not in enabled drivers build config 00:04:47.872 mempool/dpaa: not in enabled drivers build config 00:04:47.872 mempool/dpaa2: not in enabled drivers build config 00:04:47.872 mempool/octeontx: not in 
enabled drivers build config 00:04:47.872 mempool/stack: not in enabled drivers build config 00:04:47.873 dma/cnxk: not in enabled drivers build config 00:04:47.873 dma/dpaa: not in enabled drivers build config 00:04:47.873 dma/dpaa2: not in enabled drivers build config 00:04:47.873 dma/hisilicon: not in enabled drivers build config 00:04:47.873 dma/idxd: not in enabled drivers build config 00:04:47.873 dma/ioat: not in enabled drivers build config 00:04:47.873 dma/skeleton: not in enabled drivers build config 00:04:47.873 net/af_packet: not in enabled drivers build config 00:04:47.873 net/af_xdp: not in enabled drivers build config 00:04:47.873 net/ark: not in enabled drivers build config 00:04:47.873 net/atlantic: not in enabled drivers build config 00:04:47.873 net/avp: not in enabled drivers build config 00:04:47.873 net/axgbe: not in enabled drivers build config 00:04:47.873 net/bnx2x: not in enabled drivers build config 00:04:47.873 net/bnxt: not in enabled drivers build config 00:04:47.873 net/bonding: not in enabled drivers build config 00:04:47.873 net/cnxk: not in enabled drivers build config 00:04:47.873 net/cpfl: not in enabled drivers build config 00:04:47.873 net/cxgbe: not in enabled drivers build config 00:04:47.873 net/dpaa: not in enabled drivers build config 00:04:47.873 net/dpaa2: not in enabled drivers build config 00:04:47.873 net/e1000: not in enabled drivers build config 00:04:47.873 net/ena: not in enabled drivers build config 00:04:47.873 net/enetc: not in enabled drivers build config 00:04:47.873 net/enetfec: not in enabled drivers build config 00:04:47.873 net/enic: not in enabled drivers build config 00:04:47.873 net/failsafe: not in enabled drivers build config 00:04:47.873 net/fm10k: not in enabled drivers build config 00:04:47.873 net/gve: not in enabled drivers build config 00:04:47.873 net/hinic: not in enabled drivers build config 00:04:47.873 net/hns3: not in enabled drivers build config 00:04:47.873 net/i40e: not in enabled drivers build config 00:04:47.873 net/iavf: not in enabled drivers build config 00:04:47.873 net/ice: not in enabled drivers build config 00:04:47.873 net/idpf: not in enabled drivers build config 00:04:47.873 net/igc: not in enabled drivers build config 00:04:47.873 net/ionic: not in enabled drivers build config 00:04:47.873 net/ipn3ke: not in enabled drivers build config 00:04:47.873 net/ixgbe: not in enabled drivers build config 00:04:47.873 net/mana: not in enabled drivers build config 00:04:47.873 net/memif: not in enabled drivers build config 00:04:47.873 net/mlx4: not in enabled drivers build config 00:04:47.873 net/mlx5: not in enabled drivers build config 00:04:47.873 net/mvneta: not in enabled drivers build config 00:04:47.873 net/mvpp2: not in enabled drivers build config 00:04:47.873 net/netvsc: not in enabled drivers build config 00:04:47.873 net/nfb: not in enabled drivers build config 00:04:47.873 net/nfp: not in enabled drivers build config 00:04:47.873 net/ngbe: not in enabled drivers build config 00:04:47.873 net/null: not in enabled drivers build config 00:04:47.873 net/octeontx: not in enabled drivers build config 00:04:47.873 net/octeon_ep: not in enabled drivers build config 00:04:47.873 net/pcap: not in enabled drivers build config 00:04:47.873 net/pfe: not in enabled drivers build config 00:04:47.873 net/qede: not in enabled drivers build config 00:04:47.873 net/ring: not in enabled drivers build config 00:04:47.873 net/sfc: not in enabled drivers build config 00:04:47.873 net/softnic: not in enabled drivers 
build config 00:04:47.873 net/tap: not in enabled drivers build config 00:04:47.873 net/thunderx: not in enabled drivers build config 00:04:47.873 net/txgbe: not in enabled drivers build config 00:04:47.873 net/vdev_netvsc: not in enabled drivers build config 00:04:47.873 net/vhost: not in enabled drivers build config 00:04:47.873 net/virtio: not in enabled drivers build config 00:04:47.873 net/vmxnet3: not in enabled drivers build config 00:04:47.873 raw/*: missing internal dependency, "rawdev" 00:04:47.873 crypto/armv8: not in enabled drivers build config 00:04:47.873 crypto/bcmfs: not in enabled drivers build config 00:04:47.873 crypto/caam_jr: not in enabled drivers build config 00:04:47.873 crypto/ccp: not in enabled drivers build config 00:04:47.873 crypto/cnxk: not in enabled drivers build config 00:04:47.873 crypto/dpaa_sec: not in enabled drivers build config 00:04:47.873 crypto/dpaa2_sec: not in enabled drivers build config 00:04:47.873 crypto/ipsec_mb: not in enabled drivers build config 00:04:47.873 crypto/mlx5: not in enabled drivers build config 00:04:47.873 crypto/mvsam: not in enabled drivers build config 00:04:47.873 crypto/nitrox: not in enabled drivers build config 00:04:47.873 crypto/null: not in enabled drivers build config 00:04:47.873 crypto/octeontx: not in enabled drivers build config 00:04:47.873 crypto/openssl: not in enabled drivers build config 00:04:47.873 crypto/scheduler: not in enabled drivers build config 00:04:47.873 crypto/uadk: not in enabled drivers build config 00:04:47.873 crypto/virtio: not in enabled drivers build config 00:04:47.873 compress/isal: not in enabled drivers build config 00:04:47.873 compress/mlx5: not in enabled drivers build config 00:04:47.873 compress/octeontx: not in enabled drivers build config 00:04:47.873 compress/zlib: not in enabled drivers build config 00:04:47.873 regex/*: missing internal dependency, "regexdev" 00:04:47.873 ml/*: missing internal dependency, "mldev" 00:04:47.873 vdpa/*: missing internal dependency, "vhost" 00:04:47.873 event/*: missing internal dependency, "eventdev" 00:04:47.873 baseband/*: missing internal dependency, "bbdev" 00:04:47.873 gpu/*: missing internal dependency, "gpudev" 00:04:47.873 00:04:47.873 00:04:47.873 Build targets in project: 81 00:04:47.873 00:04:47.873 DPDK 23.11.0 00:04:47.873 00:04:47.873 User defined options 00:04:47.873 buildtype : debug 00:04:47.873 default_library : static 00:04:47.873 libdir : lib 00:04:47.873 prefix : / 00:04:47.873 c_args : -fPIC -Werror 00:04:47.873 c_link_args : 00:04:47.873 cpu_instruction_set: native 00:04:47.873 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:47.873 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:47.873 enable_docs : false 00:04:47.873 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:47.873 enable_kmods : true 00:04:47.873 tests : false 00:04:47.873 00:04:47.873 Found ninja-1.11.1 at /usr/local/bin/ninja 00:04:48.440 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:48.440 [1/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:48.440 [2/231] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:48.440 [3/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:04:48.440 [4/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:48.440 [5/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:48.440 [6/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:48.440 [7/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:48.440 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:48.698 [9/231] Linking static target lib/librte_log.a 00:04:48.698 [10/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:48.698 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:48.698 [12/231] Linking static target lib/librte_kvargs.a 00:04:48.698 [13/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:48.956 [14/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:48.956 [15/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:48.956 [16/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:48.956 [17/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:48.956 [18/231] Linking static target lib/librte_telemetry.a 00:04:48.956 [19/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:48.956 [20/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:48.956 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:48.956 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:49.214 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:49.214 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:49.214 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:49.214 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:49.214 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:49.214 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:49.472 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:49.472 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:49.472 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:49.472 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:49.472 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:49.472 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:49.472 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:49.472 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:49.472 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:49.730 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:49.730 [39/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.730 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:49.730 [41/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 
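The [n/231] lines are ninja compiling SPDK's bundled DPDK 23.11 in dpdk/build-tmp with the options summarized above (debug build, static libraries, -fPIC -Werror, FreeBSD kernel modules enabled). SPDK's dpdkbuild makefile drives this step internally; an approximate stand-alone equivalent, leaving out the long disable_apps/disable_libs lists shown in the summary, would be:

meson setup /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp /usr/home/vagrant/spdk_repo/spdk/dpdk \
    --buildtype=debug --default-library=static --prefix=/ --libdir=lib \
    -Dc_args='-fPIC -Werror' -Denable_kmods=true -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp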
00:04:49.730 [42/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:49.988 [43/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:49.988 [44/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:49.988 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:49.988 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:49.988 [47/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:49.988 [48/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:49.988 [49/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:49.988 [50/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:49.988 [51/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:04:49.988 [52/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:04:49.988 [53/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:50.247 [54/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:50.247 [55/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:50.247 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:50.247 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:50.247 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:50.247 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:50.247 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:50.247 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:04:50.505 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:04:50.505 [63/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:50.505 [64/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:04:50.505 [65/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:50.505 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:04:50.505 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:04:50.505 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:04:50.764 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:04:50.764 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:04:50.764 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:04:50.764 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:50.764 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:50.764 [74/231] Linking static target lib/librte_eal.a 00:04:50.764 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:50.764 [76/231] Linking static target lib/librte_ring.a 00:04:51.022 [77/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.022 [78/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:51.022 [79/231] Linking static target lib/librte_rcu.a 00:04:51.022 [80/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:51.022 [81/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:51.022 [82/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 
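Each "Linking static target" line in this ninja run leaves a .a archive inside the build directory; once the run reaches 231/231, the set of enabled libraries and drivers from the meson summary can be confirmed with a quick listing (build-tmp path as used above):

ls /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_*.a
ls /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/drivers/librte_*.a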
00:04:51.022 [83/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:51.022 [84/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.022 [85/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:51.281 [86/231] Linking static target lib/librte_mempool.a 00:04:51.281 [87/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:51.281 [88/231] Linking target lib/librte_log.so.24.0 00:04:51.281 [89/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:51.281 [90/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.281 [91/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.281 [92/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:51.539 [93/231] Linking target lib/librte_kvargs.so.24.0 00:04:51.539 [94/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:51.539 [95/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:51.539 [96/231] Linking static target lib/librte_mbuf.a 00:04:51.539 [97/231] Linking target lib/librte_telemetry.so.24.0 00:04:51.539 [98/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:51.539 [99/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:51.539 [100/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:51.539 [101/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:51.539 [102/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:51.539 [103/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:51.539 [104/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:51.539 [105/231] Linking static target lib/librte_meter.a 00:04:51.797 [106/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:51.797 [107/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:51.797 [108/231] Linking static target lib/librte_net.a 00:04:51.797 [109/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:51.797 [110/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.057 [111/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.057 [112/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:52.057 [113/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:52.057 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:52.315 [115/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.315 [116/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:52.315 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:52.574 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:52.574 [119/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:52.574 [120/231] Linking static target lib/librte_pci.a 00:04:52.574 [121/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:52.574 [122/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:52.574 [123/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:52.574 [124/231] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:52.833 [125/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:52.833 [126/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:52.833 [127/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:52.833 [128/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.833 [129/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:52.833 [130/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:52.833 [131/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:52.833 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:52.833 [133/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.833 [134/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:52.833 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:52.833 [136/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:52.833 [137/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:52.833 [138/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:52.833 [139/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:52.833 [140/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:53.092 [141/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:53.092 [142/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:53.092 [143/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:53.092 [144/231] Linking static target lib/librte_cmdline.a 00:04:53.355 [145/231] Linking static target lib/librte_ethdev.a 00:04:53.355 [146/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:53.355 [147/231] Linking static target lib/librte_timer.a 00:04:53.355 [148/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:53.355 [149/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:53.356 [150/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:53.356 [151/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:53.356 [152/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:53.356 [153/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:53.356 [154/231] Linking static target lib/librte_compressdev.a 00:04:53.614 [155/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:53.614 [156/231] Linking static target lib/librte_hash.a 00:04:53.614 [157/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.872 [158/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:53.872 [159/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:53.872 [160/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:53.872 [161/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:53.872 [162/231] Linking static target lib/librte_dmadev.a 00:04:53.872 [163/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:53.872 [164/231] 
Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.872 [165/231] Linking static target lib/librte_reorder.a 00:04:54.131 [166/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:54.131 [167/231] Linking static target lib/librte_security.a 00:04:54.131 [168/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.131 [169/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.131 [170/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:54.131 [171/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:54.131 [172/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.131 [173/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.388 [174/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:54.388 [175/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:04:54.388 [176/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:54.388 [177/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:54.388 [178/231] Linking static target lib/librte_cryptodev.a 00:04:54.388 [179/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.388 [180/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:54.388 [181/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:54.645 [182/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:54.645 [183/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:54.645 [184/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:54.645 [185/231] Linking static target drivers/librte_bus_pci.a 00:04:54.645 [186/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:54.645 [187/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:54.645 [188/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:54.645 [189/231] Linking static target drivers/librte_bus_vdev.a 00:04:54.902 [190/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:54.902 [191/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:54.902 [192/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.902 [193/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.902 [194/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:55.160 [195/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:55.160 [196/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:55.160 [197/231] Linking static target drivers/librte_mempool_ring.a 00:04:55.160 [198/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.418 [199/231] Generating kernel/freebsd/contigmem with a custom command 00:04:55.418 machine -> /usr/src/sys/amd64/include 00:04:55.418 x86 -> 
/usr/src/sys/x86/include 00:04:55.418 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:04:55.418 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:04:55.418 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:04:55.418 touch opt_global.h 00:04:55.418 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:04:55.418 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:04:55.418 :> export_syms 00:04:55.418 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:04:55.418 objcopy --strip-debug contigmem.ko 00:04:55.983 [200/231] Generating kernel/freebsd/nic_uio with a custom command 00:04:55.983 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:04:55.983 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:04:55.983 :> export_syms 00:04:55.983 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:04:55.983 objcopy --strip-debug nic_uio.ko 00:04:58.517 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.822 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.822 [203/231] Linking target lib/librte_eal.so.24.0 00:05:01.822 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:05:01.822 [205/231] Linking target lib/librte_meter.so.24.0 00:05:01.822 [206/231] Linking target lib/librte_pci.so.24.0 00:05:01.822 [207/231] Linking target lib/librte_timer.so.24.0 00:05:01.822 [208/231] Linking target drivers/librte_bus_vdev.so.24.0 00:05:01.822 [209/231] Linking target lib/librte_ring.so.24.0 00:05:01.822 [210/231] Linking target lib/librte_dmadev.so.24.0 00:05:01.822 [211/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:05:01.822 [212/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:05:01.822 [213/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:05:01.822 [214/231] Linking target drivers/librte_bus_pci.so.24.0 00:05:01.822 [215/231] Linking target lib/librte_rcu.so.24.0 00:05:01.822 [216/231] Linking target lib/librte_mempool.so.24.0 00:05:01.822 [217/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:05:01.822 [218/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:05:01.822 [219/231] Linking target lib/librte_mbuf.so.24.0 00:05:01.822 [220/231] Linking target drivers/librte_mempool_ring.so.24.0 00:05:02.080 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:05:02.080 [222/231] Linking target lib/librte_net.so.24.0 00:05:02.080 [223/231] Linking target lib/librte_compressdev.so.24.0 00:05:02.080 [224/231] Linking target lib/librte_reorder.so.24.0 00:05:02.080 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:05:02.080 [226/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:05:02.080 [227/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:05:02.080 [228/231] Linking target 
lib/librte_hash.so.24.0 00:05:02.081 [229/231] Linking target lib/librte_cmdline.so.24.0 00:05:02.081 [230/231] Linking target lib/librte_security.so.24.0 00:05:02.081 [231/231] Linking target lib/librte_ethdev.so.24.0 00:05:02.081 INFO: autodetecting backend as ninja 00:05:02.081 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:03.013 CC lib/ut/ut.o 00:05:03.013 CC lib/log/log.o 00:05:03.013 CC lib/log/log_flags.o 00:05:03.013 CC lib/log/log_deprecated.o 00:05:03.013 CC lib/ut_mock/mock.o 00:05:03.271 LIB libspdk_ut_mock.a 00:05:03.271 LIB libspdk_ut.a 00:05:03.271 LIB libspdk_log.a 00:05:03.271 CXX lib/trace_parser/trace.o 00:05:03.271 CC lib/util/base64.o 00:05:03.271 CC lib/util/bit_array.o 00:05:03.271 CC lib/util/cpuset.o 00:05:03.271 CC lib/util/crc16.o 00:05:03.271 CC lib/util/crc32.o 00:05:03.271 CC lib/util/crc32c.o 00:05:03.271 CC lib/util/crc32_ieee.o 00:05:03.271 CC lib/ioat/ioat.o 00:05:03.271 CC lib/dma/dma.o 00:05:03.271 CC lib/util/crc64.o 00:05:03.271 CC lib/util/dif.o 00:05:03.271 CC lib/util/fd.o 00:05:03.271 CC lib/util/file.o 00:05:03.271 CC lib/util/hexlify.o 00:05:03.271 CC lib/util/iov.o 00:05:03.271 LIB libspdk_dma.a 00:05:03.529 CC lib/util/math.o 00:05:03.529 CC lib/util/pipe.o 00:05:03.529 CC lib/util/strerror_tls.o 00:05:03.529 LIB libspdk_ioat.a 00:05:03.529 CC lib/util/string.o 00:05:03.529 CC lib/util/uuid.o 00:05:03.529 CC lib/util/fd_group.o 00:05:03.529 CC lib/util/xor.o 00:05:03.529 CC lib/util/zipf.o 00:05:03.529 LIB libspdk_util.a 00:05:03.788 CC lib/json/json_parse.o 00:05:03.788 CC lib/json/json_util.o 00:05:03.788 CC lib/json/json_write.o 00:05:03.788 CC lib/idxd/idxd.o 00:05:03.788 CC lib/idxd/idxd_user.o 00:05:03.788 CC lib/conf/conf.o 00:05:03.788 CC lib/env_dpdk/env.o 00:05:03.788 CC lib/rdma/common.o 00:05:03.788 CC lib/vmd/vmd.o 00:05:03.788 CC lib/env_dpdk/memory.o 00:05:03.788 CC lib/vmd/led.o 00:05:03.788 CC lib/rdma/rdma_verbs.o 00:05:03.788 LIB libspdk_conf.a 00:05:03.788 CC lib/env_dpdk/pci.o 00:05:03.788 LIB libspdk_json.a 00:05:03.788 CC lib/env_dpdk/init.o 00:05:03.788 CC lib/env_dpdk/threads.o 00:05:03.788 LIB libspdk_idxd.a 00:05:03.788 CC lib/env_dpdk/pci_ioat.o 00:05:03.788 CC lib/env_dpdk/pci_virtio.o 00:05:03.788 LIB libspdk_vmd.a 00:05:04.046 CC lib/env_dpdk/pci_vmd.o 00:05:04.046 CC lib/env_dpdk/pci_idxd.o 00:05:04.046 CC lib/env_dpdk/pci_event.o 00:05:04.046 LIB libspdk_rdma.a 00:05:04.046 CC lib/env_dpdk/sigbus_handler.o 00:05:04.046 CC lib/env_dpdk/pci_dpdk.o 00:05:04.046 CC lib/jsonrpc/jsonrpc_server.o 00:05:04.046 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:04.046 CC lib/jsonrpc/jsonrpc_client.o 00:05:04.046 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:04.046 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:04.046 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:04.046 LIB libspdk_jsonrpc.a 00:05:04.304 CC lib/rpc/rpc.o 00:05:04.304 LIB libspdk_trace_parser.a 00:05:04.304 LIB libspdk_rpc.a 00:05:04.563 CC lib/notify/notify.o 00:05:04.563 CC lib/notify/notify_rpc.o 00:05:04.563 CC lib/trace/trace.o 00:05:04.563 CC lib/trace/trace_flags.o 00:05:04.563 CC lib/trace/trace_rpc.o 00:05:04.563 CC lib/keyring/keyring.o 00:05:04.563 CC lib/keyring/keyring_rpc.o 00:05:04.563 LIB libspdk_env_dpdk.a 00:05:04.563 LIB libspdk_notify.a 00:05:04.563 LIB libspdk_keyring.a 00:05:04.563 LIB libspdk_trace.a 00:05:04.823 CC lib/sock/sock.o 00:05:04.823 CC lib/sock/sock_rpc.o 00:05:04.823 CC lib/thread/thread.o 00:05:04.823 CC lib/thread/iobuf.o 00:05:04.823 LIB libspdk_sock.a 00:05:05.081 CC 
lib/nvme/nvme_ctrlr_cmd.o 00:05:05.081 CC lib/nvme/nvme_ctrlr.o 00:05:05.081 CC lib/nvme/nvme_fabric.o 00:05:05.081 CC lib/nvme/nvme_ns_cmd.o 00:05:05.081 CC lib/nvme/nvme_pcie_common.o 00:05:05.081 CC lib/nvme/nvme_ns.o 00:05:05.081 CC lib/nvme/nvme_qpair.o 00:05:05.081 CC lib/nvme/nvme.o 00:05:05.081 CC lib/nvme/nvme_pcie.o 00:05:05.081 LIB libspdk_thread.a 00:05:05.081 CC lib/nvme/nvme_quirks.o 00:05:05.715 CC lib/accel/accel.o 00:05:05.715 CC lib/accel/accel_rpc.o 00:05:05.715 CC lib/accel/accel_sw.o 00:05:05.715 CC lib/nvme/nvme_transport.o 00:05:05.715 CC lib/blob/blobstore.o 00:05:05.715 CC lib/blob/request.o 00:05:05.715 CC lib/init/json_config.o 00:05:05.715 CC lib/blob/zeroes.o 00:05:05.715 CC lib/init/subsystem.o 00:05:05.715 CC lib/blob/blob_bs_dev.o 00:05:05.715 CC lib/init/subsystem_rpc.o 00:05:05.715 CC lib/nvme/nvme_discovery.o 00:05:05.715 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:05.715 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:05.715 CC lib/init/rpc.o 00:05:05.715 CC lib/nvme/nvme_tcp.o 00:05:05.715 CC lib/nvme/nvme_opal.o 00:05:05.715 LIB libspdk_accel.a 00:05:05.715 CC lib/nvme/nvme_io_msg.o 00:05:05.715 CC lib/nvme/nvme_poll_group.o 00:05:05.715 LIB libspdk_init.a 00:05:05.715 CC lib/nvme/nvme_zns.o 00:05:05.974 CC lib/bdev/bdev.o 00:05:05.974 LIB libspdk_blob.a 00:05:05.974 CC lib/bdev/bdev_rpc.o 00:05:05.974 CC lib/bdev/bdev_zone.o 00:05:05.974 CC lib/nvme/nvme_stubs.o 00:05:06.233 CC lib/bdev/part.o 00:05:06.233 CC lib/event/app.o 00:05:06.233 CC lib/bdev/scsi_nvme.o 00:05:06.233 CC lib/nvme/nvme_auth.o 00:05:06.233 CC lib/nvme/nvme_rdma.o 00:05:06.233 CC lib/event/reactor.o 00:05:06.233 CC lib/blobfs/blobfs.o 00:05:06.233 CC lib/event/log_rpc.o 00:05:06.233 CC lib/blobfs/tree.o 00:05:06.233 CC lib/lvol/lvol.o 00:05:06.492 CC lib/event/app_rpc.o 00:05:06.492 CC lib/event/scheduler_static.o 00:05:06.492 LIB libspdk_event.a 00:05:06.492 LIB libspdk_blobfs.a 00:05:06.492 LIB libspdk_lvol.a 00:05:06.751 LIB libspdk_bdev.a 00:05:06.751 CC lib/scsi/dev.o 00:05:06.751 CC lib/scsi/scsi.o 00:05:06.751 CC lib/scsi/lun.o 00:05:06.751 CC lib/scsi/port.o 00:05:06.751 CC lib/scsi/scsi_bdev.o 00:05:06.751 CC lib/scsi/scsi_pr.o 00:05:06.751 CC lib/scsi/scsi_rpc.o 00:05:06.751 CC lib/scsi/task.o 00:05:06.751 LIB libspdk_nvme.a 00:05:07.009 CC lib/nvmf/ctrlr.o 00:05:07.009 CC lib/nvmf/ctrlr_discovery.o 00:05:07.009 CC lib/nvmf/subsystem.o 00:05:07.009 CC lib/nvmf/nvmf.o 00:05:07.009 CC lib/nvmf/ctrlr_bdev.o 00:05:07.009 CC lib/nvmf/nvmf_rpc.o 00:05:07.009 CC lib/nvmf/transport.o 00:05:07.009 CC lib/nvmf/tcp.o 00:05:07.009 CC lib/nvmf/stubs.o 00:05:07.009 LIB libspdk_scsi.a 00:05:07.009 CC lib/nvmf/rdma.o 00:05:07.009 CC lib/nvmf/auth.o 00:05:07.009 CC lib/iscsi/conn.o 00:05:07.009 CC lib/iscsi/init_grp.o 00:05:07.267 CC lib/iscsi/iscsi.o 00:05:07.268 CC lib/iscsi/md5.o 00:05:07.268 CC lib/iscsi/param.o 00:05:07.268 CC lib/iscsi/portal_grp.o 00:05:07.268 CC lib/iscsi/tgt_node.o 00:05:07.268 CC lib/iscsi/iscsi_subsystem.o 00:05:07.268 CC lib/iscsi/iscsi_rpc.o 00:05:07.268 CC lib/iscsi/task.o 00:05:07.527 LIB libspdk_nvmf.a 00:05:07.527 LIB libspdk_iscsi.a 00:05:07.786 CC module/env_dpdk/env_dpdk_rpc.o 00:05:07.786 CC module/keyring/file/keyring.o 00:05:07.786 CC module/keyring/file/keyring_rpc.o 00:05:07.786 CC module/blob/bdev/blob_bdev.o 00:05:07.786 CC module/accel/dsa/accel_dsa.o 00:05:07.786 CC module/accel/iaa/accel_iaa.o 00:05:07.786 CC module/sock/posix/posix.o 00:05:07.786 CC module/accel/ioat/accel_ioat.o 00:05:07.786 CC module/accel/error/accel_error.o 00:05:07.786 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:05:07.786 LIB libspdk_env_dpdk_rpc.a 00:05:07.786 CC module/accel/iaa/accel_iaa_rpc.o 00:05:07.786 CC module/accel/error/accel_error_rpc.o 00:05:07.786 LIB libspdk_keyring_file.a 00:05:07.786 CC module/accel/ioat/accel_ioat_rpc.o 00:05:07.786 CC module/accel/dsa/accel_dsa_rpc.o 00:05:07.786 LIB libspdk_accel_iaa.a 00:05:07.786 LIB libspdk_blob_bdev.a 00:05:07.786 LIB libspdk_scheduler_dynamic.a 00:05:07.786 LIB libspdk_accel_error.a 00:05:07.786 LIB libspdk_accel_ioat.a 00:05:08.044 LIB libspdk_accel_dsa.a 00:05:08.044 CC module/bdev/error/vbdev_error.o 00:05:08.044 CC module/bdev/gpt/gpt.o 00:05:08.044 CC module/bdev/malloc/bdev_malloc.o 00:05:08.044 CC module/bdev/delay/vbdev_delay.o 00:05:08.044 CC module/bdev/lvol/vbdev_lvol.o 00:05:08.044 CC module/bdev/nvme/bdev_nvme.o 00:05:08.044 CC module/blobfs/bdev/blobfs_bdev.o 00:05:08.044 CC module/bdev/null/bdev_null.o 00:05:08.044 CC module/bdev/passthru/vbdev_passthru.o 00:05:08.044 LIB libspdk_sock_posix.a 00:05:08.044 CC module/bdev/gpt/vbdev_gpt.o 00:05:08.044 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:08.044 CC module/bdev/error/vbdev_error_rpc.o 00:05:08.044 CC module/bdev/null/bdev_null_rpc.o 00:05:08.044 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:08.044 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:08.044 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:08.044 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:08.302 LIB libspdk_bdev_gpt.a 00:05:08.302 LIB libspdk_bdev_error.a 00:05:08.302 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:08.302 CC module/bdev/nvme/nvme_rpc.o 00:05:08.302 LIB libspdk_bdev_malloc.a 00:05:08.302 LIB libspdk_bdev_null.a 00:05:08.302 LIB libspdk_bdev_delay.a 00:05:08.302 CC module/bdev/nvme/bdev_mdns_client.o 00:05:08.302 CC module/bdev/raid/bdev_raid.o 00:05:08.302 CC module/bdev/raid/bdev_raid_rpc.o 00:05:08.302 CC module/bdev/raid/bdev_raid_sb.o 00:05:08.302 LIB libspdk_blobfs_bdev.a 00:05:08.302 LIB libspdk_bdev_passthru.a 00:05:08.302 CC module/bdev/raid/raid0.o 00:05:08.302 LIB libspdk_bdev_lvol.a 00:05:08.302 CC module/bdev/raid/raid1.o 00:05:08.302 CC module/bdev/split/vbdev_split.o 00:05:08.302 CC module/bdev/raid/concat.o 00:05:08.302 CC module/bdev/split/vbdev_split_rpc.o 00:05:08.302 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:08.302 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:08.302 CC module/bdev/aio/bdev_aio.o 00:05:08.302 CC module/bdev/aio/bdev_aio_rpc.o 00:05:08.561 LIB libspdk_bdev_split.a 00:05:08.561 LIB libspdk_bdev_raid.a 00:05:08.561 LIB libspdk_bdev_nvme.a 00:05:08.561 LIB libspdk_bdev_zone_block.a 00:05:08.561 LIB libspdk_bdev_aio.a 00:05:08.819 CC module/event/subsystems/iobuf/iobuf.o 00:05:08.819 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:08.819 CC module/event/subsystems/keyring/keyring.o 00:05:08.819 CC module/event/subsystems/scheduler/scheduler.o 00:05:08.819 CC module/event/subsystems/vmd/vmd.o 00:05:08.819 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:08.819 CC module/event/subsystems/sock/sock.o 00:05:08.819 LIB libspdk_event_sock.a 00:05:08.819 LIB libspdk_event_keyring.a 00:05:08.819 LIB libspdk_event_scheduler.a 00:05:08.819 LIB libspdk_event_vmd.a 00:05:08.819 LIB libspdk_event_iobuf.a 00:05:09.077 CC module/event/subsystems/accel/accel.o 00:05:09.077 LIB libspdk_event_accel.a 00:05:09.336 CC module/event/subsystems/bdev/bdev.o 00:05:09.336 LIB libspdk_event_bdev.a 00:05:09.336 CC module/event/subsystems/scsi/scsi.o 00:05:09.336 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:09.336 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:05:09.595 LIB libspdk_event_scsi.a 00:05:09.595 LIB libspdk_event_nvmf.a 00:05:09.595 CC module/event/subsystems/iscsi/iscsi.o 00:05:09.854 LIB libspdk_event_iscsi.a 00:05:09.854 CC app/trace_record/trace_record.o 00:05:09.854 CC app/spdk_nvme_perf/perf.o 00:05:09.854 CC app/spdk_nvme_identify/identify.o 00:05:09.854 CC app/spdk_lspci/spdk_lspci.o 00:05:09.854 CXX app/trace/trace.o 00:05:09.854 CC examples/accel/perf/accel_perf.o 00:05:09.854 CC app/iscsi_tgt/iscsi_tgt.o 00:05:09.854 CC app/nvmf_tgt/nvmf_main.o 00:05:09.854 CC app/spdk_tgt/spdk_tgt.o 00:05:10.113 CC test/accel/dif/dif.o 00:05:10.113 LINK spdk_lspci 00:05:10.113 LINK spdk_trace_record 00:05:10.113 LINK spdk_tgt 00:05:10.113 LINK iscsi_tgt 00:05:10.113 LINK accel_perf 00:05:10.113 LINK nvmf_tgt 00:05:10.113 LINK spdk_nvme_perf 00:05:10.113 LINK spdk_nvme_identify 00:05:10.113 LINK dif 00:05:10.113 CC examples/bdev/hello_world/hello_bdev.o 00:05:10.113 CC test/app/bdev_svc/bdev_svc.o 00:05:10.372 CC examples/bdev/bdevperf/bdevperf.o 00:05:10.372 CC examples/ioat/perf/perf.o 00:05:10.372 CC test/bdev/bdevio/bdevio.o 00:05:10.372 LINK bdev_svc 00:05:10.372 TEST_HEADER include/spdk/accel.h 00:05:10.372 TEST_HEADER include/spdk/accel_module.h 00:05:10.372 TEST_HEADER include/spdk/assert.h 00:05:10.372 TEST_HEADER include/spdk/barrier.h 00:05:10.372 TEST_HEADER include/spdk/base64.h 00:05:10.372 CC examples/blob/hello_world/hello_blob.o 00:05:10.372 TEST_HEADER include/spdk/bdev.h 00:05:10.372 TEST_HEADER include/spdk/bdev_module.h 00:05:10.372 TEST_HEADER include/spdk/bdev_zone.h 00:05:10.372 TEST_HEADER include/spdk/bit_array.h 00:05:10.372 TEST_HEADER include/spdk/bit_pool.h 00:05:10.372 TEST_HEADER include/spdk/blob.h 00:05:10.372 TEST_HEADER include/spdk/blob_bdev.h 00:05:10.372 TEST_HEADER include/spdk/blobfs.h 00:05:10.372 CC test/blobfs/mkfs/mkfs.o 00:05:10.372 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:10.372 TEST_HEADER include/spdk/conf.h 00:05:10.372 TEST_HEADER include/spdk/config.h 00:05:10.372 TEST_HEADER include/spdk/cpuset.h 00:05:10.372 TEST_HEADER include/spdk/crc16.h 00:05:10.372 LINK hello_bdev 00:05:10.372 TEST_HEADER include/spdk/crc32.h 00:05:10.372 TEST_HEADER include/spdk/crc64.h 00:05:10.372 TEST_HEADER include/spdk/dif.h 00:05:10.372 TEST_HEADER include/spdk/dma.h 00:05:10.372 TEST_HEADER include/spdk/endian.h 00:05:10.372 TEST_HEADER include/spdk/env.h 00:05:10.372 TEST_HEADER include/spdk/env_dpdk.h 00:05:10.372 TEST_HEADER include/spdk/event.h 00:05:10.372 TEST_HEADER include/spdk/fd.h 00:05:10.372 TEST_HEADER include/spdk/fd_group.h 00:05:10.372 TEST_HEADER include/spdk/file.h 00:05:10.372 TEST_HEADER include/spdk/ftl.h 00:05:10.372 TEST_HEADER include/spdk/gpt_spec.h 00:05:10.372 TEST_HEADER include/spdk/hexlify.h 00:05:10.372 TEST_HEADER include/spdk/histogram_data.h 00:05:10.372 TEST_HEADER include/spdk/idxd.h 00:05:10.372 TEST_HEADER include/spdk/idxd_spec.h 00:05:10.372 TEST_HEADER include/spdk/init.h 00:05:10.372 TEST_HEADER include/spdk/ioat.h 00:05:10.372 TEST_HEADER include/spdk/ioat_spec.h 00:05:10.372 TEST_HEADER include/spdk/iscsi_spec.h 00:05:10.372 TEST_HEADER include/spdk/json.h 00:05:10.372 TEST_HEADER include/spdk/jsonrpc.h 00:05:10.373 TEST_HEADER include/spdk/keyring.h 00:05:10.373 TEST_HEADER include/spdk/keyring_module.h 00:05:10.373 TEST_HEADER include/spdk/likely.h 00:05:10.373 TEST_HEADER include/spdk/log.h 00:05:10.373 LINK ioat_perf 00:05:10.373 TEST_HEADER include/spdk/lvol.h 00:05:10.373 CC examples/blob/cli/blobcli.o 
00:05:10.373 TEST_HEADER include/spdk/memory.h 00:05:10.373 TEST_HEADER include/spdk/mmio.h 00:05:10.373 TEST_HEADER include/spdk/nbd.h 00:05:10.373 TEST_HEADER include/spdk/notify.h 00:05:10.373 TEST_HEADER include/spdk/nvme.h 00:05:10.373 TEST_HEADER include/spdk/nvme_intel.h 00:05:10.373 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:10.373 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:10.373 TEST_HEADER include/spdk/nvme_spec.h 00:05:10.373 TEST_HEADER include/spdk/nvme_zns.h 00:05:10.373 TEST_HEADER include/spdk/nvmf.h 00:05:10.373 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:10.373 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:10.373 TEST_HEADER include/spdk/nvmf_spec.h 00:05:10.373 TEST_HEADER include/spdk/nvmf_transport.h 00:05:10.373 TEST_HEADER include/spdk/opal.h 00:05:10.373 TEST_HEADER include/spdk/opal_spec.h 00:05:10.373 TEST_HEADER include/spdk/pci_ids.h 00:05:10.373 TEST_HEADER include/spdk/pipe.h 00:05:10.373 TEST_HEADER include/spdk/queue.h 00:05:10.373 TEST_HEADER include/spdk/reduce.h 00:05:10.373 TEST_HEADER include/spdk/rpc.h 00:05:10.373 TEST_HEADER include/spdk/scheduler.h 00:05:10.373 TEST_HEADER include/spdk/scsi.h 00:05:10.373 TEST_HEADER include/spdk/scsi_spec.h 00:05:10.373 TEST_HEADER include/spdk/sock.h 00:05:10.373 TEST_HEADER include/spdk/stdinc.h 00:05:10.373 TEST_HEADER include/spdk/string.h 00:05:10.373 TEST_HEADER include/spdk/thread.h 00:05:10.373 TEST_HEADER include/spdk/trace.h 00:05:10.373 TEST_HEADER include/spdk/trace_parser.h 00:05:10.373 TEST_HEADER include/spdk/tree.h 00:05:10.373 TEST_HEADER include/spdk/ublk.h 00:05:10.373 TEST_HEADER include/spdk/util.h 00:05:10.373 TEST_HEADER include/spdk/uuid.h 00:05:10.373 TEST_HEADER include/spdk/version.h 00:05:10.373 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:10.373 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:10.373 TEST_HEADER include/spdk/vhost.h 00:05:10.373 TEST_HEADER include/spdk/vmd.h 00:05:10.373 TEST_HEADER include/spdk/xor.h 00:05:10.373 LINK hello_blob 00:05:10.373 TEST_HEADER include/spdk/zipf.h 00:05:10.373 CXX test/cpp_headers/accel.o 00:05:10.657 CC examples/ioat/verify/verify.o 00:05:10.657 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:10.657 LINK bdevio 00:05:10.657 LINK mkfs 00:05:10.657 LINK bdevperf 00:05:10.657 CC examples/nvme/hello_world/hello_world.o 00:05:10.657 LINK spdk_trace 00:05:10.657 LINK blobcli 00:05:10.657 LINK verify 00:05:10.657 CXX test/cpp_headers/accel_module.o 00:05:10.657 LINK nvme_fuzz 00:05:10.657 CC examples/sock/hello_world/hello_sock.o 00:05:10.657 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:10.657 CC app/spdk_nvme_discover/discovery_aer.o 00:05:10.657 LINK hello_world 00:05:10.657 CC examples/vmd/lsvmd/lsvmd.o 00:05:10.657 CC examples/nvme/reconnect/reconnect.o 00:05:10.916 CXX test/cpp_headers/assert.o 00:05:10.916 CC test/dma/test_dma/test_dma.o 00:05:10.916 CC test/app/histogram_perf/histogram_perf.o 00:05:10.916 LINK spdk_nvme_discover 00:05:10.916 LINK lsvmd 00:05:10.916 LINK hello_sock 00:05:10.916 CC test/env/mem_callbacks/mem_callbacks.o 00:05:10.916 CC test/event/event_perf/event_perf.o 00:05:10.916 LINK histogram_perf 00:05:10.916 LINK reconnect 00:05:10.916 gmake[2]: Nothing to be done for 'all'. 
00:05:10.916 CC app/spdk_top/spdk_top.o 00:05:10.916 LINK event_perf 00:05:10.916 CC examples/vmd/led/led.o 00:05:10.916 CXX test/cpp_headers/barrier.o 00:05:10.916 CC test/env/vtophys/vtophys.o 00:05:10.916 LINK test_dma 00:05:10.916 LINK led 00:05:10.916 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:11.175 CC test/event/reactor/reactor.o 00:05:11.175 CC app/fio/nvme/fio_plugin.o 00:05:11.175 LINK vtophys 00:05:11.175 CXX test/cpp_headers/base64.o 00:05:11.175 LINK reactor 00:05:11.175 CC test/app/jsoncat/jsoncat.o 00:05:11.175 CC examples/nvme/arbitration/arbitration.o 00:05:11.175 LINK iscsi_fuzz 00:05:11.175 LINK spdk_top 00:05:11.175 CC examples/nvmf/nvmf/nvmf.o 00:05:11.175 LINK jsoncat 00:05:11.175 CC test/event/reactor_perf/reactor_perf.o 00:05:11.175 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:05:11.175 struct spdk_nvme_fdp_ruhs ruhs; 00:05:11.175 ^ 00:05:11.175 LINK nvme_manage 00:05:11.175 CXX test/cpp_headers/bdev.o 00:05:11.432 LINK arbitration 00:05:11.432 1 warning generated. 00:05:11.432 LINK spdk_nvme 00:05:11.432 LINK mem_callbacks 00:05:11.432 CC test/app/stub/stub.o 00:05:11.432 CC examples/util/zipf/zipf.o 00:05:11.432 LINK reactor_perf 00:05:11.432 CC test/nvme/aer/aer.o 00:05:11.432 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:11.432 LINK zipf 00:05:11.432 CC examples/thread/thread/thread_ex.o 00:05:11.432 LINK nvmf 00:05:11.432 CC examples/nvme/hotplug/hotplug.o 00:05:11.432 LINK stub 00:05:11.432 CXX test/cpp_headers/bdev_module.o 00:05:11.432 CC app/fio/bdev/fio_plugin.o 00:05:11.432 LINK aer 00:05:11.432 CC examples/idxd/perf/perf.o 00:05:11.432 CC test/rpc_client/rpc_client_test.o 00:05:11.432 LINK env_dpdk_post_init 00:05:11.432 LINK hotplug 00:05:11.432 CXX test/cpp_headers/bdev_zone.o 00:05:11.691 CC test/nvme/reset/reset.o 00:05:11.691 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:11.691 LINK thread 00:05:11.691 LINK rpc_client_test 00:05:11.691 LINK idxd_perf 00:05:11.691 CC test/env/memory/memory_ut.o 00:05:11.691 CC test/nvme/sgl/sgl.o 00:05:11.691 LINK reset 00:05:11.691 LINK cmb_copy 00:05:11.691 LINK spdk_bdev 00:05:11.691 CC test/nvme/e2edp/nvme_dp.o 00:05:11.691 CC examples/nvme/abort/abort.o 00:05:11.691 CC test/env/pci/pci_ut.o 00:05:11.691 CC test/thread/poller_perf/poller_perf.o 00:05:11.691 CXX test/cpp_headers/bit_array.o 00:05:11.691 CXX test/cpp_headers/bit_pool.o 00:05:11.691 LINK poller_perf 00:05:11.949 LINK sgl 00:05:11.949 LINK nvme_dp 00:05:11.949 LINK abort 00:05:11.949 CC test/nvme/overhead/overhead.o 00:05:11.949 LINK pci_ut 00:05:11.949 CXX test/cpp_headers/blob.o 00:05:11.949 CC test/thread/lock/spdk_lock.o 00:05:11.949 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:05:11.949 CXX test/cpp_headers/blob_bdev.o 00:05:11.949 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:11.949 CC test/nvme/err_injection/err_injection.o 00:05:11.949 CC test/nvme/startup/startup.o 00:05:11.949 LINK overhead 00:05:11.949 CC test/unit/lib/accel/accel.c/accel_ut.o 00:05:11.949 LINK histogram_ut 00:05:11.949 CXX test/cpp_headers/blobfs.o 00:05:11.949 LINK err_injection 00:05:11.949 LINK startup 00:05:11.949 CXX test/cpp_headers/blobfs_bdev.o 00:05:11.949 LINK pmr_persistence 00:05:12.207 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:05:12.207 CXX test/cpp_headers/conf.o 00:05:12.207 CC test/nvme/reserve/reserve.o 00:05:12.207 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 
00:05:12.207 LINK memory_ut 00:05:12.207 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:05:12.207 LINK spdk_lock 00:05:12.207 CC test/nvme/simple_copy/simple_copy.o 00:05:12.207 CXX test/cpp_headers/config.o 00:05:12.207 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:05:12.207 CXX test/cpp_headers/cpuset.o 00:05:12.207 LINK reserve 00:05:12.207 LINK tree_ut 00:05:12.465 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:05:12.465 LINK simple_copy 00:05:12.465 CC test/unit/lib/dma/dma.c/dma_ut.o 00:05:12.465 LINK blob_bdev_ut 00:05:12.465 CXX test/cpp_headers/crc16.o 00:05:12.465 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:05:12.465 CC test/unit/lib/event/app.c/app_ut.o 00:05:12.465 CC test/nvme/connect_stress/connect_stress.o 00:05:12.465 CC test/unit/lib/blob/blob.c/blob_ut.o 00:05:12.465 LINK dma_ut 00:05:12.465 LINK blobfs_bdev_ut 00:05:12.465 CXX test/cpp_headers/crc32.o 00:05:12.465 LINK connect_stress 00:05:12.465 CXX test/cpp_headers/crc64.o 00:05:12.723 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:05:12.723 LINK blobfs_async_ut 00:05:12.723 LINK app_ut 00:05:12.723 LINK accel_ut 00:05:12.723 LINK blobfs_sync_ut 00:05:12.723 CC test/nvme/boot_partition/boot_partition.o 00:05:12.723 CXX test/cpp_headers/dif.o 00:05:12.723 CC test/unit/lib/bdev/part.c/part_ut.o 00:05:12.723 CXX test/cpp_headers/dma.o 00:05:12.723 CC test/nvme/compliance/nvme_compliance.o 00:05:12.724 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:05:12.724 CC test/nvme/fused_ordering/fused_ordering.o 00:05:12.724 LINK boot_partition 00:05:12.981 LINK reactor_ut 00:05:12.981 CXX test/cpp_headers/endian.o 00:05:12.981 LINK fused_ordering 00:05:12.981 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:05:12.981 LINK ioat_ut 00:05:12.981 LINK nvme_compliance 00:05:12.981 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:05:12.981 CXX test/cpp_headers/env.o 00:05:12.981 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:05:12.981 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:05:12.981 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:05:12.981 LINK scsi_nvme_ut 00:05:12.981 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:12.981 CXX test/cpp_headers/env_dpdk.o 00:05:13.239 LINK gpt_ut 00:05:13.239 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:05:13.239 LINK doorbell_aers 00:05:13.239 CXX test/cpp_headers/event.o 00:05:13.239 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:05:13.239 LINK init_grp_ut 00:05:13.239 LINK conn_ut 00:05:13.239 CXX test/cpp_headers/fd.o 00:05:13.239 CC test/nvme/fdp/fdp.o 00:05:13.239 LINK bdev_ut 00:05:13.240 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:05:13.498 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:05:13.498 CXX test/cpp_headers/fd_group.o 00:05:13.498 LINK vbdev_lvol_ut 00:05:13.498 LINK part_ut 00:05:13.498 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:05:13.498 LINK fdp 00:05:13.498 CC test/unit/lib/iscsi/param.c/param_ut.o 00:05:13.498 CXX test/cpp_headers/file.o 00:05:13.498 CC test/unit/lib/log/log.c/log_ut.o 00:05:13.498 LINK jsonrpc_server_ut 00:05:13.498 LINK json_util_ut 00:05:13.757 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:05:13.757 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:05:13.757 CXX test/cpp_headers/ftl.o 00:05:13.757 LINK log_ut 00:05:13.757 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:05:13.757 LINK param_ut 00:05:13.757 LINK iscsi_ut 00:05:13.757 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:05:13.757 CXX test/cpp_headers/gpt_spec.o 00:05:13.757 CC 
test/unit/lib/json/json_write.c/json_write_ut.o 00:05:13.757 LINK json_parse_ut 00:05:13.757 CC test/unit/lib/notify/notify.c/notify_ut.o 00:05:14.015 LINK bdev_zone_ut 00:05:14.015 LINK portal_grp_ut 00:05:14.015 CXX test/cpp_headers/hexlify.o 00:05:14.015 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:05:14.015 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:05:14.015 LINK notify_ut 00:05:14.015 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:05:14.015 LINK blob_ut 00:05:14.015 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:05:14.015 LINK bdev_ut 00:05:14.015 CXX test/cpp_headers/histogram_data.o 00:05:14.274 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:05:14.274 LINK bdev_raid_sb_ut 00:05:14.274 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:05:14.274 CXX test/cpp_headers/idxd.o 00:05:14.274 LINK tgt_node_ut 00:05:14.274 LINK json_write_ut 00:05:14.274 LINK bdev_raid_ut 00:05:14.274 LINK lvol_ut 00:05:14.533 LINK vbdev_zone_block_ut 00:05:14.533 CXX test/cpp_headers/idxd_spec.o 00:05:14.533 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:05:14.533 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:05:14.533 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:05:14.533 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:05:14.533 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:05:14.533 CXX test/cpp_headers/init.o 00:05:14.533 LINK concat_ut 00:05:14.533 CC test/unit/lib/sock/sock.c/sock_ut.o 00:05:14.533 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:05:14.792 LINK dev_ut 00:05:14.792 LINK nvme_ut 00:05:14.792 CXX test/cpp_headers/ioat.o 00:05:14.792 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:05:14.792 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:05:14.792 LINK raid1_ut 00:05:14.792 CXX test/cpp_headers/ioat_spec.o 00:05:15.052 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:05:15.052 LINK nvme_ctrlr_ocssd_cmd_ut 00:05:15.052 CXX test/cpp_headers/iscsi_spec.o 00:05:15.052 LINK lun_ut 00:05:15.052 LINK nvme_ctrlr_cmd_ut 00:05:15.052 CC test/unit/lib/sock/posix.c/posix_ut.o 00:05:15.052 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:05:15.052 LINK sock_ut 00:05:15.052 CC test/unit/lib/thread/thread.c/thread_ut.o 00:05:15.052 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:05:15.052 CXX test/cpp_headers/json.o 00:05:15.313 LINK nvme_ctrlr_ut 00:05:15.313 LINK scsi_ut 00:05:15.313 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:05:15.313 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:05:15.313 LINK tcp_ut 00:05:15.313 CXX test/cpp_headers/jsonrpc.o 00:05:15.313 CC test/unit/lib/util/base64.c/base64_ut.o 00:05:15.313 LINK ctrlr_ut 00:05:15.571 LINK posix_ut 00:05:15.571 CXX test/cpp_headers/keyring.o 00:05:15.571 LINK iobuf_ut 00:05:15.571 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:05:15.571 LINK base64_ut 00:05:15.571 LINK nvme_ns_ut 00:05:15.571 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:05:15.571 LINK bdev_nvme_ut 00:05:15.571 CXX test/cpp_headers/keyring_module.o 00:05:15.571 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:05:15.571 LINK thread_ut 00:05:15.571 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:05:15.571 LINK scsi_bdev_ut 00:05:15.571 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:05:15.571 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:05:15.572 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:05:15.830 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:05:15.830 CXX test/cpp_headers/likely.o 00:05:15.830 LINK pci_event_ut 00:05:15.830 LINK bit_array_ut 00:05:15.830 
CC test/unit/lib/util/crc16.c/crc16_ut.o 00:05:15.830 LINK cpuset_ut 00:05:15.830 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:05:15.830 LINK crc16_ut 00:05:15.830 CXX test/cpp_headers/log.o 00:05:15.830 LINK ctrlr_bdev_ut 00:05:15.830 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:05:15.830 LINK scsi_pr_ut 00:05:15.830 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:05:15.830 LINK nvme_ns_cmd_ut 00:05:15.830 CXX test/cpp_headers/lvol.o 00:05:16.088 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:05:16.088 LINK auth_ut 00:05:16.088 LINK crc32_ieee_ut 00:05:16.088 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:05:16.088 LINK nvmf_ut 00:05:16.088 CXX test/cpp_headers/memory.o 00:05:16.088 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:05:16.088 LINK ctrlr_discovery_ut 00:05:16.088 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:05:16.088 LINK subsystem_ut 00:05:16.088 LINK crc32c_ut 00:05:16.088 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:05:16.088 CXX test/cpp_headers/mmio.o 00:05:16.088 CC test/unit/lib/util/dif.c/dif_ut.o 00:05:16.088 CXX test/cpp_headers/nbd.o 00:05:16.088 LINK crc64_ut 00:05:16.088 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:05:16.088 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:05:16.088 CXX test/cpp_headers/notify.o 00:05:16.088 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:05:16.347 CXX test/cpp_headers/nvme.o 00:05:16.347 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:05:16.347 LINK dif_ut 00:05:16.605 LINK subsystem_ut 00:05:16.605 CXX test/cpp_headers/nvme_intel.o 00:05:16.605 CC test/unit/lib/util/iov.c/iov_ut.o 00:05:16.605 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:05:16.605 LINK nvme_ns_ocssd_cmd_ut 00:05:16.605 CXX test/cpp_headers/nvme_ocssd.o 00:05:16.605 LINK iov_ut 00:05:16.605 LINK rdma_ut 00:05:16.605 LINK nvme_quirks_ut 00:05:16.605 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:16.605 LINK rpc_ut 00:05:16.605 LINK nvme_poll_group_ut 00:05:16.605 CXX test/cpp_headers/nvme_spec.o 00:05:16.605 CC test/unit/lib/util/math.c/math_ut.o 00:05:16.605 LINK nvme_pcie_ut 00:05:16.605 LINK transport_ut 00:05:16.605 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:05:16.863 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:05:16.863 CC test/unit/lib/util/string.c/string_ut.o 00:05:16.863 LINK math_ut 00:05:16.863 CC test/unit/lib/util/xor.c/xor_ut.o 00:05:16.863 CXX test/cpp_headers/nvme_zns.o 00:05:16.864 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:05:16.864 LINK nvme_qpair_ut 00:05:16.864 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:05:16.864 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:05:16.864 CXX test/cpp_headers/nvmf.o 00:05:16.864 LINK string_ut 00:05:16.864 LINK pipe_ut 00:05:16.864 LINK xor_ut 00:05:16.864 CXX test/cpp_headers/nvmf_cmd.o 00:05:16.864 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:05:16.864 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:05:16.864 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:05:17.121 LINK rpc_ut 00:05:17.121 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:05:17.121 LINK keyring_ut 00:05:17.121 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:05:17.121 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:05:17.121 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:17.121 CXX test/cpp_headers/nvmf_spec.o 00:05:17.121 LINK idxd_user_ut 00:05:17.380 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:05:17.380 LINK nvme_opal_ut 00:05:17.380 CXX test/cpp_headers/nvmf_transport.o 00:05:17.380 CXX test/cpp_headers/opal.o 
00:05:17.380 CC test/unit/lib/rdma/common.c/common_ut.o 00:05:17.380 LINK nvme_io_msg_ut 00:05:17.380 LINK nvme_transport_ut 00:05:17.380 CXX test/cpp_headers/opal_spec.o 00:05:17.380 CXX test/cpp_headers/pci_ids.o 00:05:17.380 LINK idxd_ut 00:05:17.638 CXX test/cpp_headers/pipe.o 00:05:17.638 LINK nvme_fabric_ut 00:05:17.638 CXX test/cpp_headers/queue.o 00:05:17.638 CXX test/cpp_headers/reduce.o 00:05:17.638 LINK common_ut 00:05:17.638 CXX test/cpp_headers/rpc.o 00:05:17.638 CXX test/cpp_headers/scheduler.o 00:05:17.638 CXX test/cpp_headers/scsi.o 00:05:17.638 CXX test/cpp_headers/scsi_spec.o 00:05:17.638 LINK nvme_pcie_common_ut 00:05:17.638 CXX test/cpp_headers/sock.o 00:05:17.638 CXX test/cpp_headers/stdinc.o 00:05:17.638 CXX test/cpp_headers/string.o 00:05:17.638 LINK nvme_tcp_ut 00:05:17.638 CXX test/cpp_headers/thread.o 00:05:17.638 CXX test/cpp_headers/trace.o 00:05:17.638 CXX test/cpp_headers/trace_parser.o 00:05:17.638 CXX test/cpp_headers/tree.o 00:05:17.638 CXX test/cpp_headers/ublk.o 00:05:17.638 CXX test/cpp_headers/util.o 00:05:17.638 CXX test/cpp_headers/uuid.o 00:05:17.638 CXX test/cpp_headers/version.o 00:05:17.638 CXX test/cpp_headers/vfio_user_pci.o 00:05:17.896 CXX test/cpp_headers/vfio_user_spec.o 00:05:17.896 CXX test/cpp_headers/vhost.o 00:05:17.896 CXX test/cpp_headers/vmd.o 00:05:17.896 CXX test/cpp_headers/xor.o 00:05:17.896 CXX test/cpp_headers/zipf.o 00:05:17.896 LINK nvme_rdma_ut 00:05:17.896 00:05:17.896 real 1m1.832s 00:05:17.896 user 4m8.217s 00:05:17.896 sys 0m46.792s 00:05:17.896 ************************************ 00:05:17.896 END TEST unittest_build 00:05:17.896 ************************************ 00:05:17.896 21:47:18 unittest_build -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:05:17.896 21:47:18 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:05:17.896 21:47:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:17.896 21:47:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:17.896 21:47:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:17.896 21:47:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.896 21:47:18 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:17.896 21:47:18 -- pm/common@44 -- $ pid=1336 00:05:17.896 21:47:18 -- pm/common@50 -- $ kill -TERM 1336 00:05:18.154 21:47:18 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.154 21:47:18 -- nvmf/common.sh@7 -- # uname -s 00:05:18.154 21:47:18 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:05:18.154 21:47:18 -- nvmf/common.sh@7 -- # return 0 00:05:18.154 21:47:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:18.154 21:47:18 -- spdk/autotest.sh@32 -- # uname -s 00:05:18.155 21:47:18 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:05:18.155 21:47:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:18.155 21:47:18 -- pm/common@17 -- # local monitor 00:05:18.155 21:47:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.155 21:47:18 -- pm/common@25 -- # sleep 1 00:05:18.155 21:47:18 -- pm/common@21 -- # date +%s 00:05:18.155 21:47:18 -- pm/common@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715723238 00:05:18.155 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715723238_collect-vmstat.pm.log 00:05:19.090 21:47:19 -- 
spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:19.090 21:47:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:19.090 21:47:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.090 21:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.090 21:47:19 -- spdk/autotest.sh@59 -- # create_test_list 00:05:19.090 21:47:19 -- common/autotest_common.sh@744 -- # xtrace_disable 00:05:19.090 21:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.349 21:47:19 -- spdk/autotest.sh@61 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:05:19.349 21:47:19 -- spdk/autotest.sh@61 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:05:19.349 21:47:19 -- spdk/autotest.sh@61 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:05:19.349 21:47:19 -- spdk/autotest.sh@62 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:19.349 21:47:19 -- spdk/autotest.sh@63 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:05:19.349 21:47:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:19.349 21:47:19 -- common/autotest_common.sh@1451 -- # uname 00:05:19.349 21:47:19 -- common/autotest_common.sh@1451 -- # '[' FreeBSD = FreeBSD ']' 00:05:19.349 21:47:19 -- common/autotest_common.sh@1452 -- # kldunload contigmem.ko 00:05:19.349 kldunload: can't find file contigmem.ko 00:05:19.349 21:47:19 -- common/autotest_common.sh@1452 -- # true 00:05:19.349 21:47:19 -- common/autotest_common.sh@1453 -- # '[' -n '' ']' 00:05:19.349 21:47:19 -- common/autotest_common.sh@1459 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:05:19.349 21:47:19 -- common/autotest_common.sh@1460 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:05:19.349 21:47:19 -- common/autotest_common.sh@1461 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:05:19.349 21:47:19 -- common/autotest_common.sh@1462 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:05:19.349 21:47:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:19.349 21:47:19 -- common/autotest_common.sh@1471 -- # uname 00:05:19.349 21:47:19 -- common/autotest_common.sh@1471 -- # [[ FreeBSD = FreeBSD ]] 00:05:19.349 21:47:19 -- common/autotest_common.sh@1471 -- # sysctl -n kern.ipc.maxsockbuf 00:05:19.349 21:47:19 -- common/autotest_common.sh@1471 -- # (( 2097152 < 4194304 )) 00:05:19.349 21:47:19 -- common/autotest_common.sh@1472 -- # sysctl kern.ipc.maxsockbuf=4194304 00:05:19.349 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:05:19.349 21:47:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:19.349 21:47:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:05:19.349 21:47:19 -- spdk/autotest.sh@72 -- # hash lcov 00:05:19.349 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:05:19.349 21:47:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:19.349 21:47:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.349 21:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.349 21:47:19 -- spdk/autotest.sh@91 -- # rm -f 00:05:19.349 21:47:19 -- spdk/autotest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.349 kldunload: can't find file contigmem.ko 00:05:19.349 kldunload: can't find file nic_uio.ko 00:05:19.349 21:47:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:19.349 21:47:19 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:19.349 21:47:19 -- 
common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:19.349 21:47:19 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:19.349 21:47:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:19.349 21:47:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.349 21:47:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:19.349 21:47:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:05:19.349 21:47:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:05:19.349 21:47:19 -- scripts/common.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:05:19.349 nvme0ns1 is not a block device 00:05:19.349 21:47:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:05:19.349 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:05:19.349 21:47:19 -- scripts/common.sh@391 -- # pt= 00:05:19.349 21:47:19 -- scripts/common.sh@392 -- # return 1 00:05:19.349 21:47:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:05:19.349 1+0 records in 00:05:19.349 1+0 records out 00:05:19.349 1048576 bytes transferred in 0.006948 secs (150912396 bytes/sec) 00:05:19.349 21:47:19 -- spdk/autotest.sh@118 -- # sync 00:05:19.916 21:47:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:19.916 21:47:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:19.916 21:47:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:20.481 21:47:20 -- spdk/autotest.sh@124 -- # uname -s 00:05:20.481 21:47:20 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:05:20.481 21:47:20 -- spdk/autotest.sh@128 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:20.481 Contigmem (not present) 00:05:20.481 Buffer Size: not set 00:05:20.481 Num Buffers: not set 00:05:20.481 00:05:20.481 00:05:20.481 Type BDF Vendor Device Driver 00:05:20.481 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:05:20.481 21:47:21 -- spdk/autotest.sh@130 -- # uname -s 00:05:20.481 21:47:21 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:05:20.481 21:47:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:20.481 21:47:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.481 21:47:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.481 21:47:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:20.481 21:47:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:20.481 21:47:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.481 21:47:21 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.739 kldunload: can't find file nic_uio.ko 00:05:20.739 hw.nic_uio.bdfs="0:16:0" 00:05:20.739 hw.contigmem.num_buffers="8" 00:05:20.739 hw.contigmem.buffer_size="268435456" 00:05:21.308 21:47:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:21.308 21:47:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.308 21:47:21 -- common/autotest_common.sh@10 -- # set +x 00:05:21.308 21:47:21 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:21.308 21:47:21 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:21.308 21:47:21 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.308 21:47:21 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:21.308 21:47:21 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:21.308 21:47:21 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:21.308 21:47:21 -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:05:21.308 21:47:21 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:21.308 21:47:21 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.308 21:47:21 -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:21.308 21:47:21 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:21.308 21:47:21 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:21.308 21:47:21 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:05:21.308 21:47:21 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:21.308 21:47:21 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:21.308 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:05:21.308 21:47:21 -- common/autotest_common.sh@1576 -- # device= 00:05:21.308 21:47:21 -- common/autotest_common.sh@1576 -- # true 00:05:21.308 21:47:21 -- common/autotest_common.sh@1577 -- # [[ '' == \0\x\0\a\5\4 ]] 00:05:21.308 21:47:21 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:21.308 21:47:21 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:21.308 21:47:21 -- common/autotest_common.sh@1589 -- # return 0 00:05:21.308 21:47:21 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:21.308 21:47:21 -- spdk/autotest.sh@151 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:21.308 21:47:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.308 21:47:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.308 21:47:21 -- common/autotest_common.sh@10 -- # set +x 00:05:21.308 ************************************ 00:05:21.308 START TEST unittest 00:05:21.308 ************************************ 00:05:21.308 21:47:21 unittest -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:21.308 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:21.308 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:05:21.308 + testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:05:21.308 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:21.308 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:05:21.308 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:05:21.308 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:21.308 ++ rpc_py=rpc_cmd 00:05:21.308 ++ set -e 00:05:21.308 ++ shopt -s nullglob 00:05:21.308 ++ shopt -s extglob 00:05:21.308 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:05:21.308 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:21.308 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:21.308 +++ CONFIG_WPDK_DIR= 00:05:21.308 +++ CONFIG_ASAN=n 00:05:21.308 +++ CONFIG_VBDEV_COMPRESS=n 00:05:21.308 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:21.308 +++ CONFIG_USDT=n 00:05:21.309 +++ CONFIG_CUSTOMOCF=n 00:05:21.309 +++ CONFIG_PREFIX=/usr/local 00:05:21.309 +++ CONFIG_RBD=n 00:05:21.309 +++ CONFIG_LIBDIR= 00:05:21.309 +++ CONFIG_IDXD=y 00:05:21.309 +++ CONFIG_NVME_CUSE=n 00:05:21.309 +++ CONFIG_SMA=n 00:05:21.309 +++ CONFIG_VTUNE=n 00:05:21.309 +++ CONFIG_TSAN=n 00:05:21.309 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:21.309 +++ CONFIG_VFIO_USER_DIR= 00:05:21.309 +++ CONFIG_PGO_CAPTURE=n 00:05:21.309 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:05:21.309 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:21.309 +++ CONFIG_LTO=n 00:05:21.309 +++ CONFIG_ISCSI_INITIATOR=n 00:05:21.309 +++ CONFIG_CET=n 00:05:21.309 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:21.309 +++ CONFIG_OCF_PATH= 00:05:21.309 +++ CONFIG_RDMA_SET_TOS=y 00:05:21.309 +++ CONFIG_HAVE_ARC4RANDOM=y 00:05:21.309 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:21.309 +++ CONFIG_UBLK=n 00:05:21.309 +++ CONFIG_ISAL_CRYPTO=y 00:05:21.309 +++ CONFIG_OPENSSL_PATH= 00:05:21.309 +++ CONFIG_OCF=n 00:05:21.309 +++ CONFIG_FUSE=n 00:05:21.309 +++ CONFIG_VTUNE_DIR= 00:05:21.309 +++ CONFIG_FUZZER_LIB= 00:05:21.309 +++ CONFIG_FUZZER=n 00:05:21.309 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:21.309 +++ CONFIG_CRYPTO=n 00:05:21.309 +++ CONFIG_PGO_USE=n 00:05:21.309 +++ CONFIG_VHOST=n 00:05:21.309 +++ CONFIG_DAOS=n 00:05:21.309 +++ CONFIG_DPDK_INC_DIR= 00:05:21.309 +++ CONFIG_DAOS_DIR= 00:05:21.309 +++ CONFIG_UNIT_TESTS=y 00:05:21.309 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:05:21.309 +++ CONFIG_VIRTIO=n 00:05:21.309 +++ CONFIG_DPDK_UADK=n 00:05:21.309 +++ CONFIG_COVERAGE=n 00:05:21.309 +++ CONFIG_RDMA=y 00:05:21.309 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:21.309 +++ CONFIG_URING_PATH= 00:05:21.309 +++ CONFIG_XNVME=n 00:05:21.309 +++ CONFIG_VFIO_USER=n 00:05:21.309 +++ CONFIG_ARCH=native 00:05:21.309 +++ CONFIG_HAVE_EVP_MAC=y 00:05:21.309 +++ CONFIG_URING_ZNS=n 00:05:21.309 +++ CONFIG_WERROR=y 00:05:21.309 +++ CONFIG_HAVE_LIBBSD=n 00:05:21.309 +++ CONFIG_UBSAN=n 00:05:21.309 +++ CONFIG_IPSEC_MB_DIR= 00:05:21.309 +++ CONFIG_GOLANG=n 00:05:21.309 +++ CONFIG_ISAL=y 00:05:21.309 +++ CONFIG_IDXD_KERNEL=n 00:05:21.309 +++ CONFIG_DPDK_LIB_DIR= 00:05:21.309 +++ CONFIG_RDMA_PROV=verbs 00:05:21.309 +++ CONFIG_APPS=y 00:05:21.309 +++ CONFIG_SHARED=n 00:05:21.309 +++ CONFIG_HAVE_KEYUTILS=n 00:05:21.309 +++ CONFIG_FC_PATH= 00:05:21.309 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:21.309 +++ CONFIG_FC=n 00:05:21.309 +++ CONFIG_AVAHI=n 00:05:21.309 +++ CONFIG_FIO_PLUGIN=y 00:05:21.309 +++ CONFIG_RAID5F=n 00:05:21.309 +++ CONFIG_EXAMPLES=y 00:05:21.309 +++ CONFIG_TESTS=y 00:05:21.309 +++ CONFIG_CRYPTO_MLX5=n 00:05:21.309 +++ CONFIG_MAX_LCORES= 00:05:21.309 +++ CONFIG_IPSEC_MB=n 00:05:21.309 +++ CONFIG_PGO_DIR= 00:05:21.309 +++ CONFIG_DEBUG=y 00:05:21.309 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:21.309 +++ CONFIG_CROSS_PREFIX= 00:05:21.309 +++ 
CONFIG_URING=n 00:05:21.309 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:21.309 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:21.309 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common 00:05:21.309 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:05:21.309 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:05:21.309 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:05:21.309 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:05:21.309 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:05:21.309 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:21.309 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:21.309 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:21.309 +++ VHOST_APP=("$_app_dir/vhost") 00:05:21.309 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:21.309 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:21.309 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:21.309 +++ [[ #ifndef SPDK_CONFIG_H 00:05:21.309 #define SPDK_CONFIG_H 00:05:21.309 #define SPDK_CONFIG_APPS 1 00:05:21.309 #define SPDK_CONFIG_ARCH native 00:05:21.309 #undef SPDK_CONFIG_ASAN 00:05:21.309 #undef SPDK_CONFIG_AVAHI 00:05:21.309 #undef SPDK_CONFIG_CET 00:05:21.309 #undef SPDK_CONFIG_COVERAGE 00:05:21.309 #define SPDK_CONFIG_CROSS_PREFIX 00:05:21.309 #undef SPDK_CONFIG_CRYPTO 00:05:21.309 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:21.309 #undef SPDK_CONFIG_CUSTOMOCF 00:05:21.309 #undef SPDK_CONFIG_DAOS 00:05:21.309 #define SPDK_CONFIG_DAOS_DIR 00:05:21.309 #define SPDK_CONFIG_DEBUG 1 00:05:21.309 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:21.309 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:21.309 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:21.309 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:21.309 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:21.309 #undef SPDK_CONFIG_DPDK_UADK 00:05:21.309 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:21.309 #define SPDK_CONFIG_EXAMPLES 1 00:05:21.309 #undef SPDK_CONFIG_FC 00:05:21.309 #define SPDK_CONFIG_FC_PATH 00:05:21.309 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:21.309 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:21.309 #undef SPDK_CONFIG_FUSE 00:05:21.309 #undef SPDK_CONFIG_FUZZER 00:05:21.309 #define SPDK_CONFIG_FUZZER_LIB 00:05:21.309 #undef SPDK_CONFIG_GOLANG 00:05:21.309 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:21.309 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:21.309 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:21.309 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:21.309 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:21.309 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:21.309 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:05:21.309 #define SPDK_CONFIG_IDXD 1 00:05:21.309 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:21.309 #undef SPDK_CONFIG_IPSEC_MB 00:05:21.309 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:21.309 #define SPDK_CONFIG_ISAL 1 00:05:21.309 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:21.309 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:05:21.309 #define SPDK_CONFIG_LIBDIR 00:05:21.309 #undef SPDK_CONFIG_LTO 00:05:21.309 #define SPDK_CONFIG_MAX_LCORES 00:05:21.309 #undef SPDK_CONFIG_NVME_CUSE 00:05:21.309 #undef SPDK_CONFIG_OCF 00:05:21.309 #define SPDK_CONFIG_OCF_PATH 00:05:21.309 #define SPDK_CONFIG_OPENSSL_PATH 00:05:21.309 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:21.309 #define SPDK_CONFIG_PGO_DIR 00:05:21.309 #undef SPDK_CONFIG_PGO_USE 00:05:21.309 #define SPDK_CONFIG_PREFIX /usr/local 00:05:21.309 #undef SPDK_CONFIG_RAID5F 
00:05:21.309 #undef SPDK_CONFIG_RBD 00:05:21.309 #define SPDK_CONFIG_RDMA 1 00:05:21.309 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:21.309 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:21.309 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:05:21.309 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:21.309 #undef SPDK_CONFIG_SHARED 00:05:21.309 #undef SPDK_CONFIG_SMA 00:05:21.309 #define SPDK_CONFIG_TESTS 1 00:05:21.309 #undef SPDK_CONFIG_TSAN 00:05:21.309 #undef SPDK_CONFIG_UBLK 00:05:21.309 #undef SPDK_CONFIG_UBSAN 00:05:21.309 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:21.309 #undef SPDK_CONFIG_URING 00:05:21.309 #define SPDK_CONFIG_URING_PATH 00:05:21.309 #undef SPDK_CONFIG_URING_ZNS 00:05:21.309 #undef SPDK_CONFIG_USDT 00:05:21.309 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:21.309 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:21.309 #undef SPDK_CONFIG_VFIO_USER 00:05:21.309 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:21.309 #undef SPDK_CONFIG_VHOST 00:05:21.309 #undef SPDK_CONFIG_VIRTIO 00:05:21.309 #undef SPDK_CONFIG_VTUNE 00:05:21.309 #define SPDK_CONFIG_VTUNE_DIR 00:05:21.309 #define SPDK_CONFIG_WERROR 1 00:05:21.309 #define SPDK_CONFIG_WPDK_DIR 00:05:21.309 #undef SPDK_CONFIG_XNVME 00:05:21.309 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:21.309 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:21.309 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.309 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:21.309 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.309 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.309 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:05:21.309 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:05:21.309 ++++ export PATH 00:05:21.309 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:05:21.309 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:21.309 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:21.309 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:21.309 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:21.309 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:21.309 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:05:21.309 +++ TEST_TAG=N/A 00:05:21.309 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:21.309 +++ PM_OUTPUTDIR=/usr/home/vagrant/spdk_repo/spdk/../output/power 00:05:21.309 ++++ uname -s 00:05:21.309 +++ PM_OS=FreeBSD 00:05:21.309 +++ MONITOR_RESOURCES_SUDO=() 00:05:21.309 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:21.309 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:21.309 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:21.310 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:21.310 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:21.310 +++ SUDO[0]= 00:05:21.310 +++ SUDO[1]='sudo -E' 00:05:21.310 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:21.310 +++ [[ FreeBSD == FreeBSD ]] 00:05:21.310 +++ MONITOR_RESOURCES=(collect-vmstat) 00:05:21.310 +++ [[ ! 
-d /usr/home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:21.310 ++ : 0 00:05:21.310 ++ export RUN_NIGHTLY 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_RUN_VALGRIND 00:05:21.310 ++ : 1 00:05:21.310 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:21.310 ++ : 1 00:05:21.310 ++ export SPDK_TEST_UNITTEST 00:05:21.310 ++ : 00:05:21.310 ++ export SPDK_TEST_AUTOBUILD 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_RELEASE_BUILD 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_ISAL 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_ISCSI 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:21.310 ++ : 1 00:05:21.310 ++ export SPDK_TEST_NVME 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_NVME_PMR 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_NVME_BP 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_NVME_CLI 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_NVME_CUSE 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_NVME_FDP 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_NVMF 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_VFIOUSER 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_FUZZER 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_FUZZER_SHORT 00:05:21.310 ++ : rdma 00:05:21.310 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_RBD 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_VHOST 00:05:21.310 ++ : 1 00:05:21.310 ++ export SPDK_TEST_BLOCKDEV 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_IOAT 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_BLOBFS 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_VHOST_INIT 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_LVOL 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_RUN_ASAN 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_RUN_UBSAN 00:05:21.310 ++ : 00:05:21.310 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_RUN_NON_ROOT 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_CRYPTO 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_FTL 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_OCF 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_VMD 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_OPAL 00:05:21.310 ++ : 00:05:21.310 ++ export SPDK_TEST_NATIVE_DPDK 00:05:21.310 ++ : true 00:05:21.310 ++ export SPDK_AUTOTEST_X 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_RAID5 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_URING 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_USDT 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_USE_IGB_UIO 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_SCHEDULER 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_SCANBUILD 00:05:21.310 ++ : 00:05:21.310 ++ export SPDK_TEST_NVMF_NICS 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_SMA 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_DAOS 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_XNVME 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_ACCEL_DSA 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_ACCEL_IAA 00:05:21.310 ++ : 00:05:21.310 ++ export SPDK_TEST_FUZZER_TARGET 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_TEST_NVMF_MDNS 00:05:21.310 ++ : 0 00:05:21.310 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:21.310 ++ export 
SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:05:21.310 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:05:21.310 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:21.310 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:21.310 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:21.310 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:21.310 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:21.310 ++ LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:21.310 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:21.310 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:21.310 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:05:21.310 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:05:21.310 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:21.310 ++ PYTHONDONTWRITEBYTECODE=1 00:05:21.310 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:21.310 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:21.310 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:21.310 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:21.310 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:21.310 ++ rm -rf /var/tmp/asan_suppression_file 00:05:21.310 ++ cat 00:05:21.310 ++ echo leak:libfuse3.so 00:05:21.310 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:21.310 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:21.310 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:21.310 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:21.310 ++ '[' -z /var/spdk/dependencies ']' 00:05:21.310 ++ export DEPENDENCY_DIR 00:05:21.310 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:05:21.310 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:05:21.310 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:05:21.310 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:05:21.310 ++ export QEMU_BIN= 00:05:21.310 ++ QEMU_BIN= 00:05:21.310 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:21.310 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:21.310 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:21.310 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:21.310 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:21.310 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:21.310 ++ '[' 0 -eq 0 ']' 00:05:21.310 ++ 
export valgrind= 00:05:21.310 ++ valgrind= 00:05:21.310 +++ uname -s 00:05:21.310 ++ '[' FreeBSD = Linux ']' 00:05:21.310 +++ uname -s 00:05:21.310 ++ '[' FreeBSD = FreeBSD ']' 00:05:21.310 ++ MAKE=gmake 00:05:21.310 +++ sysctl -a 00:05:21.310 +++ grep -E -i hw.ncpu 00:05:21.310 +++ awk '{print $2}' 00:05:21.570 ++ MAKEFLAGS=-j10 00:05:21.570 ++ HUGEMEM=2048 00:05:21.570 ++ export HUGEMEM=2048 00:05:21.570 ++ HUGEMEM=2048 00:05:21.570 ++ NO_HUGE=() 00:05:21.570 ++ TEST_MODE= 00:05:21.570 ++ [[ -z '' ]] 00:05:21.570 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:21.571 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:21.571 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:21.571 ++ exec 00:05:21.571 ++ set_test_storage 2147483648 00:05:21.571 ++ [[ -v testdir ]] 00:05:21.571 ++ local requested_size=2147483648 00:05:21.571 ++ local mount target_dir 00:05:21.571 ++ local -A mounts fss sizes avails uses 00:05:21.571 ++ local source fs size avail mount use 00:05:21.571 ++ local storage_fallback storage_candidates 00:05:21.571 +++ mktemp -udt spdk.XXXXXX 00:05:21.571 ++ storage_fallback=/tmp/spdk.XXXXXX.uzQGr1gb 00:05:21.571 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:21.571 ++ [[ -n '' ]] 00:05:21.571 ++ [[ -n '' ]] 00:05:21.571 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.uzQGr1gb/tests/unit /tmp/spdk.XXXXXX.uzQGr1gb 00:05:21.571 ++ requested_size=2214592512 00:05:21.571 ++ read -r source fs size use avail _ mount 00:05:21.571 +++ df -T 00:05:21.571 +++ grep -v Filesystem 00:05:21.571 ++ mounts["$mount"]=/dev/gptid/bd0c1ea5-f644-11ee-93e1-001e672be6d6 00:05:21.571 ++ fss["$mount"]=ufs 00:05:21.571 ++ avails["$mount"]=17239019520 00:05:21.571 ++ sizes["$mount"]=31182712832 00:05:21.571 ++ uses["$mount"]=11449077760 00:05:21.571 ++ read -r source fs size use avail _ mount 00:05:21.571 ++ mounts["$mount"]=devfs 00:05:21.571 ++ fss["$mount"]=devfs 00:05:21.571 ++ avails["$mount"]=0 00:05:21.571 ++ sizes["$mount"]=1024 00:05:21.571 ++ uses["$mount"]=1024 00:05:21.571 ++ read -r source fs size use avail _ mount 00:05:21.571 ++ mounts["$mount"]=tmpfs 00:05:21.571 ++ fss["$mount"]=tmpfs 00:05:21.571 ++ avails["$mount"]=2147442688 00:05:21.571 ++ sizes["$mount"]=2147483648 00:05:21.571 ++ uses["$mount"]=40960 00:05:21.571 ++ read -r source fs size use avail _ mount 00:05:21.571 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output 00:05:21.571 ++ fss["$mount"]=fusefs.sshfs 00:05:21.571 ++ avails["$mount"]=93713969152 00:05:21.571 ++ sizes["$mount"]=105088212992 00:05:21.571 ++ uses["$mount"]=5988810752 00:05:21.571 ++ read -r source fs size use avail _ mount 00:05:21.571 ++ printf '* Looking for test storage...\n' 00:05:21.571 * Looking for test storage... 
00:05:21.571 ++ local target_space new_size 00:05:21.571 ++ for target_dir in "${storage_candidates[@]}" 00:05:21.571 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:05:21.571 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:21.571 ++ mount=/ 00:05:21.571 ++ target_space=17239019520 00:05:21.571 ++ (( target_space == 0 || target_space < requested_size )) 00:05:21.571 ++ (( target_space >= requested_size )) 00:05:21.571 ++ [[ ufs == tmpfs ]] 00:05:21.571 ++ [[ ufs == ramfs ]] 00:05:21.571 ++ [[ / == / ]] 00:05:21.571 ++ new_size=13663670272 00:05:21.571 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:21.571 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:05:21.571 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:05:21.571 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:05:21.571 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:05:21.571 ++ return 0 00:05:21.571 ++ set -o errtrace 00:05:21.571 ++ shopt -s extdebug 00:05:21.571 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:21.571 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@1683 -- # true 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@1685 -- # xtrace_fd 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@29 -- # exec 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@178 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=clang 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@179 -- # hash lcov 00:05:21.571 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 179: hash: lcov: not found 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@182 -- # cov_avail=no 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@184 -- # '[' no = yes ']' 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@206 -- # uname -m 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@206 -- # '[' amd64 = aarch64 ']' 00:05:21.571 21:47:21 unittest -- unit/unittest.sh@210 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.571 21:47:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:21.571 ************************************ 00:05:21.571 START TEST unittest_pci_event 00:05:21.571 ************************************ 00:05:21.571 21:47:21 unittest.unittest_pci_event -- common/autotest_common.sh@1121 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:21.571 00:05:21.571 00:05:21.571 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.571 http://cunit.sourceforge.net/ 00:05:21.571 00:05:21.571 00:05:21.571 Suite: pci_event 00:05:21.571 Test: test_pci_parse_event ...passed 00:05:21.571 00:05:21.571 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.571 suites 1 1 n/a 0 0 00:05:21.571 tests 1 1 1 0 0 00:05:21.571 asserts 1 1 1 0 n/a 00:05:21.571 00:05:21.571 Elapsed time = 0.000 seconds 00:05:21.571 00:05:21.571 real 0m0.024s 00:05:21.571 user 0m0.001s 00:05:21.571 sys 0m0.009s 00:05:21.571 21:47:21 unittest.unittest_pci_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.571 ************************************ 00:05:21.571 END TEST unittest_pci_event 00:05:21.571 ************************************ 00:05:21.571 21:47:21 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:05:21.571 21:47:22 unittest -- unit/unittest.sh@211 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:21.571 21:47:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.571 21:47:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.571 21:47:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:21.571 ************************************ 00:05:21.571 START TEST unittest_include 00:05:21.571 ************************************ 00:05:21.571 21:47:22 unittest.unittest_include -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:21.571 00:05:21.571 00:05:21.571 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.571 http://cunit.sourceforge.net/ 00:05:21.571 00:05:21.571 00:05:21.571 Suite: histogram 00:05:21.571 Test: histogram_test ...passed 00:05:21.571 Test: histogram_merge ...passed 00:05:21.571 00:05:21.571 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.571 suites 1 1 n/a 0 0 00:05:21.571 tests 2 2 2 0 0 00:05:21.571 asserts 50 50 50 0 n/a 00:05:21.571 00:05:21.571 Elapsed time = 0.000 seconds 00:05:21.571 00:05:21.571 real 0m0.008s 00:05:21.571 user 0m0.007s 00:05:21.571 sys 0m0.000s 00:05:21.571 21:47:22 unittest.unittest_include -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.571 21:47:22 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:05:21.571 ************************************ 00:05:21.571 END TEST unittest_include 00:05:21.571 ************************************ 00:05:21.571 21:47:22 unittest -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:21.571 21:47:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.571 21:47:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.571 21:47:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:21.571 ************************************ 00:05:21.571 START TEST unittest_bdev 00:05:21.571 ************************************ 00:05:21.571 21:47:22 unittest.unittest_bdev -- common/autotest_common.sh@1121 -- # unittest_bdev 00:05:21.571 21:47:22 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:21.571 00:05:21.571 00:05:21.571 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.571 http://cunit.sourceforge.net/ 00:05:21.571 00:05:21.571 00:05:21.571 Suite: 
bdev 00:05:21.571 Test: bytes_to_blocks_test ...passed 00:05:21.571 Test: num_blocks_test ...passed 00:05:21.571 Test: io_valid_test ...passed 00:05:21.571 Test: open_write_test ...[2024-05-14 21:47:22.096196] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:21.571 [2024-05-14 21:47:22.096547] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:21.571 [2024-05-14 21:47:22.096586] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:21.571 passed 00:05:21.571 Test: claim_test ...passed 00:05:21.571 Test: alias_add_del_test ...[2024-05-14 21:47:22.099794] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:21.571 [2024-05-14 21:47:22.099828] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4605:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:21.571 [2024-05-14 21:47:22.099840] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:21.571 passed 00:05:21.571 Test: get_device_stat_test ...passed 00:05:21.571 Test: bdev_io_types_test ...passed 00:05:21.571 Test: bdev_io_wait_test ...passed 00:05:21.571 Test: bdev_io_spans_split_test ...passed 00:05:21.571 Test: bdev_io_boundary_split_test ...passed 00:05:21.572 Test: bdev_io_max_size_and_segment_split_test ...[2024-05-14 21:47:22.106611] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:21.572 passed 00:05:21.572 Test: bdev_io_mix_split_test ...passed 00:05:21.572 Test: bdev_io_split_with_io_wait ...passed 00:05:21.572 Test: bdev_io_write_unit_split_test ...[2024-05-14 21:47:22.111457] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:21.572 [2024-05-14 21:47:22.111544] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:21.572 [2024-05-14 21:47:22.111556] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:21.572 [2024-05-14 21:47:22.111568] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:21.572 passed 00:05:21.572 Test: bdev_io_alignment_with_boundary ...passed 00:05:21.572 Test: bdev_io_alignment ...passed 00:05:21.572 Test: bdev_histograms ...passed 00:05:21.572 Test: bdev_write_zeroes ...passed 00:05:21.572 Test: bdev_compare_and_write ...passed 00:05:21.572 Test: bdev_compare ...passed 00:05:21.572 Test: bdev_compare_emulated ...passed 00:05:21.572 Test: bdev_zcopy_write ...passed 00:05:21.572 Test: bdev_zcopy_read ...passed 00:05:21.572 Test: bdev_open_while_hotremove ...passed 00:05:21.572 Test: bdev_close_while_hotremove ...passed 00:05:21.572 Test: bdev_open_ext_test ...[2024-05-14 21:47:22.129524] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:21.572 passed 00:05:21.572 Test: bdev_open_ext_unregister ...passed 00:05:21.572 Test: bdev_set_io_timeout ...[2024-05-14 21:47:22.129644] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:21.572 passed 00:05:21.572 Test: bdev_set_qd_sampling ...passed 00:05:21.572 Test: lba_range_overlap ...passed 00:05:21.572 Test: lock_lba_range_check_ranges ...passed 00:05:21.572 Test: lock_lba_range_with_io_outstanding ...passed 00:05:21.572 Test: lock_lba_range_overlapped ...passed 00:05:21.572 Test: bdev_quiesce ...[2024-05-14 21:47:22.138202] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10059:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:21.572 passed 00:05:21.572 Test: bdev_io_abort ...passed 00:05:21.572 Test: bdev_unmap ...passed 00:05:21.572 Test: bdev_write_zeroes_split_test ...passed 00:05:21.572 Test: bdev_set_options_test ...passed 00:05:21.572 Test: bdev_get_memory_domains ...passed 00:05:21.572 Test: bdev_io_ext ...[2024-05-14 21:47:22.143582] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:21.572 passed 00:05:21.572 Test: bdev_io_ext_no_opts ...passed 00:05:21.572 Test: bdev_io_ext_invalid_opts ...passed 00:05:21.572 Test: bdev_io_ext_split ...passed 00:05:21.572 Test: bdev_io_ext_bounce_buffer ...passed 00:05:21.572 Test: bdev_register_uuid_alias ...[2024-05-14 21:47:22.151958] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 8b518092-123b-11ef-8c90-4585f0cfab08 already exists 00:05:21.572 [2024-05-14 21:47:22.151992] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:8b518092-123b-11ef-8c90-4585f0cfab08 alias for bdev bdev0 00:05:21.572 passed 00:05:21.572 Test: bdev_unregister_by_name ...passed 00:05:21.572 Test: for_each_bdev_test ...[2024-05-14 21:47:22.152435] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7926:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:21.572 [2024-05-14 21:47:22.152483] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:05:21.572 passed 00:05:21.572 Test: bdev_seek_test ...passed 00:05:21.572 Test: bdev_copy ...passed 00:05:21.572 Test: bdev_copy_split_test ...passed 00:05:21.572 Test: examine_locks ...passed 00:05:21.572 Test: claim_v2_rwo ...passed 00:05:21.572 Test: claim_v2_rom ...[2024-05-14 21:47:22.157388] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157428] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157445] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157455] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157463] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157474] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8656:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:21.572 [2024-05-14 21:47:22.157504] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157639] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157649] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157673] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157693] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:21.572 passed 00:05:21.572 Test: claim_v2_rwm ...passed 00:05:21.572 Test: claim_v2_existing_writer ...[2024-05-14 21:47:22.157703] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:21.572 [2024-05-14 21:47:22.157729] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8729:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:21.572 [2024-05-14 21:47:22.157739] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157748] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157756] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:21.572 
[2024-05-14 21:47:22.157764] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157773] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8748:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157796] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8729:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:21.572 [2024-05-14 21:47:22.157823] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:21.572 [2024-05-14 21:47:22.157832] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:21.572 passed 00:05:21.572 Test: claim_v2_existing_v1 ...passed 00:05:21.572 Test: claim_v1_existing_v2 ...passed 00:05:21.572 Test: examine_claimed ...passed 00:05:21.572 00:05:21.572 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.572 suites 1 1 n/a 0 0 00:05:21.572 tests 59 59 59 0 0 00:05:21.572 asserts 4599 4599 4599 0 n/a 00:05:21.572 00:05:21.572 Elapsed time = 0.070 seconds 00:05:21.572 [2024-05-14 21:47:22.157853] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157862] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157870] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157890] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157899] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157909] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:21.572 [2024-05-14 21:47:22.157947] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:21.831 21:47:22 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:21.831 00:05:21.831 00:05:21.831 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.831 http://cunit.sourceforge.net/ 00:05:21.831 00:05:21.831 00:05:21.831 Suite: nvme 00:05:21.831 Test: test_create_ctrlr ...passed 00:05:21.831 Test: test_reset_ctrlr ...[2024-05-14 21:47:22.167716] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:21.831 passed 00:05:21.831 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:21.831 Test: test_failover_ctrlr ...passed 00:05:21.831 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:05:21.831 Test: test_pending_reset ...[2024-05-14 21:47:22.168160] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.168200] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.168236] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.168454] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.168504] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 passed 00:05:21.831 Test: test_attach_ctrlr ...passed 00:05:21.831 Test: test_aer_cb ...passed 00:05:21.831 Test: test_submit_nvme_cmd ...[2024-05-14 21:47:22.168612] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:21.831 passed 00:05:21.831 Test: test_add_remove_trid ...passed 00:05:21.831 Test: test_abort ...passed 00:05:21.831 Test: test_get_io_qpair ...passed 00:05:21.831 Test: test_bdev_unregister ...passed 00:05:21.831 Test: test_compare_ns ...passed 00:05:21.831 Test: test_init_ana_log_page ...[2024-05-14 21:47:22.168956] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7436:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:21.831 passed 00:05:21.831 Test: test_get_memory_domains ...passed 00:05:21.831 Test: test_reconnect_qpair ...passed[2024-05-14 21:47:22.169283] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 00:05:21.831 Test: test_create_bdev_ctrlr ...passed 00:05:21.831 Test: test_add_multi_ns_to_bdev ...[2024-05-14 21:47:22.169353] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5362:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:21.831 [2024-05-14 21:47:22.169516] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4553:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:21.831 passed 00:05:21.831 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:21.831 Test: test_admin_path ...passed 00:05:21.831 Test: test_reset_bdev_ctrlr ...passed 00:05:21.831 Test: test_find_io_path ...passed 00:05:21.831 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:21.831 Test: test_retry_io_for_io_path_error ...passed 00:05:21.831 Test: test_retry_io_count ...passed 00:05:21.831 Test: test_concurrent_read_ana_log_page ...passed 00:05:21.831 Test: test_retry_io_for_ana_error ...passed 00:05:21.831 Test: test_check_io_error_resiliency_params ...[2024-05-14 21:47:22.170323] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6056:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:05:21.831 [2024-05-14 21:47:22.170349] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6060:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:21.831 [2024-05-14 21:47:22.170365] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6069:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:21.831 [2024-05-14 21:47:22.170379] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6072:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:21.831 [2024-05-14 21:47:22.170393] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:21.831 [2024-05-14 21:47:22.170430] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:21.831 passed 00:05:21.831 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:21.831 Test: test_reconnect_ctrlr ...[2024-05-14 21:47:22.170447] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6064:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:21.831 [2024-05-14 21:47:22.170461] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6079:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:21.831 [2024-05-14 21:47:22.170475] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:21.831 passed 00:05:21.831 Test: test_retry_failover_ctrlr ...passed 00:05:21.831 Test: test_fail_path ...[2024-05-14 21:47:22.170588] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.170617] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.170671] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.170699] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.170726] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.170787] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.831 [2024-05-14 21:47:22.170873] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:21.831 passed 00:05:21.831 Test: test_nvme_ns_cmp ...passed 00:05:21.831 Test: test_ana_transition ...passed 00:05:21.831 Test: test_set_preferred_path ...[2024-05-14 21:47:22.170903] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.832 [2024-05-14 21:47:22.170930] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.832 [2024-05-14 21:47:22.170954] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.832 [2024-05-14 21:47:22.170979] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.832 passed 00:05:21.832 Test: test_find_next_io_path ...passed 00:05:21.832 Test: test_find_io_path_min_qd ...passed 00:05:21.832 Test: test_disable_auto_failback ...passed 00:05:21.832 Test: test_set_multipath_policy ...passed 00:05:21.832 Test: test_uuid_generation ...[2024-05-14 21:47:22.171204] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.832 passed 00:05:21.832 Test: test_retry_io_to_same_path ...passed 00:05:21.832 Test: test_race_between_reset_and_disconnected ...passed 00:05:21.832 Test: test_ctrlr_op_rpc ...passed 00:05:21.832 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:21.832 Test: test_disable_enable_ctrlr ...passed 00:05:21.832 Test: test_delete_ctrlr_done ...passed 00:05:21.832 Test: test_ns_remove_during_reset ...passed 00:05:21.832 00:05:21.832 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.832 suites 1 1 n/a 0 0 00:05:21.832 tests 48 48 48 0 0 00:05:21.832 asserts 3565 3565 3565 0 n/a 00:05:21.832 00:05:21.832 Elapsed time = 0.008 seconds 00:05:21.832 [2024-05-14 21:47:22.200497] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.832 [2024-05-14 21:47:22.200538] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:21.832 21:47:22 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:21.832 00:05:21.832 00:05:21.832 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.832 http://cunit.sourceforge.net/ 00:05:21.832 00:05:21.832 Test Options 00:05:21.832 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:05:21.832 00:05:21.832 Suite: raid 00:05:21.832 Test: test_create_raid ...passed 00:05:21.832 Test: test_create_raid_superblock ...passed 00:05:21.832 Test: test_delete_raid ...passed 00:05:21.832 Test: test_create_raid_invalid_args ...[2024-05-14 21:47:22.209554] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:21.832 [2024-05-14 21:47:22.209757] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:21.832 [2024-05-14 21:47:22.209871] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:21.832 [2024-05-14 21:47:22.209912] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:21.832 [2024-05-14 21:47:22.209927] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:21.832 [2024-05-14 21:47:22.210079] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:21.832 [2024-05-14 21:47:22.210094] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:21.832 passed 00:05:21.832 Test: test_delete_raid_invalid_args ...passed 00:05:21.832 Test: test_io_channel ...passed 00:05:21.832 Test: test_reset_io ...passed 00:05:21.832 Test: test_write_io ...passed 00:05:21.832 Test: test_read_io ...passed 00:05:22.399 Test: test_unmap_io ...passed 00:05:22.399 Test: test_io_failure ...passed 00:05:22.399 Test: test_multi_raid_no_io ...passed 00:05:22.399 Test: test_multi_raid_with_io ...[2024-05-14 21:47:22.953094] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 961:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:22.399 passed 00:05:22.399 Test: test_io_type_supported ...passed 00:05:22.399 Test: test_raid_json_dump_info ...passed 00:05:22.399 Test: test_context_size ...passed 00:05:22.399 Test: test_raid_level_conversions ...passed 00:05:22.399 Test: test_raid_io_split ...passed 00:05:22.399 Test: test_raid_process ...passedTest Options 00:05:22.399 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:05:22.399 00:05:22.399 Suite: raid_dif 00:05:22.399 Test: test_create_raid ...passed 00:05:22.399 Test: test_create_raid_superblock ...passed 00:05:22.399 Test: test_delete_raid ...passed 00:05:22.399 Test: test_create_raid_invalid_args ...[2024-05-14 21:47:22.954388] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:22.399 [2024-05-14 21:47:22.954428] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: 
Invalid strip size 1231 00:05:22.399 [2024-05-14 21:47:22.954483] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:22.399 [2024-05-14 21:47:22.954501] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:22.399 [2024-05-14 21:47:22.954518] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:22.399 passed 00:05:22.399 Test: test_delete_raid_invalid_args ...passed 00:05:22.399 Test: test_io_channel ...passed 00:05:22.399 Test: test_reset_io ...passed 00:05:22.399 Test: test_write_io ...[2024-05-14 21:47:22.954619] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:22.399 [2024-05-14 21:47:22.954627] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:22.399 passed 00:05:22.399 Test: test_read_io ...passed 00:05:23.336 Test: test_unmap_io ...passed 00:05:23.336 Test: test_io_failure ...[2024-05-14 21:47:23.604613] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 961:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:23.336 passed 00:05:23.336 Test: test_multi_raid_no_io ...passed 00:05:23.336 Test: test_multi_raid_with_io ...passed 00:05:23.336 Test: test_io_type_supported ...passed 00:05:23.336 Test: test_raid_json_dump_info ...passed 00:05:23.336 Test: test_context_size ...passed 00:05:23.336 Test: test_raid_level_conversions ...passed 00:05:23.336 Test: test_raid_io_split ...passed 00:05:23.336 Test: test_raid_process ...passed 00:05:23.336 00:05:23.336 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.336 suites 2 2 n/a 0 0 00:05:23.336 tests 38 38 38 0 0 00:05:23.336 asserts 355741 355741 355741 0 n/a 00:05:23.336 00:05:23.336 Elapsed time = 1.398 seconds 00:05:23.336 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:23.336 00:05:23.336 00:05:23.336 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.336 http://cunit.sourceforge.net/ 00:05:23.336 00:05:23.336 00:05:23.336 Suite: raid_sb 00:05:23.336 Test: test_raid_bdev_write_superblock ...passed 00:05:23.336 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:23.336 Test: test_raid_bdev_parse_superblock ...[2024-05-14 21:47:23.616928] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:23.336 passed 00:05:23.336 Suite: raid_sb_md 00:05:23.336 Test: test_raid_bdev_write_superblock ...passed 00:05:23.336 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:23.336 Test: test_raid_bdev_parse_superblock ...passed 00:05:23.336 Suite: raid_sb_md_interleaved 00:05:23.336 Test: test_raid_bdev_write_superblock ...[2024-05-14 21:47:23.617955] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:23.336 passed 00:05:23.336 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:23.336 Test: 
test_raid_bdev_parse_superblock ...passed 00:05:23.336 00:05:23.336 [2024-05-14 21:47:23.618429] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:23.336 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.336 suites 3 3 n/a 0 0 00:05:23.336 tests 9 9 9 0 0 00:05:23.336 asserts 139 139 139 0 n/a 00:05:23.336 00:05:23.336 Elapsed time = 0.000 seconds 00:05:23.336 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:23.336 00:05:23.336 00:05:23.336 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.336 http://cunit.sourceforge.net/ 00:05:23.336 00:05:23.336 00:05:23.336 Suite: concat 00:05:23.336 Test: test_concat_start ...passed 00:05:23.336 Test: test_concat_rw ...passed 00:05:23.336 Test: test_concat_null_payload ...passed 00:05:23.336 00:05:23.336 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.336 suites 1 1 n/a 0 0 00:05:23.336 tests 3 3 3 0 0 00:05:23.336 asserts 8460 8460 8460 0 n/a 00:05:23.336 00:05:23.336 Elapsed time = 0.000 seconds 00:05:23.336 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:23.336 00:05:23.336 00:05:23.336 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.336 http://cunit.sourceforge.net/ 00:05:23.336 00:05:23.336 00:05:23.336 Suite: raid1 00:05:23.336 Test: test_raid1_start ...passed 00:05:23.336 Test: test_raid1_read_balancing ...passed 00:05:23.336 00:05:23.336 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.336 suites 1 1 n/a 0 0 00:05:23.336 tests 2 2 2 0 0 00:05:23.336 asserts 2880 2880 2880 0 n/a 00:05:23.336 00:05:23.336 Elapsed time = 0.000 seconds 00:05:23.336 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:23.336 00:05:23.336 00:05:23.336 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.336 http://cunit.sourceforge.net/ 00:05:23.336 00:05:23.336 00:05:23.337 Suite: zone 00:05:23.337 Test: test_zone_get_operation ...passed 00:05:23.337 Test: test_bdev_zone_get_info ...passed 00:05:23.337 Test: test_bdev_zone_management ...passed 00:05:23.337 Test: test_bdev_zone_append ...passed 00:05:23.337 Test: test_bdev_zone_append_with_md ...passed 00:05:23.337 Test: test_bdev_zone_appendv ...passed 00:05:23.337 Test: test_bdev_zone_appendv_with_md ...passed 00:05:23.337 Test: test_bdev_io_get_append_location ...passed 00:05:23.337 00:05:23.337 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.337 suites 1 1 n/a 0 0 00:05:23.337 tests 8 8 8 0 0 00:05:23.337 asserts 94 94 94 0 n/a 00:05:23.337 00:05:23.337 Elapsed time = 0.000 seconds 00:05:23.337 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:23.337 00:05:23.337 00:05:23.337 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.337 http://cunit.sourceforge.net/ 00:05:23.337 00:05:23.337 00:05:23.337 Suite: gpt_parse 00:05:23.337 Test: test_parse_mbr_and_primary ...[2024-05-14 21:47:23.642744] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:23.337 [2024-05-14 21:47:23.642974] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: 
*ERROR*: Gpt and the related buffer should not be NULL 00:05:23.337 [2024-05-14 21:47:23.643004] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:23.337 [2024-05-14 21:47:23.643016] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:23.337 [2024-05-14 21:47:23.643028] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:23.337 [2024-05-14 21:47:23.643040] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:23.337 passed 00:05:23.337 Test: test_parse_secondary ...passed[2024-05-14 21:47:23.643155] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:23.337 [2024-05-14 21:47:23.643177] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:23.337 [2024-05-14 21:47:23.643190] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:23.337 [2024-05-14 21:47:23.643201] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:23.337 00:05:23.337 Test: test_check_mbr ...[2024-05-14 21:47:23.643311] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:23.337 passed 00:05:23.337 Test: test_read_header ...passed 00:05:23.337 Test: test_read_partitions ...[2024-05-14 21:47:23.643325] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:23.337 [2024-05-14 21:47:23.643343] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:23.337 [2024-05-14 21:47:23.643356] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:23.337 [2024-05-14 21:47:23.643368] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:23.337 [2024-05-14 21:47:23.643382] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:23.337 [2024-05-14 21:47:23.643394] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:23.337 [2024-05-14 21:47:23.643406] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:23.337 [2024-05-14 21:47:23.643424] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:23.337 [2024-05-14 21:47:23.643437] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:23.337 [2024-05-14 21:47:23.643449] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:23.337 [2024-05-14 21:47:23.643460] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 
105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:23.337 [2024-05-14 21:47:23.643521] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:23.337 passed 00:05:23.337 00:05:23.337 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.337 suites 1 1 n/a 0 0 00:05:23.337 tests 5 5 5 0 0 00:05:23.337 asserts 33 33 33 0 n/a 00:05:23.337 00:05:23.337 Elapsed time = 0.000 seconds 00:05:23.337 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:23.337 00:05:23.337 00:05:23.337 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.337 http://cunit.sourceforge.net/ 00:05:23.337 00:05:23.337 00:05:23.337 Suite: bdev_part 00:05:23.337 Test: part_test ...[2024-05-14 21:47:23.652305] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:23.337 passed 00:05:23.337 Test: part_free_test ...passed 00:05:23.337 Test: part_get_io_channel_test ...passed 00:05:23.337 Test: part_construct_ext ...passed 00:05:23.337 00:05:23.337 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.337 suites 1 1 n/a 0 0 00:05:23.337 tests 4 4 4 0 0 00:05:23.337 asserts 48 48 48 0 n/a 00:05:23.337 00:05:23.337 Elapsed time = 0.008 seconds 00:05:23.337 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:23.337 00:05:23.337 00:05:23.337 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.337 http://cunit.sourceforge.net/ 00:05:23.337 00:05:23.337 00:05:23.337 Suite: scsi_nvme_suite 00:05:23.337 Test: scsi_nvme_translate_test ...passed 00:05:23.337 00:05:23.337 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.337 suites 1 1 n/a 0 0 00:05:23.337 tests 1 1 1 0 0 00:05:23.337 asserts 104 104 104 0 n/a 00:05:23.337 00:05:23.337 Elapsed time = 0.000 seconds 00:05:23.337 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:23.337 00:05:23.337 00:05:23.337 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.337 http://cunit.sourceforge.net/ 00:05:23.337 00:05:23.337 00:05:23.337 Suite: lvol 00:05:23.337 Test: ut_lvs_init ...[2024-05-14 21:47:23.667575] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:23.337 [2024-05-14 21:47:23.667812] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:23.337 passed 00:05:23.337 Test: ut_lvol_init ...passed 00:05:23.337 Test: ut_lvol_snapshot ...passed 00:05:23.337 Test: ut_lvol_clone ...passed 00:05:23.337 Test: ut_lvs_destroy ...passed 00:05:23.337 Test: ut_lvs_unload ...passed 00:05:23.337 Test: ut_lvol_resize ...passed 00:05:23.337 Test: ut_lvol_set_read_only ...passed 00:05:23.337 Test: ut_lvol_hotremove ...passed 00:05:23.337 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:23.337 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:23.337 Test: ut_lvol_read_write ...passed 00:05:23.337 Test: ut_vbdev_lvol_submit_request ...passed 00:05:23.337 Test: ut_lvol_examine_config ...[2024-05-14 21:47:23.667911] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:23.337 passed 00:05:23.337 
Test: ut_lvol_examine_disk ...passed 00:05:23.337 Test: ut_lvol_rename ...passed 00:05:23.337 Test: ut_bdev_finish ...passed 00:05:23.337 Test: ut_lvs_rename ...passed 00:05:23.337 Test: ut_lvol_seek ...passed 00:05:23.337 Test: ut_esnap_dev_create ...[2024-05-14 21:47:23.668031] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:23.337 [2024-05-14 21:47:23.668106] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:23.337 [2024-05-14 21:47:23.668126] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:23.337 [2024-05-14 21:47:23.668193] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:23.337 [2024-05-14 21:47:23.668211] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:23.337 [2024-05-14 21:47:23.668228] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:23.337 passed 00:05:23.337 Test: ut_lvol_esnap_clone_bad_args ...passed 00:05:23.337 Test: ut_lvol_shallow_copy ...passed 00:05:23.337 00:05:23.337 [2024-05-14 21:47:23.668259] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:23.337 [2024-05-14 21:47:23.668299] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:23.337 [2024-05-14 21:47:23.668315] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:23.337 [2024-05-14 21:47:23.668344] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:23.337 [2024-05-14 21:47:23.668365] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:05:23.337 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.337 suites 1 1 n/a 0 0 00:05:23.337 tests 22 22 22 0 0 00:05:23.337 asserts 793 793 793 0 n/a 00:05:23.337 00:05:23.337 Elapsed time = 0.000 seconds 00:05:23.337 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:23.337 00:05:23.337 00:05:23.337 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.337 http://cunit.sourceforge.net/ 00:05:23.337 00:05:23.337 00:05:23.337 Suite: zone_block 00:05:23.337 Test: test_zone_block_create ...passed 00:05:23.338 Test: test_zone_block_create_invalid ...[2024-05-14 21:47:23.682799] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:23.338 [2024-05-14 21:47:23.683335] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-14 21:47:23.683402] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:23.338 [2024-05-14 21:47:23.683422] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-14 21:47:23.683445] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:23.338 [2024-05-14 21:47:23.683467] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-14 21:47:23.683499] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:23.338 passed 00:05:23.338 Test: test_get_zone_info ...[2024-05-14 21:47:23.683525] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-14 21:47:23.683684] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.684136] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.684180] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 passed 00:05:23.338 Test: test_supported_io_types ...passed 00:05:23.338 Test: test_reset_zone ...[2024-05-14 21:47:23.684304] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 passed 00:05:23.338 Test: test_open_zone ...[2024-05-14 21:47:23.684344] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.684421] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.685036] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 passed 00:05:23.338 Test: test_zone_write ...[2024-05-14 21:47:23.685087] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.685223] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:23.338 [2024-05-14 21:47:23.685259] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:23.338 [2024-05-14 21:47:23.685281] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:23.338 [2024-05-14 21:47:23.685295] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.686100] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:23.338 [2024-05-14 21:47:23.686140] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.686157] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:23.338 [2024-05-14 21:47:23.686167] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.686830] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:23.338 [2024-05-14 21:47:23.686850] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 passed 00:05:23.338 Test: test_zone_read ...[2024-05-14 21:47:23.686919] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:23.338 [2024-05-14 21:47:23.686941] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.686965] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:23.338 passed 00:05:23.338 Test: test_close_zone ...[2024-05-14 21:47:23.686980] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.687047] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:23.338 [2024-05-14 21:47:23.687060] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.687099] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.687118] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 passed 00:05:23.338 Test: test_finish_zone ...[2024-05-14 21:47:23.687162] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:23.338 [2024-05-14 21:47:23.687174] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 passed 00:05:23.338 Test: test_append_zone ...[2024-05-14 21:47:23.687240] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.687255] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.687288] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:23.338 [2024-05-14 21:47:23.687300] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.687312] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:23.338 [2024-05-14 21:47:23.687321] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 [2024-05-14 21:47:23.688686] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:23.338 [2024-05-14 21:47:23.688720] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:23.338 passed 00:05:23.338 00:05:23.338 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.338 suites 1 1 n/a 0 0 00:05:23.338 tests 11 11 11 0 0 00:05:23.338 asserts 3437 3437 3437 0 n/a 00:05:23.338 00:05:23.338 Elapsed time = 0.008 seconds 00:05:23.338 21:47:23 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:23.338 00:05:23.338 00:05:23.338 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.338 http://cunit.sourceforge.net/ 00:05:23.338 00:05:23.338 00:05:23.338 Suite: bdev 00:05:23.338 Test: basic ...[2024-05-14 21:47:23.697358] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248db9): Operation not permitted (rc=-1) 00:05:23.338 [2024-05-14 21:47:23.697554] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82dd9a480 (0x248db0): Operation not permitted (rc=-1) 00:05:23.338 [2024-05-14 21:47:23.697573] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248db9): Operation not permitted (rc=-1) 00:05:23.338 passed 00:05:23.338 Test: unregister_and_close ...passed 00:05:23.338 Test: unregister_and_close_different_threads ...passed 00:05:23.338 Test: basic_qos ...passed 00:05:23.338 Test: put_channel_during_reset ...passed 00:05:23.338 Test: aborted_reset ...passed 00:05:23.338 Test: aborted_reset_no_outstanding_io ...passed 00:05:23.338 Test: io_during_reset ...passed 00:05:23.338 Test: reset_completions ...passed 00:05:23.338 Test: io_during_qos_queue ...passed 00:05:23.338 Test: io_during_qos_reset ...passed 00:05:23.338 Test: enomem ...passed 00:05:23.338 Test: enomem_multi_bdev ...passed 00:05:23.338 Test: 
enomem_multi_bdev_unregister ...passed 00:05:23.338 Test: enomem_multi_io_target ...passed 00:05:23.338 Test: qos_dynamic_enable ...passed 00:05:23.338 Test: bdev_histograms_mt ...passed 00:05:23.338 Test: bdev_set_io_timeout_mt ...[2024-05-14 21:47:23.743342] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x82dd9a600 not unregistered 00:05:23.338 passed 00:05:23.338 Test: lock_lba_range_then_submit_io ...[2024-05-14 21:47:23.744914] thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x248d98 already registered (old:0x82dd9a600 new:0x82dd9a780) 00:05:23.338 passed 00:05:23.338 Test: unregister_during_reset ...passed 00:05:23.338 Test: event_notify_and_close ...passed 00:05:23.338 Suite: bdev_wrong_thread 00:05:23.338 Test: spdk_bdev_register_wt ...passed 00:05:23.338 Test: spdk_bdev_examine_wt ...[2024-05-14 21:47:23.750268] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x82dd63700 (0x82dd63700) 00:05:23.338 [2024-05-14 21:47:23.750337] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82dd63700 (0x82dd63700) 00:05:23.338 passed 00:05:23.338 00:05:23.338 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.338 suites 2 2 n/a 0 0 00:05:23.338 tests 23 23 23 0 0 00:05:23.338 asserts 601 601 601 0 n/a 00:05:23.338 00:05:23.338 Elapsed time = 0.055 seconds 00:05:23.338 00:05:23.338 real 0m1.669s 00:05:23.338 user 0m1.325s 00:05:23.338 sys 0m0.332s 00:05:23.338 21:47:23 unittest.unittest_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.338 ************************************ 00:05:23.338 END TEST unittest_bdev 00:05:23.338 ************************************ 00:05:23.338 21:47:23 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:23.338 21:47:23 unittest -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.338 21:47:23 unittest -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.338 21:47:23 unittest -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.339 21:47:23 unittest -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.339 21:47:23 unittest -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:05:23.339 21:47:23 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.339 21:47:23 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.339 21:47:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:23.339 ************************************ 00:05:23.339 START TEST unittest_blob_blobfs 00:05:23.339 ************************************ 00:05:23.339 21:47:23 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1121 -- # unittest_blob 00:05:23.339 21:47:23 unittest.unittest_blob_blobfs -- unit/unittest.sh@38 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:05:23.339 21:47:23 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:05:23.339 00:05:23.339 00:05:23.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.339 http://cunit.sourceforge.net/ 00:05:23.339 
00:05:23.339 00:05:23.339 Suite: blob_nocopy_noextent 00:05:23.339 Test: blob_init ...[2024-05-14 21:47:23.811441] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:23.339 passed 00:05:23.339 Test: blob_thin_provision ...passed 00:05:23.339 Test: blob_read_only ...passed 00:05:23.339 Test: bs_load ...[2024-05-14 21:47:23.892620] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:23.339 passed 00:05:23.339 Test: bs_load_custom_cluster_size ...passed 00:05:23.339 Test: bs_load_after_failed_grow ...passed 00:05:23.339 Test: bs_cluster_sz ...[2024-05-14 21:47:23.919244] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:23.339 [2024-05-14 21:47:23.919332] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:23.339 [2024-05-14 21:47:23.919351] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:23.598 passed 00:05:23.598 Test: bs_resize_md ...passed 00:05:23.598 Test: bs_destroy ...passed 00:05:23.598 Test: bs_type ...passed 00:05:23.598 Test: bs_super_block ...passed 00:05:23.598 Test: bs_test_recover_cluster_count ...passed 00:05:23.598 Test: bs_grow_live ...passed 00:05:23.598 Test: bs_grow_live_no_space ...passed 00:05:23.598 Test: bs_test_grow ...passed 00:05:23.598 Test: blob_serialize_test ...passed 00:05:23.598 Test: super_block_crc ...passed 00:05:23.598 Test: blob_thin_prov_write_count_io ...passed 00:05:23.598 Test: blob_thin_prov_unmap_cluster ...passed 00:05:23.598 Test: bs_load_iter_test ...passed 00:05:23.598 Test: blob_relations ...[2024-05-14 21:47:24.077176] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.598 [2024-05-14 21:47:24.077247] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.598 [2024-05-14 21:47:24.077381] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.598 [2024-05-14 21:47:24.077394] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.598 passed 00:05:23.598 Test: blob_relations2 ...[2024-05-14 21:47:24.088904] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.598 [2024-05-14 21:47:24.088960] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.598 [2024-05-14 21:47:24.088971] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.598 [2024-05-14 21:47:24.088980] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.598 [2024-05-14 21:47:24.089147] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.598 [2024-05-14 
21:47:24.089161] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.598 [2024-05-14 21:47:24.089205] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.598 [2024-05-14 21:47:24.089215] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.598 passed 00:05:23.598 Test: blob_relations3 ...passed 00:05:23.856 Test: blobstore_clean_power_failure ...passed 00:05:23.856 Test: blob_delete_snapshot_power_failure ...[2024-05-14 21:47:24.242347] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:23.856 [2024-05-14 21:47:24.253585] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:23.856 [2024-05-14 21:47:24.253631] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:23.856 [2024-05-14 21:47:24.253640] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.856 [2024-05-14 21:47:24.264684] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:23.856 [2024-05-14 21:47:24.264718] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:23.856 [2024-05-14 21:47:24.264727] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:23.856 [2024-05-14 21:47:24.264735] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.856 [2024-05-14 21:47:24.275711] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:23.856 [2024-05-14 21:47:24.275748] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.856 [2024-05-14 21:47:24.286728] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:23.856 [2024-05-14 21:47:24.286773] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.856 [2024-05-14 21:47:24.297631] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:23.856 [2024-05-14 21:47:24.297672] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.856 passed 00:05:23.856 Test: blob_create_snapshot_power_failure ...[2024-05-14 21:47:24.331256] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:23.856 [2024-05-14 21:47:24.353026] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:23.856 [2024-05-14 21:47:24.363809] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:23.856 passed 
00:05:23.856 Test: blob_io_unit ...passed 00:05:23.856 Test: blob_io_unit_compatibility ...passed 00:05:23.856 Test: blob_ext_md_pages ...passed 00:05:24.115 Test: blob_esnap_io_4096_4096 ...passed 00:05:24.115 Test: blob_esnap_io_512_512 ...passed 00:05:24.115 Test: blob_esnap_io_4096_512 ...passed 00:05:24.115 Test: blob_esnap_io_512_4096 ...passed 00:05:24.115 Test: blob_esnap_clone_resize ...passed 00:05:24.115 Suite: blob_bs_nocopy_noextent 00:05:24.115 Test: blob_open ...passed 00:05:24.115 Test: blob_create ...[2024-05-14 21:47:24.597840] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:24.115 passed 00:05:24.115 Test: blob_create_loop ...passed 00:05:24.115 Test: blob_create_fail ...[2024-05-14 21:47:24.679700] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:24.115 passed 00:05:24.375 Test: blob_create_internal ...passed 00:05:24.375 Test: blob_create_zero_extent ...passed 00:05:24.375 Test: blob_snapshot ...passed 00:05:24.375 Test: blob_clone ...passed 00:05:24.375 Test: blob_inflate ...[2024-05-14 21:47:24.856130] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:24.375 passed 00:05:24.375 Test: blob_delete ...passed 00:05:24.375 Test: blob_resize_test ...[2024-05-14 21:47:24.920059] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:24.375 passed 00:05:24.637 Test: blob_resize_thin_test ...passed 00:05:24.637 Test: channel_ops ...passed 00:05:24.637 Test: blob_super ...passed 00:05:24.637 Test: blob_rw_verify_iov ...passed 00:05:24.637 Test: blob_unmap ...passed 00:05:24.637 Test: blob_iter ...passed 00:05:24.637 Test: blob_parse_md ...passed 00:05:24.637 Test: bs_load_pending_removal ...passed 00:05:24.637 Test: bs_unload ...[2024-05-14 21:47:25.220093] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:24.895 passed 00:05:24.895 Test: bs_usable_clusters ...passed 00:05:24.895 Test: blob_crc ...[2024-05-14 21:47:25.287753] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:24.895 [2024-05-14 21:47:25.287808] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:24.895 passed 00:05:24.895 Test: blob_flags ...passed 00:05:24.895 Test: bs_version ...passed 00:05:24.895 Test: blob_set_xattrs_test ...[2024-05-14 21:47:25.389591] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:24.895 [2024-05-14 21:47:25.389654] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:24.895 passed 00:05:24.895 Test: blob_thin_prov_alloc ...passed 00:05:24.895 Test: blob_insert_cluster_msg_test ...passed 00:05:25.154 Test: blob_thin_prov_rw ...passed 00:05:25.154 Test: blob_thin_prov_rle ...passed 00:05:25.154 Test: blob_thin_prov_rw_iov ...passed 00:05:25.154 Test: blob_snapshot_rw ...passed 00:05:25.154 Test: 
blob_snapshot_rw_iov ...passed 00:05:25.154 Test: blob_inflate_rw ...passed 00:05:25.412 Test: blob_snapshot_freeze_io ...passed 00:05:25.412 Test: blob_operation_split_rw ...passed 00:05:25.412 Test: blob_operation_split_rw_iov ...passed 00:05:25.412 Test: blob_simultaneous_operations ...[2024-05-14 21:47:25.904603] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.412 [2024-05-14 21:47:25.904675] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.412 [2024-05-14 21:47:25.904973] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.412 [2024-05-14 21:47:25.904984] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.412 [2024-05-14 21:47:25.908438] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.412 [2024-05-14 21:47:25.908466] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.412 [2024-05-14 21:47:25.908484] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.412 [2024-05-14 21:47:25.908491] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.412 passed 00:05:25.412 Test: blob_persist_test ...passed 00:05:25.412 Test: blob_decouple_snapshot ...passed 00:05:25.672 Test: blob_seek_io_unit ...passed 00:05:25.672 Test: blob_nested_freezes ...passed 00:05:25.672 Test: blob_clone_resize ...passed 00:05:25.672 Test: blob_shallow_copy ...[2024-05-14 21:47:26.139774] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:25.672 [2024-05-14 21:47:26.139852] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:25.672 [2024-05-14 21:47:26.139865] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:25.672 passed 00:05:25.672 Suite: blob_blob_nocopy_noextent 00:05:25.672 Test: blob_write ...passed 00:05:25.672 Test: blob_read ...passed 00:05:25.931 Test: blob_rw_verify ...passed 00:05:25.931 Test: blob_rw_verify_iov_nomem ...passed 00:05:25.931 Test: blob_rw_iov_read_only ...passed 00:05:25.931 Test: blob_xattr ...passed 00:05:25.931 Test: blob_dirty_shutdown ...passed 00:05:25.931 Test: blob_is_degraded ...passed 00:05:25.931 Suite: blob_esnap_bs_nocopy_noextent 00:05:25.931 Test: blob_esnap_create ...passed 00:05:25.931 Test: blob_esnap_thread_add_remove ...passed 00:05:26.189 Test: blob_esnap_clone_snapshot ...passed 00:05:26.189 Test: blob_esnap_clone_inflate ...passed 00:05:26.189 Test: blob_esnap_clone_decouple ...passed 00:05:26.189 Test: blob_esnap_clone_reload ...passed 00:05:26.189 Test: blob_esnap_hotplug ...passed 00:05:26.189 Suite: blob_nocopy_extent 00:05:26.189 Test: blob_init ...[2024-05-14 21:47:26.668463] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:26.189 passed 00:05:26.189 Test: blob_thin_provision ...passed 00:05:26.189 Test: blob_read_only ...passed 00:05:26.189 Test: bs_load ...[2024-05-14 21:47:26.716357] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:26.189 passed 00:05:26.189 Test: bs_load_custom_cluster_size ...passed 00:05:26.189 Test: bs_load_after_failed_grow ...passed 00:05:26.189 Test: bs_cluster_sz ...[2024-05-14 21:47:26.740407] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:26.189 [2024-05-14 21:47:26.740484] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:26.189 [2024-05-14 21:47:26.740500] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:26.189 passed 00:05:26.189 Test: bs_resize_md ...passed 00:05:26.189 Test: bs_destroy ...passed 00:05:26.448 Test: bs_type ...passed 00:05:26.448 Test: bs_super_block ...passed 00:05:26.448 Test: bs_test_recover_cluster_count ...passed 00:05:26.448 Test: bs_grow_live ...passed 00:05:26.448 Test: bs_grow_live_no_space ...passed 00:05:26.448 Test: bs_test_grow ...passed 00:05:26.448 Test: blob_serialize_test ...passed 00:05:26.449 Test: super_block_crc ...passed 00:05:26.449 Test: blob_thin_prov_write_count_io ...passed 00:05:26.449 Test: blob_thin_prov_unmap_cluster ...passed 00:05:26.449 Test: bs_load_iter_test ...passed 00:05:26.449 Test: blob_relations ...[2024-05-14 21:47:26.914143] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.449 [2024-05-14 21:47:26.914211] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.449 [2024-05-14 21:47:26.914333] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.449 [2024-05-14 21:47:26.914343] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.449 passed 00:05:26.449 Test: blob_relations2 ...[2024-05-14 21:47:26.926048] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.449 [2024-05-14 21:47:26.926094] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.449 [2024-05-14 21:47:26.926104] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.449 [2024-05-14 21:47:26.926110] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.449 [2024-05-14 21:47:26.926253] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.449 [2024-05-14 21:47:26.926264] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to 
remove blob 00:05:26.449 [2024-05-14 21:47:26.926302] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.449 [2024-05-14 21:47:26.926318] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.449 passed 00:05:26.449 Test: blob_relations3 ...passed 00:05:26.707 Test: blobstore_clean_power_failure ...passed 00:05:26.707 Test: blob_delete_snapshot_power_failure ...[2024-05-14 21:47:27.084191] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:26.707 [2024-05-14 21:47:27.095903] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:26.707 [2024-05-14 21:47:27.107286] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:26.707 [2024-05-14 21:47:27.107343] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.707 [2024-05-14 21:47:27.107353] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.707 [2024-05-14 21:47:27.118602] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:26.707 [2024-05-14 21:47:27.118642] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:26.707 [2024-05-14 21:47:27.118650] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.707 [2024-05-14 21:47:27.118658] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.707 [2024-05-14 21:47:27.129855] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:26.707 [2024-05-14 21:47:27.129897] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:26.707 [2024-05-14 21:47:27.129905] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.707 [2024-05-14 21:47:27.129913] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.707 [2024-05-14 21:47:27.141078] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:26.707 [2024-05-14 21:47:27.141120] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.707 [2024-05-14 21:47:27.152382] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:26.707 [2024-05-14 21:47:27.152429] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.707 [2024-05-14 21:47:27.163441] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:26.707 [2024-05-14 21:47:27.163496] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.707 passed 00:05:26.707 Test: blob_create_snapshot_power_failure ...[2024-05-14 21:47:27.196892] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:26.707 [2024-05-14 21:47:27.207655] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:26.707 [2024-05-14 21:47:27.229455] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:26.707 [2024-05-14 21:47:27.240229] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:26.707 passed 00:05:26.707 Test: blob_io_unit ...passed 00:05:26.966 Test: blob_io_unit_compatibility ...passed 00:05:26.966 Test: blob_ext_md_pages ...passed 00:05:26.966 Test: blob_esnap_io_4096_4096 ...passed 00:05:26.966 Test: blob_esnap_io_512_512 ...passed 00:05:26.966 Test: blob_esnap_io_4096_512 ...passed 00:05:26.966 Test: blob_esnap_io_512_4096 ...passed 00:05:26.966 Test: blob_esnap_clone_resize ...passed 00:05:26.966 Suite: blob_bs_nocopy_extent 00:05:26.966 Test: blob_open ...passed 00:05:26.966 Test: blob_create ...[2024-05-14 21:47:27.480193] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:26.966 passed 00:05:26.966 Test: blob_create_loop ...passed 00:05:27.225 Test: blob_create_fail ...[2024-05-14 21:47:27.559904] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.225 passed 00:05:27.225 Test: blob_create_internal ...passed 00:05:27.225 Test: blob_create_zero_extent ...passed 00:05:27.225 Test: blob_snapshot ...passed 00:05:27.225 Test: blob_clone ...passed 00:05:27.225 Test: blob_inflate ...[2024-05-14 21:47:27.734646] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:05:27.225 passed 00:05:27.225 Test: blob_delete ...passed 00:05:27.225 Test: blob_resize_test ...[2024-05-14 21:47:27.797936] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:27.225 passed 00:05:27.483 Test: blob_resize_thin_test ...passed 00:05:27.483 Test: channel_ops ...passed 00:05:27.483 Test: blob_super ...passed 00:05:27.484 Test: blob_rw_verify_iov ...passed 00:05:27.484 Test: blob_unmap ...passed 00:05:27.484 Test: blob_iter ...passed 00:05:27.484 Test: blob_parse_md ...passed 00:05:27.742 Test: bs_load_pending_removal ...passed 00:05:27.742 Test: bs_unload ...[2024-05-14 21:47:28.098554] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:27.742 passed 00:05:27.742 Test: bs_usable_clusters ...passed 00:05:27.742 Test: blob_crc ...[2024-05-14 21:47:28.165977] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:27.742 [2024-05-14 21:47:28.166045] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:27.742 passed 00:05:27.742 Test: blob_flags ...passed 00:05:27.742 Test: bs_version ...passed 00:05:27.742 Test: blob_set_xattrs_test ...[2024-05-14 21:47:28.265451] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.742 [2024-05-14 21:47:28.265513] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.742 passed 00:05:27.742 Test: blob_thin_prov_alloc ...passed 00:05:28.001 Test: blob_insert_cluster_msg_test ...passed 00:05:28.001 Test: blob_thin_prov_rw ...passed 00:05:28.001 Test: blob_thin_prov_rle ...passed 00:05:28.001 Test: blob_thin_prov_rw_iov ...passed 00:05:28.001 Test: blob_snapshot_rw ...passed 00:05:28.001 Test: blob_snapshot_rw_iov ...passed 00:05:28.260 Test: blob_inflate_rw ...passed 00:05:28.260 Test: blob_snapshot_freeze_io ...passed 00:05:28.260 Test: blob_operation_split_rw ...passed 00:05:28.260 Test: blob_operation_split_rw_iov ...passed 00:05:28.260 Test: blob_simultaneous_operations ...[2024-05-14 21:47:28.772690] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.260 [2024-05-14 21:47:28.772756] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.260 [2024-05-14 21:47:28.773049] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.260 [2024-05-14 21:47:28.773059] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.260 [2024-05-14 21:47:28.776479] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.260 [2024-05-14 21:47:28.776501] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.260 [2024-05-14 21:47:28.776519] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is 
open 00:05:28.260 [2024-05-14 21:47:28.776526] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.260 passed 00:05:28.260 Test: blob_persist_test ...passed 00:05:28.519 Test: blob_decouple_snapshot ...passed 00:05:28.519 Test: blob_seek_io_unit ...passed 00:05:28.519 Test: blob_nested_freezes ...passed 00:05:28.519 Test: blob_clone_resize ...passed 00:05:28.519 Test: blob_shallow_copy ...[2024-05-14 21:47:28.994306] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:28.519 [2024-05-14 21:47:28.994386] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:28.519 [2024-05-14 21:47:28.994398] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:28.519 passed 00:05:28.519 Suite: blob_blob_nocopy_extent 00:05:28.519 Test: blob_write ...passed 00:05:28.519 Test: blob_read ...passed 00:05:28.778 Test: blob_rw_verify ...passed 00:05:28.778 Test: blob_rw_verify_iov_nomem ...passed 00:05:28.778 Test: blob_rw_iov_read_only ...passed 00:05:28.778 Test: blob_xattr ...passed 00:05:28.778 Test: blob_dirty_shutdown ...passed 00:05:28.778 Test: blob_is_degraded ...passed 00:05:28.778 Suite: blob_esnap_bs_nocopy_extent 00:05:28.778 Test: blob_esnap_create ...passed 00:05:28.778 Test: blob_esnap_thread_add_remove ...passed 00:05:29.057 Test: blob_esnap_clone_snapshot ...passed 00:05:29.057 Test: blob_esnap_clone_inflate ...passed 00:05:29.057 Test: blob_esnap_clone_decouple ...passed 00:05:29.057 Test: blob_esnap_clone_reload ...passed 00:05:29.057 Test: blob_esnap_hotplug ...passed 00:05:29.057 Suite: blob_copy_noextent 00:05:29.057 Test: blob_init ...[2024-05-14 21:47:29.531858] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:29.057 passed 00:05:29.057 Test: blob_thin_provision ...passed 00:05:29.057 Test: blob_read_only ...passed 00:05:29.057 Test: bs_load ...[2024-05-14 21:47:29.578200] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:29.057 passed 00:05:29.057 Test: bs_load_custom_cluster_size ...passed 00:05:29.057 Test: bs_load_after_failed_grow ...passed 00:05:29.057 Test: bs_cluster_sz ...[2024-05-14 21:47:29.602233] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:29.057 [2024-05-14 21:47:29.602308] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:29.057 [2024-05-14 21:47:29.602322] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:29.057 passed 00:05:29.057 Test: bs_resize_md ...passed 00:05:29.057 Test: bs_destroy ...passed 00:05:29.316 Test: bs_type ...passed 00:05:29.316 Test: bs_super_block ...passed 00:05:29.316 Test: bs_test_recover_cluster_count ...passed 00:05:29.316 Test: bs_grow_live ...passed 00:05:29.316 Test: bs_grow_live_no_space ...passed 00:05:29.316 Test: bs_test_grow ...passed 00:05:29.316 Test: blob_serialize_test ...passed 00:05:29.316 Test: super_block_crc ...passed 00:05:29.316 Test: blob_thin_prov_write_count_io ...passed 00:05:29.316 Test: blob_thin_prov_unmap_cluster ...passed 00:05:29.316 Test: bs_load_iter_test ...passed 00:05:29.316 Test: blob_relations ...[2024-05-14 21:47:29.772598] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.316 [2024-05-14 21:47:29.772668] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.316 [2024-05-14 21:47:29.772780] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.316 [2024-05-14 21:47:29.772791] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.316 passed 00:05:29.316 Test: blob_relations2 ...[2024-05-14 21:47:29.785170] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.316 [2024-05-14 21:47:29.785226] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.316 [2024-05-14 21:47:29.785235] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.316 [2024-05-14 21:47:29.785242] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.316 [2024-05-14 21:47:29.785375] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.316 [2024-05-14 21:47:29.785386] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.316 [2024-05-14 21:47:29.785432] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.316 [2024-05-14 21:47:29.785439] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.316 passed 00:05:29.316 Test: blob_relations3 ...passed 00:05:29.574 Test: blobstore_clean_power_failure ...passed 00:05:29.574 Test: blob_delete_snapshot_power_failure ...[2024-05-14 21:47:29.947737] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:29.574 [2024-05-14 21:47:29.958582] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:29.574 [2024-05-14 21:47:29.958631] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: 
Failed to open clone 00:05:29.574 [2024-05-14 21:47:29.958640] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.574 [2024-05-14 21:47:29.969500] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:29.574 [2024-05-14 21:47:29.969545] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:29.574 [2024-05-14 21:47:29.969553] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:29.574 [2024-05-14 21:47:29.969561] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.574 [2024-05-14 21:47:29.980362] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:29.574 [2024-05-14 21:47:29.980408] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.574 [2024-05-14 21:47:29.991557] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:29.574 [2024-05-14 21:47:29.991614] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.574 [2024-05-14 21:47:30.002842] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:29.574 [2024-05-14 21:47:30.002890] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.574 passed 00:05:29.574 Test: blob_create_snapshot_power_failure ...[2024-05-14 21:47:30.037423] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:29.574 [2024-05-14 21:47:30.060593] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:29.574 [2024-05-14 21:47:30.072044] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:29.574 passed 00:05:29.574 Test: blob_io_unit ...passed 00:05:29.574 Test: blob_io_unit_compatibility ...passed 00:05:29.574 Test: blob_ext_md_pages ...passed 00:05:29.574 Test: blob_esnap_io_4096_4096 ...passed 00:05:29.832 Test: blob_esnap_io_512_512 ...passed 00:05:29.832 Test: blob_esnap_io_4096_512 ...passed 00:05:29.832 Test: blob_esnap_io_512_4096 ...passed 00:05:29.832 Test: blob_esnap_clone_resize ...passed 00:05:29.832 Suite: blob_bs_copy_noextent 00:05:29.832 Test: blob_open ...passed 00:05:29.832 Test: blob_create ...[2024-05-14 21:47:30.309045] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:29.832 passed 00:05:29.832 Test: blob_create_loop ...passed 00:05:29.832 Test: blob_create_fail ...[2024-05-14 21:47:30.389809] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:29.832 passed 00:05:30.090 Test: blob_create_internal ...passed 00:05:30.090 Test: blob_create_zero_extent ...passed 
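[editorial note] The blob_create and blob_create_fail cases above show bs_create_blob rejecting a 65-cluster request with error -28 (ENOSPC) and an invalid zero-size request with -22 (EINVAL). Below is a minimal C sketch, not taken from the test source, of how a caller drives that same path through the public blobstore API; the blobstore handle bs and the CUnit/poller scaffolding are assumed to exist, and the one-argument spdk_blob_opts_init() shown here gained a size argument in newer SPDK releases.

#include <errno.h>
#include "spdk/blob.h"

/* Hypothetical completion callback: records the result so a test body can assert on it. */
static int g_bserrno;
static spdk_blob_id g_blobid;

static void
create_complete(void *cb_arg, spdk_blob_id blobid, int bserrno)
{
	g_blobid = blobid;
	g_bserrno = bserrno;
}

static void
try_create_oversized(struct spdk_blob_store *bs)
{
	struct spdk_blob_opts opts;

	spdk_blob_opts_init(&opts);	/* older one-argument form; newer SPDK also passes sizeof(opts) */
	opts.num_clusters = 65;		/* more clusters than the unit-test backing device provides */

	/* Asynchronous create; the test framework polls the thread until create_complete() runs. */
	spdk_bs_create_blob_ext(bs, &opts, create_complete, NULL);

	/* Expected outcome, matching the "Unknown error -28, size in clusters/size: 65" line above:
	 * g_bserrno == -ENOSPC. A zero-size/invalid request is the -EINVAL (-22) variant. */
}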
00:05:30.090 Test: blob_snapshot ...passed 00:05:30.090 Test: blob_clone ...passed 00:05:30.090 Test: blob_inflate ...[2024-05-14 21:47:30.557027] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:30.090 passed 00:05:30.090 Test: blob_delete ...passed 00:05:30.090 Test: blob_resize_test ...[2024-05-14 21:47:30.624516] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:30.090 passed 00:05:30.090 Test: blob_resize_thin_test ...passed 00:05:30.348 Test: channel_ops ...passed 00:05:30.348 Test: blob_super ...passed 00:05:30.348 Test: blob_rw_verify_iov ...passed 00:05:30.348 Test: blob_unmap ...passed 00:05:30.348 Test: blob_iter ...passed 00:05:30.348 Test: blob_parse_md ...passed 00:05:30.348 Test: bs_load_pending_removal ...passed 00:05:30.607 Test: bs_unload ...[2024-05-14 21:47:30.945268] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:30.607 passed 00:05:30.607 Test: bs_usable_clusters ...passed 00:05:30.607 Test: blob_crc ...[2024-05-14 21:47:31.017379] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:30.607 [2024-05-14 21:47:31.017447] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:30.607 passed 00:05:30.607 Test: blob_flags ...passed 00:05:30.607 Test: bs_version ...passed 00:05:30.607 Test: blob_set_xattrs_test ...[2024-05-14 21:47:31.120097] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.607 [2024-05-14 21:47:31.120159] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.607 passed 00:05:30.607 Test: blob_thin_prov_alloc ...passed 00:05:30.865 Test: blob_insert_cluster_msg_test ...passed 00:05:30.865 Test: blob_thin_prov_rw ...passed 00:05:30.865 Test: blob_thin_prov_rle ...passed 00:05:30.865 Test: blob_thin_prov_rw_iov ...passed 00:05:30.865 Test: blob_snapshot_rw ...passed 00:05:30.865 Test: blob_snapshot_rw_iov ...passed 00:05:31.125 Test: blob_inflate_rw ...passed 00:05:31.125 Test: blob_snapshot_freeze_io ...passed 00:05:31.125 Test: blob_operation_split_rw ...passed 00:05:31.125 Test: blob_operation_split_rw_iov ...passed 00:05:31.125 Test: blob_simultaneous_operations ...[2024-05-14 21:47:31.642004] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.125 [2024-05-14 21:47:31.642079] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.125 [2024-05-14 21:47:31.642419] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.125 [2024-05-14 21:47:31.642432] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.125 [2024-05-14 21:47:31.644742] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.125 [2024-05-14 
21:47:31.644781] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.125 [2024-05-14 21:47:31.644798] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.125 [2024-05-14 21:47:31.644806] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.125 passed 00:05:31.125 Test: blob_persist_test ...passed 00:05:31.383 Test: blob_decouple_snapshot ...passed 00:05:31.383 Test: blob_seek_io_unit ...passed 00:05:31.383 Test: blob_nested_freezes ...passed 00:05:31.383 Test: blob_clone_resize ...passed 00:05:31.383 Test: blob_shallow_copy ...[2024-05-14 21:47:31.870748] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:31.383 [2024-05-14 21:47:31.870823] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:31.383 [2024-05-14 21:47:31.870834] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:31.383 passed 00:05:31.384 Suite: blob_blob_copy_noextent 00:05:31.384 Test: blob_write ...passed 00:05:31.384 Test: blob_read ...passed 00:05:31.642 Test: blob_rw_verify ...passed 00:05:31.642 Test: blob_rw_verify_iov_nomem ...passed 00:05:31.642 Test: blob_rw_iov_read_only ...passed 00:05:31.642 Test: blob_xattr ...passed 00:05:31.642 Test: blob_dirty_shutdown ...passed 00:05:31.642 Test: blob_is_degraded ...passed 00:05:31.642 Suite: blob_esnap_bs_copy_noextent 00:05:31.642 Test: blob_esnap_create ...passed 00:05:31.642 Test: blob_esnap_thread_add_remove ...passed 00:05:31.899 Test: blob_esnap_clone_snapshot ...passed 00:05:31.900 Test: blob_esnap_clone_inflate ...passed 00:05:31.900 Test: blob_esnap_clone_decouple ...passed 00:05:31.900 Test: blob_esnap_clone_reload ...passed 00:05:31.900 Test: blob_esnap_hotplug ...passed 00:05:31.900 Suite: blob_copy_extent 00:05:31.900 Test: blob_init ...[2024-05-14 21:47:32.391705] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5464:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:31.900 passed 00:05:31.900 Test: blob_thin_provision ...passed 00:05:31.900 Test: blob_read_only ...passed 00:05:31.900 Test: bs_load ...[2024-05-14 21:47:32.438546] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 939:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:31.900 passed 00:05:31.900 Test: bs_load_custom_cluster_size ...passed 00:05:31.900 Test: bs_load_after_failed_grow ...passed 00:05:31.900 Test: bs_cluster_sz ...[2024-05-14 21:47:32.462680] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:31.900 [2024-05-14 21:47:32.462761] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5596:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:31.900 [2024-05-14 21:47:32.462775] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3857:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:31.900 passed 00:05:31.900 Test: bs_resize_md ...passed 00:05:32.158 Test: bs_destroy ...passed 00:05:32.158 Test: bs_type ...passed 00:05:32.158 Test: bs_super_block ...passed 00:05:32.158 Test: bs_test_recover_cluster_count ...passed 00:05:32.158 Test: bs_grow_live ...passed 00:05:32.158 Test: bs_grow_live_no_space ...passed 00:05:32.158 Test: bs_test_grow ...passed 00:05:32.158 Test: blob_serialize_test ...passed 00:05:32.158 Test: super_block_crc ...passed 00:05:32.158 Test: blob_thin_prov_write_count_io ...passed 00:05:32.158 Test: blob_thin_prov_unmap_cluster ...passed 00:05:32.158 Test: bs_load_iter_test ...passed 00:05:32.158 Test: blob_relations ...[2024-05-14 21:47:32.622340] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.158 [2024-05-14 21:47:32.622416] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.158 [2024-05-14 21:47:32.622541] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.158 [2024-05-14 21:47:32.622552] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.158 passed 00:05:32.158 Test: blob_relations2 ...[2024-05-14 21:47:32.634910] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.158 [2024-05-14 21:47:32.634945] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.158 [2024-05-14 21:47:32.634954] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.158 [2024-05-14 21:47:32.634976] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.158 [2024-05-14 21:47:32.635127] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.158 [2024-05-14 21:47:32.635138] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.158 [2024-05-14 21:47:32.635179] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.158 [2024-05-14 21:47:32.635188] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.158 passed 00:05:32.158 Test: blob_relations3 ...passed 00:05:32.416 Test: blobstore_clean_power_failure ...passed 00:05:32.416 Test: blob_delete_snapshot_power_failure ...[2024-05-14 21:47:32.801673] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:32.416 [2024-05-14 21:47:32.814127] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:32.416 [2024-05-14 21:47:32.826521] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for 
blobid 0x100000000: -5 00:05:32.416 [2024-05-14 21:47:32.826580] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.416 [2024-05-14 21:47:32.826589] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.416 [2024-05-14 21:47:32.838723] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:32.416 [2024-05-14 21:47:32.838763] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:32.416 [2024-05-14 21:47:32.838772] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.416 [2024-05-14 21:47:32.838780] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.416 [2024-05-14 21:47:32.850812] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:32.416 [2024-05-14 21:47:32.850851] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:32.416 [2024-05-14 21:47:32.850859] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.416 [2024-05-14 21:47:32.850866] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.417 [2024-05-14 21:47:32.862761] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:32.417 [2024-05-14 21:47:32.862810] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.417 [2024-05-14 21:47:32.874899] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:32.417 [2024-05-14 21:47:32.874947] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.417 [2024-05-14 21:47:32.887031] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:32.417 [2024-05-14 21:47:32.887086] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.417 passed 00:05:32.417 Test: blob_create_snapshot_power_failure ...[2024-05-14 21:47:32.923808] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:32.417 [2024-05-14 21:47:32.935807] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:32.417 [2024-05-14 21:47:32.959706] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1643:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:32.417 [2024-05-14 21:47:32.971642] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:32.674 passed 00:05:32.674 Test: blob_io_unit ...passed 00:05:32.674 Test: blob_io_unit_compatibility ...passed 00:05:32.674 Test: blob_ext_md_pages ...passed 00:05:32.674 Test: 
blob_esnap_io_4096_4096 ...passed 00:05:32.674 Test: blob_esnap_io_512_512 ...passed 00:05:32.674 Test: blob_esnap_io_4096_512 ...passed 00:05:32.674 Test: blob_esnap_io_512_4096 ...passed 00:05:32.674 Test: blob_esnap_clone_resize ...passed 00:05:32.674 Suite: blob_bs_copy_extent 00:05:32.674 Test: blob_open ...passed 00:05:32.674 Test: blob_create ...[2024-05-14 21:47:33.217044] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:32.674 passed 00:05:32.932 Test: blob_create_loop ...passed 00:05:32.932 Test: blob_create_fail ...[2024-05-14 21:47:33.299408] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:32.932 passed 00:05:32.932 Test: blob_create_internal ...passed 00:05:32.932 Test: blob_create_zero_extent ...passed 00:05:32.933 Test: blob_snapshot ...passed 00:05:32.933 Test: blob_clone ...passed 00:05:32.933 Test: blob_inflate ...[2024-05-14 21:47:33.469384] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:32.933 passed 00:05:32.933 Test: blob_delete ...passed 00:05:33.190 Test: blob_resize_test ...[2024-05-14 21:47:33.537991] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:33.190 passed 00:05:33.190 Test: blob_resize_thin_test ...passed 00:05:33.190 Test: channel_ops ...passed 00:05:33.190 Test: blob_super ...passed 00:05:33.190 Test: blob_rw_verify_iov ...passed 00:05:33.190 Test: blob_unmap ...passed 00:05:33.190 Test: blob_iter ...passed 00:05:33.449 Test: blob_parse_md ...passed 00:05:33.449 Test: bs_load_pending_removal ...passed 00:05:33.449 Test: bs_unload ...[2024-05-14 21:47:33.865392] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:33.449 passed 00:05:33.449 Test: bs_usable_clusters ...passed 00:05:33.449 Test: blob_crc ...[2024-05-14 21:47:33.931513] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:33.449 [2024-05-14 21:47:33.931568] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1652:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:33.449 passed 00:05:33.449 Test: blob_flags ...passed 00:05:33.449 Test: bs_version ...passed 00:05:33.449 Test: blob_set_xattrs_test ...[2024-05-14 21:47:34.029506] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.449 [2024-05-14 21:47:34.029559] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6301:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.707 passed 00:05:33.707 Test: blob_thin_prov_alloc ...passed 00:05:33.707 Test: blob_insert_cluster_msg_test ...passed 00:05:33.707 Test: blob_thin_prov_rw ...passed 00:05:33.707 Test: blob_thin_prov_rle ...passed 00:05:33.707 Test: blob_thin_prov_rw_iov ...passed 00:05:33.707 Test: blob_snapshot_rw ...passed 00:05:33.707 Test: blob_snapshot_rw_iov ...passed 00:05:33.965 Test: blob_inflate_rw ...passed 00:05:33.965 Test: blob_snapshot_freeze_io ...passed 00:05:33.965 Test: blob_operation_split_rw ...passed 
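[editorial note] The bs_unload case in this suite logs "Blobstore still has open blobs": spdk_bs_unload() refuses to unload while any blob handle is still open. A hedged sketch, not from the test source, of the usual shutdown order, closing open blobs before unloading; the blob/bs handles and the polling loop are assumed to come from the surrounding application, and the exact busy errno is an assumption.

#include "spdk/blob.h"

/* Hypothetical completion callbacks: record the errno for the caller to check. */
static void
blob_closed(void *cb_arg, int bserrno)
{
	int *out = cb_arg;
	*out = bserrno;		/* 0 on success */
}

static void
bs_unloaded(void *cb_arg, int bserrno)
{
	int *out = cb_arg;
	*out = bserrno;		/* a busy-style error is the failure path exercised in the log above */
}

static void
shutdown_blobstore(struct spdk_blob *blob, struct spdk_blob_store *bs)
{
	int close_rc = -1, unload_rc = -1;

	/* Close every open blob first; unloading with open blobs is the error path seen above. */
	spdk_blob_close(blob, blob_closed, &close_rc);
	/* ...poll the thread until close_rc is set to 0... */

	spdk_bs_unload(bs, bs_unloaded, &unload_rc);
	/* ...poll until unload_rc is set; expect 0 once no blobs remain open. */
}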
00:05:33.965 Test: blob_operation_split_rw_iov ...passed 00:05:33.965 Test: blob_simultaneous_operations ...[2024-05-14 21:47:34.536684] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.965 [2024-05-14 21:47:34.536753] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:33.965 [2024-05-14 21:47:34.537059] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.965 [2024-05-14 21:47:34.537069] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:33.965 [2024-05-14 21:47:34.539492] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.965 [2024-05-14 21:47:34.539514] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:33.965 [2024-05-14 21:47:34.539532] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:33.965 [2024-05-14 21:47:34.539539] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.223 passed 00:05:34.223 Test: blob_persist_test ...passed 00:05:34.223 Test: blob_decouple_snapshot ...passed 00:05:34.223 Test: blob_seek_io_unit ...passed 00:05:34.223 Test: blob_nested_freezes ...passed 00:05:34.223 Test: blob_clone_resize ...passed 00:05:34.223 Test: blob_shallow_copy ...[2024-05-14 21:47:34.775301] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:34.223 [2024-05-14 21:47:34.775390] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7316:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:34.223 [2024-05-14 21:47:34.775416] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7324:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:34.223 passed 00:05:34.223 Suite: blob_blob_copy_extent 00:05:34.482 Test: blob_write ...passed 00:05:34.482 Test: blob_read ...passed 00:05:34.482 Test: blob_rw_verify ...passed 00:05:34.482 Test: blob_rw_verify_iov_nomem ...passed 00:05:34.482 Test: blob_rw_iov_read_only ...passed 00:05:34.482 Test: blob_xattr ...passed 00:05:34.482 Test: blob_dirty_shutdown ...passed 00:05:34.482 Test: blob_is_degraded ...passed 00:05:34.482 Suite: blob_esnap_bs_copy_extent 00:05:34.741 Test: blob_esnap_create ...passed 00:05:34.741 Test: blob_esnap_thread_add_remove ...passed 00:05:34.741 Test: blob_esnap_clone_snapshot ...passed 00:05:34.741 Test: blob_esnap_clone_inflate ...passed 00:05:34.741 Test: blob_esnap_clone_decouple ...passed 00:05:34.741 Test: blob_esnap_clone_reload ...passed 00:05:34.741 Test: blob_esnap_hotplug ...passed 00:05:34.741 00:05:34.741 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.741 suites 16 16 n/a 0 0 00:05:34.741 tests 368 368 368 0 0 00:05:34.741 asserts 142985 142985 142985 0 n/a 00:05:34.741 00:05:34.741 Elapsed time = 11.500 seconds 00:05:34.741 21:47:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@41 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:05:34.741 00:05:34.741 00:05:34.741 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.741 http://cunit.sourceforge.net/ 00:05:34.741 00:05:34.741 00:05:34.741 Suite: blob_bdev 00:05:34.741 Test: create_bs_dev ...passed 00:05:34.741 Test: create_bs_dev_ro ...passed 00:05:34.741 Test: create_bs_dev_rw ...passed 00:05:34.741 Test: claim_bs_dev ...passed 00:05:34.741 Test: claim_bs_dev_ro ...passed 00:05:34.741 Test: deferred_destroy_refs ...passed 00:05:34.741 Test: deferred_destroy_channels ...passed 00:05:34.741 Test: deferred_destroy_threads ...passed 00:05:34.741 00:05:34.741 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.741 suites 1 1 n/a 0 0 00:05:34.741 tests 8 8 8 0 0 00:05:34.741 asserts 119 119 119 0 n/a 00:05:34.741 00:05:34.741 Elapsed time = 0.000 seconds 00:05:34.741 [2024-05-14 21:47:35.315685] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:05:34.741 [2024-05-14 21:47:35.315851] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:05:34.741 21:47:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:05:34.741 00:05:34.741 00:05:34.741 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.741 http://cunit.sourceforge.net/ 00:05:34.741 00:05:34.741 00:05:34.741 Suite: tree 00:05:34.741 Test: blobfs_tree_op_test ...passed 00:05:34.741 00:05:34.741 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.741 suites 1 1 n/a 0 0 00:05:34.742 tests 1 1 1 0 0 00:05:34.742 asserts 27 27 27 0 n/a 00:05:34.742 00:05:34.742 Elapsed time = 0.000 seconds 00:05:34.742 21:47:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:05:34.742 00:05:34.742 00:05:34.742 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.742 http://cunit.sourceforge.net/ 00:05:34.742 00:05:34.742 00:05:34.742 Suite: blobfs_async_ut 00:05:35.000 Test: fs_init ...passed 00:05:35.000 Test: fs_open ...passed 00:05:35.000 Test: fs_create ...passed 00:05:35.000 Test: fs_truncate ...passed 00:05:35.000 Test: fs_rename ...[2024-05-14 21:47:35.427918] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:05:35.000 passed 00:05:35.000 Test: fs_rw_async ...passed 00:05:35.000 Test: fs_writev_readv_async ...passed 00:05:35.000 Test: tree_find_buffer_ut ...passed 00:05:35.000 Test: channel_ops ...passed 00:05:35.000 Test: channel_ops_sync ...passed 00:05:35.000 00:05:35.000 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.000 suites 1 1 n/a 0 0 00:05:35.000 tests 10 10 10 0 0 00:05:35.000 asserts 292 292 292 0 n/a 00:05:35.000 00:05:35.000 Elapsed time = 0.148 seconds 00:05:35.000 21:47:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@45 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:05:35.000 00:05:35.000 00:05:35.000 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.000 http://cunit.sourceforge.net/ 00:05:35.000 00:05:35.000 00:05:35.000 Suite: blobfs_sync_ut 00:05:35.000 Test: cache_read_after_write ...[2024-05-14 21:47:35.535715] 
/usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:05:35.000 passed 00:05:35.000 Test: file_length ...passed 00:05:35.000 Test: append_write_to_extend_blob ...passed 00:05:35.000 Test: partial_buffer ...passed 00:05:35.000 Test: cache_write_null_buffer ...passed 00:05:35.260 Test: fs_create_sync ...passed 00:05:35.260 Test: fs_rename_sync ...passed 00:05:35.260 Test: cache_append_no_cache ...passed 00:05:35.260 Test: fs_delete_file_without_close ...passed 00:05:35.260 00:05:35.260 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.260 suites 1 1 n/a 0 0 00:05:35.260 tests 9 9 9 0 0 00:05:35.260 asserts 345 345 345 0 n/a 00:05:35.260 00:05:35.260 Elapsed time = 0.281 seconds 00:05:35.260 21:47:35 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:05:35.260 00:05:35.260 00:05:35.260 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.260 http://cunit.sourceforge.net/ 00:05:35.260 00:05:35.260 00:05:35.260 Suite: blobfs_bdev_ut 00:05:35.260 Test: spdk_blobfs_bdev_detect_test ...passed 00:05:35.260 Test: spdk_blobfs_bdev_create_test ...passed 00:05:35.260 Test: spdk_blobfs_bdev_mount_test ...passed 00:05:35.260 00:05:35.260 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.260 suites 1 1 n/a 0 0 00:05:35.260 tests 3 3 3 0 0 00:05:35.260 asserts 9 9 9 0 n/a 00:05:35.260 00:05:35.260 Elapsed time = 0.000 seconds 00:05:35.260 [2024-05-14 21:47:35.640408] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:35.260 [2024-05-14 21:47:35.640588] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:35.260 00:05:35.260 real 0m11.838s 00:05:35.260 user 0m11.814s 00:05:35.260 sys 0m0.166s 00:05:35.260 21:47:35 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.260 ************************************ 00:05:35.260 END TEST unittest_blob_blobfs 00:05:35.260 ************************************ 00:05:35.260 21:47:35 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:05:35.260 21:47:35 unittest -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:05:35.260 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.260 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.260 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.260 ************************************ 00:05:35.260 START TEST unittest_event 00:05:35.260 ************************************ 00:05:35.260 21:47:35 unittest.unittest_event -- common/autotest_common.sh@1121 -- # unittest_event 00:05:35.260 21:47:35 unittest.unittest_event -- unit/unittest.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:05:35.260 00:05:35.260 00:05:35.260 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.260 http://cunit.sourceforge.net/ 00:05:35.260 00:05:35.260 00:05:35.260 Suite: app_suite 00:05:35.260 Test: test_spdk_app_parse_args ...app_ut [options] 00:05:35.260 00:05:35.260 CPU options: 00:05:35.260 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:35.260 (like [0,1,10]) 00:05:35.260 --lcores lcore to CPU mapping list. 
The list is in the format: 00:05:35.260 [<,lcores[@CPUs]>...] 00:05:35.260 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:35.260 Within the group, '-' is used for range separator, 00:05:35.260 ',' is used for single number separator. 00:05:35.260 '( )' can be omitted for single element group, 00:05:35.260 '@' can be omitted if cpus and lcores have the same value 00:05:35.260 --disable-cpumask-locks Disable CPU core lock files. 00:05:35.260 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:35.260 pollers in the app support interrupt mode) 00:05:35.260 -p, --main-core main (primary) core for DPDK 00:05:35.260 00:05:35.260 Configuration options: 00:05:35.260 -c, --config, --json JSON config file 00:05:35.260 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:35.260 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:05:35.260 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:35.260 --rpcs-allowed comma-separated list of permitted RPCS 00:05:35.260 --json-ignore-init-errors don't exit on invalid config entry 00:05:35.260 00:05:35.260 Memory options: 00:05:35.260 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:35.260 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:35.260 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:35.260 -R, --huge-unlink unlink huge files after initialization 00:05:35.260 -n, --mem-channels number of memory channels used for DPDK 00:05:35.260 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:05:35.260 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:35.260 --no-huge run without using hugepages 00:05:35.260 -i, --shm-id shared memory ID (optional) 00:05:35.260 -g, --single-file-segments force creating just one hugetlbfs file 00:05:35.260 00:05:35.260 PCI options: 00:05:35.260 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:35.260 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:35.260 -u, --no-pci disable PCI access 00:05:35.260 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:35.260 00:05:35.260 Log options: 00:05:35.260 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:35.260 --silence-noticelog disable notice level logging to stderr 00:05:35.260 00:05:35.260 Trace options: 00:05:35.260 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:35.260 setting 0 to disable trace (default 32768) 00:05:35.260 Tracepoints vary in size and can use more than one trace entry. 00:05:35.260 -e, --tpoint-group [:] 00:05:35.260 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:35.260 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:35.260 a tracepoint group. First tpoint inside a group can be enabled by 00:05:35.260 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:35.261 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:05:35.261 in /include/spdk_internal/trace_defs.h 00:05:35.261 00:05:35.261 Other options: 00:05:35.261 -h, --help show this usage 00:05:35.261 -v, --version print SPDK version 00:05:35.261 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:35.261 --env-context Opaque context for use of the env implementation 00:05:35.261 app_ut: invalid option -- z 00:05:35.261 app_ut: unrecognized option `--test-long-opt' 00:05:35.261 app_ut [options] 00:05:35.261 00:05:35.261 CPU options: 00:05:35.261 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:35.261 (like [0,1,10]) 00:05:35.261 --lcores lcore to CPU mapping list. The list is in the format: 00:05:35.261 [<,lcores[@CPUs]>...] 00:05:35.261 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:35.261 Within the group, '-' is used for range separator, 00:05:35.261 ',' is used for single number separator. 00:05:35.261 '( )' can be omitted for single element group, 00:05:35.261 '@' can be omitted if cpus and lcores have the same value 00:05:35.261 --disable-cpumask-locks Disable CPU core lock files. 00:05:35.261 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:35.261 pollers in the app support interrupt mode) 00:05:35.261 -p, --main-core main (primary) core for DPDK 00:05:35.261 00:05:35.261 Configuration options: 00:05:35.261 -c, --config, --json JSON config file 00:05:35.261 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:35.261 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:05:35.261 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:35.261 --rpcs-allowed comma-separated list of permitted RPCS 00:05:35.261 --json-ignore-init-errors don't exit on invalid config entry 00:05:35.261 00:05:35.261 Memory options: 00:05:35.261 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:35.261 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:35.261 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:35.261 -R, --huge-unlink unlink huge files after initialization 00:05:35.261 -n, --mem-channels number of memory channels used for DPDK 00:05:35.261 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:05:35.261 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:35.261 --no-huge run without using hugepages 00:05:35.261 -i, --shm-id shared memory ID (optional) 00:05:35.261 -g, --single-file-segments force creating just one hugetlbfs file 00:05:35.261 00:05:35.261 PCI options: 00:05:35.261 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:35.261 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:35.261 -u, --no-pci disable PCI access 00:05:35.261 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:35.261 00:05:35.261 Log options: 00:05:35.261 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:35.261 --silence-noticelog disable notice level logging to stderr 00:05:35.261 00:05:35.261 Trace options: 00:05:35.261 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:35.261 setting 0 to disable trace (default 32768) 00:05:35.261 Tracepoints vary in size and can use more than one trace entry. 
00:05:35.261 -e, --tpoint-group [:] 00:05:35.261 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:35.261 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:35.261 a tracepoint group. First tpoint inside a group can be enabled by 00:05:35.261 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:35.261 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:05:35.261 in /include/spdk_internal/trace_defs.h 00:05:35.261 00:05:35.261 Other options: 00:05:35.261 -h, --help show this usage 00:05:35.261 -v, --version print SPDK version 00:05:35.261 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:35.261 --env-context Opaque context for use of the env implementation 00:05:35.261 [2024-05-14 21:47:35.680109] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:05:35.261 app_ut [options] 00:05:35.261 00:05:35.261 CPU options: 00:05:35.261 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:35.261 (like [0,1,10]) 00:05:35.261 --lcores lcore to CPU mapping list. The list is in the format: 00:05:35.261 [<,lcores[@CPUs]>...] 00:05:35.261 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:35.261 Within the group, '-' is used for range separator, 00:05:35.261 ',' is used for single number separator. 00:05:35.261 '( )' can be omitted for single element group, 00:05:35.261 '@' can be omitted if cpus and lcores have the same value 00:05:35.261 --disable-cpumask-locks Disable CPU core lock files. 00:05:35.261 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:35.261 pollers in the app support interrupt mode) 00:05:35.261 -p, --main-core main (primary) core for DPDK 00:05:35.261 00:05:35.261 Configuration options: 00:05:35.261 -c, --config, --json JSON config file 00:05:35.261 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:35.261 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:05:35.261 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:35.261 --rpcs-allowed comma-separated list of permitted RPCS 00:05:35.261 --json-ignore-init-errors don't exit on invalid config entry 00:05:35.261 00:05:35.261 Memory options: 00:05:35.261 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:35.261 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:35.261 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:35.261 -R, --huge-unlink unlink huge files after initialization 00:05:35.261 -n, --mem-channels number of memory channels used for DPDK 00:05:35.261 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:05:35.261 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:35.261 --no-huge run without using hugepages 00:05:35.261 -i, --shm-id shared memory ID (optional) 00:05:35.261 -g, --single-file-segments force creating just one hugetlbfs file 00:05:35.261 00:05:35.261 PCI options: 00:05:35.261 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:35.261 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:35.261 -u, --no-pci disable PCI access 00:05:35.261 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:35.261 00:05:35.261 Log options: 00:05:35.261 -L, --logflag enable log flag (all, app_rpc, [2024-05-14 21:47:35.680342] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:05:35.261 json_util, rpc, thread, trace) 00:05:35.261 --silence-noticelog disable notice level logging to stderr 00:05:35.261 00:05:35.261 Trace options: 00:05:35.261 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:35.261 setting 0 to disable trace (default 32768) 00:05:35.261 Tracepoints vary in size and can use more than one trace entry. 00:05:35.261 -e, --tpoint-group [:] 00:05:35.261 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:35.261 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:35.261 a tracepoint group. First tpoint inside a group can be enabled by 00:05:35.261 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:35.261 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:05:35.261 in /include/spdk_internal/trace_defs.h 00:05:35.261 00:05:35.261 Other options: 00:05:35.261 -h, --help show this usage 00:05:35.261 -v, --version print SPDK version 00:05:35.261 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:35.261 --env-context Opaque context for use of the env implementation 00:05:35.261 passed 00:05:35.261 00:05:35.261 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.261 suites 1 1 n/a 0 0 00:05:35.261 tests 1 1 1 0 0 00:05:35.261 asserts 8 8 8 0 n/a 00:05:35.261 00:05:35.261 Elapsed time = 0.000 seconds 00:05:35.261 [2024-05-14 21:47:35.680486] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:05:35.261 21:47:35 unittest.unittest_event -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:05:35.261 00:05:35.261 00:05:35.261 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.261 http://cunit.sourceforge.net/ 00:05:35.261 00:05:35.261 00:05:35.261 Suite: app_suite 00:05:35.261 Test: test_create_reactor ...passed 00:05:35.261 Test: test_init_reactors ...passed 00:05:35.261 Test: test_event_call ...passed 00:05:35.261 Test: test_schedule_thread ...passed 00:05:35.261 Test: test_reschedule_thread ...passed 00:05:35.261 Test: test_bind_thread ...passed 00:05:35.261 Test: test_for_each_reactor ...passed 00:05:35.261 Test: test_reactor_stats ...passed 00:05:35.261 Test: test_scheduler ...passed 00:05:35.261 Test: test_governor ...passed 00:05:35.261 00:05:35.261 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.261 suites 1 1 n/a 0 0 00:05:35.261 tests 10 10 10 0 0 00:05:35.261 asserts 336 336 336 0 n/a 00:05:35.261 00:05:35.261 Elapsed time = 0.000 seconds 00:05:35.262 00:05:35.262 real 0m0.013s 00:05:35.262 user 0m0.012s 00:05:35.262 sys 0m0.005s 00:05:35.262 21:47:35 unittest.unittest_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.262 21:47:35 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 END TEST unittest_event 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest -- unit/unittest.sh@233 -- # uname -s 00:05:35.262 21:47:35 unittest -- unit/unittest.sh@233 -- # '[' FreeBSD = Linux ']' 00:05:35.262 21:47:35 unittest -- unit/unittest.sh@237 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 START TEST unittest_accel 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest.unittest_accel -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:35.262 00:05:35.262 00:05:35.262 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.262 http://cunit.sourceforge.net/ 00:05:35.262 00:05:35.262 00:05:35.262 Suite: accel_sequence 00:05:35.262 Test: test_sequence_fill_copy ...passed 00:05:35.262 Test: test_sequence_abort ...passed 00:05:35.262 Test: test_sequence_append_error ...passed 00:05:35.262 Test: test_sequence_completion_error ...[2024-05-14 21:47:35.732166] 
/usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1902:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82da750c0 00:05:35.262 passed 00:05:35.262 Test: test_sequence_decompress ...[2024-05-14 21:47:35.732353] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1902:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x82da750c0 00:05:35.262 [2024-05-14 21:47:35.732374] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1812:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82da750c0 00:05:35.262 [2024-05-14 21:47:35.732385] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1812:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82da750c0 00:05:35.262 passed 00:05:35.262 Test: test_sequence_reverse ...passed 00:05:35.262 Test: test_sequence_copy_elision ...passed 00:05:35.262 Test: test_sequence_accel_buffers ...passed 00:05:35.262 Test: test_sequence_memory_domain ...[2024-05-14 21:47:35.733903] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1704:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:05:35.262 [2024-05-14 21:47:35.733946] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1743:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:05:35.262 passed 00:05:35.262 Test: test_sequence_module_memory_domain ...passed 00:05:35.262 Test: test_sequence_crypto ...passed 00:05:35.262 Test: test_sequence_driver ...[2024-05-14 21:47:35.734777] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1851:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82da75940 using driver: ut 00:05:35.262 [2024-05-14 21:47:35.734819] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1916:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82da75940 through driver: ut 00:05:35.262 passed 00:05:35.262 Test: test_sequence_same_iovs ...passed 00:05:35.262 Test: test_sequence_crc32 ...passed 00:05:35.262 Suite: accel 00:05:35.262 Test: test_spdk_accel_task_complete ...passed 00:05:35.262 Test: test_get_task ...passed 00:05:35.262 Test: test_spdk_accel_submit_copy ...passed 00:05:35.262 Test: test_spdk_accel_submit_dualcast ...passed 00:05:35.262 Test: test_spdk_accel_submit_compare ...passed 00:05:35.262 Test: test_spdk_accel_submit_fill ...passed 00:05:35.262 Test: test_spdk_accel_submit_crc32c ...passed 00:05:35.262 Test: test_spdk_accel_submit_crc32cv ...passed 00:05:35.262 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:05:35.262 Test: test_spdk_accel_submit_xor ...passed 00:05:35.262 Test: test_spdk_accel_module_find_by_name ...passed 00:05:35.262 Test: test_spdk_accel_module_register ...[2024-05-14 21:47:35.735449] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:35.262 [2024-05-14 21:47:35.735466] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:35.262 passed 00:05:35.262 00:05:35.262 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.262 suites 2 2 n/a 0 0 00:05:35.262 tests 26 26 26 0 0 00:05:35.262 asserts 827 827 827 0 n/a 00:05:35.262 00:05:35.262 Elapsed time = 0.008 seconds 00:05:35.262 00:05:35.262 real 0m0.011s 00:05:35.262 user 0m0.000s 00:05:35.262 sys 0m0.016s 00:05:35.262 21:47:35 unittest.unittest_accel -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:05:35.262 21:47:35 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 END TEST unittest_accel 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest -- unit/unittest.sh@238 -- # run_test unittest_ioat /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 START TEST unittest_ioat 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest.unittest_ioat -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:35.262 00:05:35.262 00:05:35.262 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.262 http://cunit.sourceforge.net/ 00:05:35.262 00:05:35.262 00:05:35.262 Suite: ioat 00:05:35.262 Test: ioat_state_check ...passed 00:05:35.262 00:05:35.262 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.262 suites 1 1 n/a 0 0 00:05:35.262 tests 1 1 1 0 0 00:05:35.262 asserts 32 32 32 0 n/a 00:05:35.262 00:05:35.262 Elapsed time = 0.000 seconds 00:05:35.262 00:05:35.262 real 0m0.004s 00:05:35.262 user 0m0.000s 00:05:35.262 sys 0m0.003s 00:05:35.262 21:47:35 unittest.unittest_ioat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.262 21:47:35 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 END TEST unittest_ioat 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:35.262 21:47:35 unittest -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 START TEST unittest_idxd_user 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest.unittest_idxd_user -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:35.262 00:05:35.262 00:05:35.262 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.262 http://cunit.sourceforge.net/ 00:05:35.262 00:05:35.262 00:05:35.262 Suite: idxd_user 00:05:35.262 Test: test_idxd_wait_cmd ...[2024-05-14 21:47:35.811937] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:35.262 passed 00:05:35.262 Test: test_idxd_reset_dev ...passed 00:05:35.262 Test: test_idxd_group_config ...passed 00:05:35.262 Test: test_idxd_wq_config ...passed 00:05:35.262 00:05:35.262 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.262 suites 1 1 n/a 0 0 00:05:35.262 tests 4 4 4 0 0 00:05:35.262 asserts 20 20 20 0 n/a 00:05:35.262 00:05:35.262 Elapsed time = 0.000 seconds 00:05:35.262 [2024-05-14 21:47:35.812369] 
/usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:05:35.262 [2024-05-14 21:47:35.812395] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:35.262 [2024-05-14 21:47:35.812406] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:05:35.262 00:05:35.262 real 0m0.005s 00:05:35.262 user 0m0.004s 00:05:35.262 sys 0m0.004s 00:05:35.262 21:47:35 unittest.unittest_idxd_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.262 21:47:35 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 END TEST unittest_idxd_user 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.262 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 START TEST unittest_iscsi 00:05:35.262 ************************************ 00:05:35.262 21:47:35 unittest.unittest_iscsi -- common/autotest_common.sh@1121 -- # unittest_iscsi 00:05:35.262 21:47:35 unittest.unittest_iscsi -- unit/unittest.sh@66 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:05:35.524 00:05:35.524 00:05:35.524 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.524 http://cunit.sourceforge.net/ 00:05:35.524 00:05:35.524 00:05:35.524 Suite: conn_suite 00:05:35.524 Test: read_task_split_in_order_case ...passed 00:05:35.524 Test: read_task_split_reverse_order_case ...passed 00:05:35.524 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:05:35.524 Test: process_non_read_task_completion_test ...passed 00:05:35.524 Test: free_tasks_on_connection ...passed 00:05:35.524 Test: free_tasks_with_queued_datain ...passed 00:05:35.524 Test: abort_queued_datain_task_test ...passed 00:05:35.524 Test: abort_queued_datain_tasks_test ...passed 00:05:35.524 00:05:35.524 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.524 suites 1 1 n/a 0 0 00:05:35.524 tests 8 8 8 0 0 00:05:35.524 asserts 230 230 230 0 n/a 00:05:35.524 00:05:35.524 Elapsed time = 0.000 seconds 00:05:35.524 21:47:35 unittest.unittest_iscsi -- unit/unittest.sh@67 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:05:35.524 00:05:35.524 00:05:35.524 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.524 http://cunit.sourceforge.net/ 00:05:35.524 00:05:35.524 00:05:35.524 Suite: iscsi_suite 00:05:35.524 Test: param_negotiation_test ...passed 00:05:35.524 Test: list_negotiation_test ...passed 00:05:35.524 Test: parse_valid_test ...passed 00:05:35.524 Test: parse_invalid_test ...[2024-05-14 21:47:35.853739] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:05:35.524 [2024-05-14 21:47:35.853891] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:05:35.524 [2024-05-14 21:47:35.853905] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:05:35.524 passed 00:05:35.524 00:05:35.524 [2024-05-14 21:47:35.853926] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:05:35.524 [2024-05-14 21:47:35.853941] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:05:35.524 [2024-05-14 21:47:35.853951] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:05:35.524 [2024-05-14 21:47:35.853960] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:05:35.524 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.524 suites 1 1 n/a 0 0 00:05:35.524 tests 4 4 4 0 0 00:05:35.524 asserts 161 161 161 0 n/a 00:05:35.524 00:05:35.524 Elapsed time = 0.000 seconds 00:05:35.524 21:47:35 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:05:35.524 00:05:35.524 00:05:35.524 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.524 http://cunit.sourceforge.net/ 00:05:35.524 00:05:35.524 00:05:35.524 Suite: iscsi_target_node_suite 00:05:35.524 Test: add_lun_test_cases ...[2024-05-14 21:47:35.858886] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:05:35.524 passed 00:05:35.524 Test: allow_any_allowed ...passed 00:05:35.524 Test: allow_ipv6_allowed ...passed 00:05:35.524 Test: allow_ipv6_denied ...passed 00:05:35.524 Test: allow_ipv6_invalid ...passed 00:05:35.524 Test: allow_ipv4_allowed ...passed 00:05:35.524 Test: allow_ipv4_denied ...passed 00:05:35.524 Test: allow_ipv4_invalid ...passed 00:05:35.524 Test: node_access_allowed ...passed 00:05:35.524 Test: node_access_denied_by_empty_netmask ...passed 00:05:35.524 Test: node_access_multi_initiator_groups_cases ...[2024-05-14 21:47:35.859041] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:05:35.524 [2024-05-14 21:47:35.859061] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:35.524 [2024-05-14 21:47:35.859071] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:35.524 [2024-05-14 21:47:35.859079] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:05:35.524 passed 00:05:35.524 Test: allow_iscsi_name_multi_maps_case ...passed 00:05:35.524 Test: chap_param_test_cases ...[2024-05-14 21:47:35.859163] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:05:35.524 passed 00:05:35.524 00:05:35.524 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.524 suites 1 1 n/a 0 0 00:05:35.524 tests 13 13 13 0 0 00:05:35.524 asserts 50 50 50 0 n/a 00:05:35.524 00:05:35.524 Elapsed time = 0.000 seconds 00:05:35.524 [2024-05-14 21:47:35.859174] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:05:35.524 [2024-05-14 21:47:35.859182] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:05:35.524 [2024-05-14 21:47:35.859190] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid 
combination of CHAP params (d=1,r=1,m=1) 00:05:35.524 [2024-05-14 21:47:35.859198] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:05:35.524 21:47:35 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:05:35.524 00:05:35.524 00:05:35.524 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.524 http://cunit.sourceforge.net/ 00:05:35.524 00:05:35.524 00:05:35.524 Suite: iscsi_suite 00:05:35.524 Test: op_login_check_target_test ...passed 00:05:35.524 Test: op_login_session_normal_test ...[2024-05-14 21:47:35.863734] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:05:35.524 [2024-05-14 21:47:35.863888] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:35.524 [2024-05-14 21:47:35.863902] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:35.524 [2024-05-14 21:47:35.863911] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:35.524 [2024-05-14 21:47:35.863936] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:05:35.524 [2024-05-14 21:47:35.863947] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:35.524 [2024-05-14 21:47:35.863967] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:05:35.524 [2024-05-14 21:47:35.863976] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:35.524 passed 00:05:35.524 Test: maxburstlength_test ...[2024-05-14 21:47:35.864421] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:35.524 [2024-05-14 21:47:35.864465] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4557:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:05:35.524 passed 00:05:35.524 Test: underflow_for_read_transfer_test ...passed 00:05:35.524 Test: underflow_for_zero_read_transfer_test ...passed 00:05:35.524 Test: underflow_for_request_sense_test ...passed 00:05:35.524 Test: underflow_for_check_condition_test ...passed 00:05:35.524 Test: add_transfer_task_test ...passed 00:05:35.524 Test: get_transfer_task_test ...passed 00:05:35.524 Test: del_transfer_task_test ...passed 00:05:35.524 Test: clear_all_transfer_tasks_test ...passed 00:05:35.524 Test: build_iovs_test ...passed 00:05:35.524 Test: build_iovs_with_md_test ...passed 00:05:35.525 Test: pdu_hdr_op_login_test ...passed 00:05:35.525 Test: pdu_hdr_op_text_test ...[2024-05-14 21:47:35.864601] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:05:35.525 [2024-05-14 21:47:35.864621] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:05:35.525 [2024-05-14 21:47:35.864631] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:05:35.525 [2024-05-14 21:47:35.864643] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2247:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:35.525 [2024-05-14 21:47:35.864652] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:05:35.525 passed 00:05:35.525 Test: pdu_hdr_op_logout_test ...passed 00:05:35.525 Test: pdu_hdr_op_scsi_test ...passed 00:05:35.525 Test: pdu_hdr_op_task_mgmt_test ...passed 00:05:35.525 Test: pdu_hdr_op_nopout_test ...[2024-05-14 21:47:35.864661] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2292:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:05:35.525 [2024-05-14 21:47:35.864680] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2523:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:05:35.525 [2024-05-14 21:47:35.864706] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:35.525 [2024-05-14 21:47:35.864720] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:35.525 [2024-05-14 21:47:35.864733] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:05:35.525 [2024-05-14 21:47:35.864748] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:35.525 [2024-05-14 21:47:35.864763] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3411:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:05:35.525 [2024-05-14 21:47:35.864778] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:05:35.525 [2024-05-14 21:47:35.864796] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:05:35.525 [2024-05-14 21:47:35.864811] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:05:35.525 [2024-05-14 21:47:35.864831] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:05:35.525 passed 00:05:35.525 Test: pdu_hdr_op_data_test ...passed 00:05:35.525 Test: empty_text_with_cbit_test ...passed 00:05:35.525 Test: pdu_payload_read_test ...[2024-05-14 21:47:35.864845] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:35.525 [2024-05-14 21:47:35.864858] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:35.525 [2024-05-14 21:47:35.864871] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:05:35.525 [2024-05-14 21:47:35.864887] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:05:35.525 [2024-05-14 21:47:35.864906] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:05:35.525 [2024-05-14 21:47:35.864920] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:35.525 [2024-05-14 21:47:35.864934] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4223:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:05:35.525 [2024-05-14 21:47:35.864949] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:05:35.525 [2024-05-14 21:47:35.864963] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:05:35.525 [2024-05-14 21:47:35.864976] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:05:35.525 [2024-05-14 21:47:35.865376] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4638:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:05:35.525 passed 00:05:35.525 Test: data_out_pdu_sequence_test ...passed 00:05:35.525 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:05:35.525 00:05:35.525 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.525 suites 1 1 n/a 0 0 00:05:35.525 tests 24 24 24 0 0 00:05:35.525 asserts 150253 150253 150253 0 n/a 00:05:35.525 00:05:35.525 Elapsed time = 0.000 seconds 00:05:35.525 21:47:35 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:05:35.525 00:05:35.525 00:05:35.525 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.525 http://cunit.sourceforge.net/ 00:05:35.525 00:05:35.525 00:05:35.525 Suite: init_grp_suite 00:05:35.525 Test: create_initiator_group_success_case ...passed 00:05:35.525 Test: find_initiator_group_success_case ...passed 00:05:35.525 Test: register_initiator_group_twice_case ...passed 00:05:35.525 Test: add_initiator_name_success_case ...passed 00:05:35.525 Test: add_initiator_name_fail_case ...[2024-05-14 21:47:35.871473] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:05:35.525 passed 00:05:35.525 Test: delete_all_initiator_names_success_case ...passed 00:05:35.525 Test: add_netmask_success_case ...passed 00:05:35.525 Test: add_netmask_fail_case ...passed 00:05:35.525 Test: delete_all_netmasks_success_case ...passed 00:05:35.525 Test: initiator_name_overwrite_all_to_any_case ...passed 00:05:35.525 Test: netmask_overwrite_all_to_any_case ...passed 00:05:35.525 Test: add_delete_initiator_names_case ...passed[2024-05-14 21:47:35.871650] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:05:35.525 00:05:35.525 Test: add_duplicated_initiator_names_case ...passed 00:05:35.525 Test: delete_nonexisting_initiator_names_case ...passed 00:05:35.525 Test: add_delete_netmasks_case ...passed 00:05:35.525 Test: add_duplicated_netmasks_case ...passed 00:05:35.525 Test: delete_nonexisting_netmasks_case ...passed 00:05:35.525 00:05:35.525 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.525 suites 1 1 n/a 0 0 00:05:35.525 tests 17 17 17 0 0 00:05:35.525 asserts 108 108 108 0 n/a 00:05:35.525 00:05:35.525 Elapsed time = 0.000 seconds 00:05:35.525 21:47:35 
unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:05:35.525 00:05:35.525 00:05:35.525 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.525 http://cunit.sourceforge.net/ 00:05:35.525 00:05:35.525 00:05:35.525 Suite: portal_grp_suite 00:05:35.525 Test: portal_create_ipv4_normal_case ...passed 00:05:35.525 Test: portal_create_ipv6_normal_case ...passed 00:05:35.525 Test: portal_create_ipv4_wildcard_case ...passed 00:05:35.525 Test: portal_create_ipv6_wildcard_case ...passed 00:05:35.525 Test: portal_create_twice_case ...passed 00:05:35.525 Test: portal_grp_register_unregister_case ...passed 00:05:35.525 Test: portal_grp_register_twice_case ...passed 00:05:35.525 Test: portal_grp_add_delete_case ...[2024-05-14 21:47:35.876714] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:05:35.525 passed 00:05:35.525 Test: portal_grp_add_delete_twice_case ...passed 00:05:35.525 00:05:35.525 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.525 suites 1 1 n/a 0 0 00:05:35.525 tests 9 9 9 0 0 00:05:35.525 asserts 44 44 44 0 n/a 00:05:35.525 00:05:35.525 Elapsed time = 0.000 seconds 00:05:35.525 00:05:35.525 real 0m0.033s 00:05:35.525 user 0m0.000s 00:05:35.525 sys 0m0.034s 00:05:35.525 21:47:35 unittest.unittest_iscsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.525 21:47:35 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:05:35.525 ************************************ 00:05:35.525 END TEST unittest_iscsi 00:05:35.525 ************************************ 00:05:35.525 21:47:35 unittest -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:05:35.525 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.525 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.525 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.525 ************************************ 00:05:35.525 START TEST unittest_json 00:05:35.525 ************************************ 00:05:35.525 21:47:35 unittest.unittest_json -- common/autotest_common.sh@1121 -- # unittest_json 00:05:35.525 21:47:35 unittest.unittest_json -- unit/unittest.sh@75 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:05:35.525 00:05:35.525 00:05:35.525 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.525 http://cunit.sourceforge.net/ 00:05:35.525 00:05:35.525 00:05:35.525 Suite: json 00:05:35.525 Test: test_parse_literal ...passed 00:05:35.525 Test: test_parse_string_simple ...passed 00:05:35.525 Test: test_parse_string_control_chars ...passed 00:05:35.525 Test: test_parse_string_utf8 ...passed 00:05:35.525 Test: test_parse_string_escapes_twochar ...passed 00:05:35.525 Test: test_parse_string_escapes_unicode ...passed 00:05:35.525 Test: test_parse_number ...passed 00:05:35.525 Test: test_parse_array ...passed 00:05:35.525 Test: test_parse_object ...passed 00:05:35.525 Test: test_parse_nesting ...passed 00:05:35.525 Test: test_parse_comment ...passed 00:05:35.525 00:05:35.525 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.525 suites 1 1 n/a 0 0 00:05:35.525 tests 11 11 11 0 0 00:05:35.525 asserts 1516 1516 1516 0 n/a 00:05:35.525 00:05:35.525 Elapsed time = 0.000 seconds 00:05:35.525 21:47:35 unittest.unittest_json -- unit/unittest.sh@76 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:05:35.525 00:05:35.525 00:05:35.525 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.525 http://cunit.sourceforge.net/ 00:05:35.525 00:05:35.525 00:05:35.525 Suite: json 00:05:35.525 Test: test_strequal ...passed 00:05:35.525 Test: test_num_to_uint16 ...passed 00:05:35.525 Test: test_num_to_int32 ...passed 00:05:35.525 Test: test_num_to_uint64 ...passed 00:05:35.525 Test: test_decode_object ...passed 00:05:35.525 Test: test_decode_array ...passed 00:05:35.526 Test: test_decode_bool ...passed 00:05:35.526 Test: test_decode_uint16 ...passed 00:05:35.526 Test: test_decode_int32 ...passed 00:05:35.526 Test: test_decode_uint32 ...passed 00:05:35.526 Test: test_decode_uint64 ...passed 00:05:35.526 Test: test_decode_string ...passed 00:05:35.526 Test: test_decode_uuid ...passed 00:05:35.526 Test: test_find ...passed 00:05:35.526 Test: test_find_array ...passed 00:05:35.526 Test: test_iterating ...passed 00:05:35.526 Test: test_free_object ...passed 00:05:35.526 00:05:35.526 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.526 suites 1 1 n/a 0 0 00:05:35.526 tests 17 17 17 0 0 00:05:35.526 asserts 236 236 236 0 n/a 00:05:35.526 00:05:35.526 Elapsed time = 0.000 seconds 00:05:35.526 21:47:35 unittest.unittest_json -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:05:35.526 00:05:35.526 00:05:35.526 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.526 http://cunit.sourceforge.net/ 00:05:35.526 00:05:35.526 00:05:35.526 Suite: json 00:05:35.526 Test: test_write_literal ...passed 00:05:35.526 Test: test_write_string_simple ...passed 00:05:35.526 Test: test_write_string_escapes ...passed 00:05:35.526 Test: test_write_string_utf16le ...passed 00:05:35.526 Test: test_write_number_int32 ...passed 00:05:35.526 Test: test_write_number_uint32 ...passed 00:05:35.526 Test: test_write_number_uint128 ...passed 00:05:35.526 Test: test_write_string_number_uint128 ...passed 00:05:35.526 Test: test_write_number_int64 ...passed 00:05:35.526 Test: test_write_number_uint64 ...passed 00:05:35.526 Test: test_write_number_double ...passed 00:05:35.526 Test: test_write_uuid ...passed 00:05:35.526 Test: test_write_array ...passed 00:05:35.526 Test: test_write_object ...passed 00:05:35.526 Test: test_write_nesting ...passed 00:05:35.526 Test: test_write_val ...passed 00:05:35.526 00:05:35.526 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.526 suites 1 1 n/a 0 0 00:05:35.526 tests 16 16 16 0 0 00:05:35.526 asserts 918 918 918 0 n/a 00:05:35.526 00:05:35.526 Elapsed time = 0.000 seconds 00:05:35.526 21:47:35 unittest.unittest_json -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:05:35.526 00:05:35.526 00:05:35.526 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.526 http://cunit.sourceforge.net/ 00:05:35.526 00:05:35.526 00:05:35.526 Suite: jsonrpc 00:05:35.526 Test: test_parse_request ...passed 00:05:35.526 Test: test_parse_request_streaming ...passed 00:05:35.526 00:05:35.526 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.526 suites 1 1 n/a 0 0 00:05:35.526 tests 2 2 2 0 0 00:05:35.526 asserts 289 289 289 0 n/a 00:05:35.526 00:05:35.526 Elapsed time = 0.000 seconds 00:05:35.526 00:05:35.526 real 0m0.020s 00:05:35.526 user 0m0.003s 00:05:35.526 sys 0m0.016s 00:05:35.526 21:47:35 unittest.unittest_json -- common/autotest_common.sh@1122 -- 
# xtrace_disable 00:05:35.526 ************************************ 00:05:35.526 END TEST unittest_json 00:05:35.526 ************************************ 00:05:35.526 21:47:35 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.526 21:47:35 unittest -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:05:35.526 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.526 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.526 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.526 ************************************ 00:05:35.526 START TEST unittest_rpc 00:05:35.526 ************************************ 00:05:35.526 21:47:35 unittest.unittest_rpc -- common/autotest_common.sh@1121 -- # unittest_rpc 00:05:35.526 21:47:35 unittest.unittest_rpc -- unit/unittest.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:05:35.526 00:05:35.526 00:05:35.526 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.526 http://cunit.sourceforge.net/ 00:05:35.526 00:05:35.526 00:05:35.526 Suite: rpc 00:05:35.526 Test: test_jsonrpc_handler ...passed 00:05:35.526 Test: test_spdk_rpc_is_method_allowed ...passed 00:05:35.526 Test: test_rpc_get_methods ...passed 00:05:35.526 Test: test_rpc_spdk_get_version ...passed 00:05:35.526 Test: test_spdk_rpc_listen_close ...passed 00:05:35.526 Test: test_rpc_run_multiple_servers ...passed 00:05:35.526 00:05:35.526 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.526 suites 1 1 n/a 0 0 00:05:35.526 tests 6 6 6 0 0 00:05:35.526 asserts 23 23 23 0 n/a 00:05:35.526 00:05:35.526 Elapsed time = 0.000 seconds 00:05:35.526 [2024-05-14 21:47:35.961945] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:05:35.526 00:05:35.526 real 0m0.004s 00:05:35.526 user 0m0.003s 00:05:35.526 sys 0m0.003s 00:05:35.526 21:47:35 unittest.unittest_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.526 21:47:35 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.526 ************************************ 00:05:35.526 END TEST unittest_rpc 00:05:35.526 ************************************ 00:05:35.526 21:47:35 unittest -- unit/unittest.sh@245 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:35.526 21:47:35 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.526 21:47:35 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.526 21:47:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.526 ************************************ 00:05:35.526 START TEST unittest_notify 00:05:35.526 ************************************ 00:05:35.526 21:47:35 unittest.unittest_notify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:35.526 00:05:35.526 00:05:35.526 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.526 http://cunit.sourceforge.net/ 00:05:35.526 00:05:35.526 00:05:35.526 Suite: app_suite 00:05:35.526 Test: notify ...passed 00:05:35.526 00:05:35.526 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.526 suites 1 1 n/a 0 0 00:05:35.526 tests 1 1 1 0 0 00:05:35.526 asserts 13 13 13 0 n/a 00:05:35.526 00:05:35.526 Elapsed time = 0.000 seconds 00:05:35.526 00:05:35.526 real 0m0.004s 00:05:35.526 user 0m0.004s 00:05:35.526 sys 0m0.003s 00:05:35.526 21:47:35 
unittest.unittest_notify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.526 ************************************ 00:05:35.526 END TEST unittest_notify 00:05:35.526 ************************************ 00:05:35.526 21:47:35 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:05:35.526 21:47:36 unittest -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:05:35.526 21:47:36 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.526 21:47:36 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.526 21:47:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:35.526 ************************************ 00:05:35.526 START TEST unittest_nvme 00:05:35.526 ************************************ 00:05:35.526 21:47:36 unittest.unittest_nvme -- common/autotest_common.sh@1121 -- # unittest_nvme 00:05:35.526 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@86 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:05:35.526 00:05:35.526 00:05:35.526 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.526 http://cunit.sourceforge.net/ 00:05:35.526 00:05:35.526 00:05:35.526 Suite: nvme 00:05:35.526 Test: test_opc_data_transfer ...passed 00:05:35.526 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:05:35.526 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:05:35.526 Test: test_trid_parse_and_compare ...[2024-05-14 21:47:36.032599] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:05:35.526 [2024-05-14 21:47:36.032853] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:35.526 [2024-05-14 21:47:36.032875] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1189:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:05:35.526 [2024-05-14 21:47:36.032886] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:35.526 [2024-05-14 21:47:36.032897] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:05:35.526 [2024-05-14 21:47:36.032907] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:35.526 passed 00:05:35.526 Test: test_trid_trtype_str ...passed 00:05:35.526 Test: test_trid_adrfam_str ...passed 00:05:35.526 Test: test_nvme_ctrlr_probe ...passed 00:05:35.526 Test: test_spdk_nvme_probe ...passed 00:05:35.526 Test: test_spdk_nvme_connect ...[2024-05-14 21:47:36.033005] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:35.526 [2024-05-14 21:47:36.033033] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:35.526 [2024-05-14 21:47:36.033045] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:35.526 [2024-05-14 21:47:36.033058] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:05:35.526 [2024-05-14 21:47:36.033069] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:35.526 [2024-05-14 21:47:36.033090] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: 
*ERROR*: No transport ID specified 00:05:35.526 passed 00:05:35.526 Test: test_nvme_ctrlr_probe_internal ...[2024-05-14 21:47:36.033144] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:35.526 [2024-05-14 21:47:36.033165] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:05:35.526 [2024-05-14 21:47:36.033227] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:35.526 passed 00:05:35.526 Test: test_nvme_init_controllers ...passed 00:05:35.527 Test: test_nvme_driver_init ...[2024-05-14 21:47:36.033252] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:05:35.527 [2024-05-14 21:47:36.033282] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:05:35.527 [2024-05-14 21:47:36.033330] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:05:35.527 [2024-05-14 21:47:36.033353] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:35.786 [2024-05-14 21:47:36.143594] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:05:35.786 passed 00:05:35.786 Test: test_spdk_nvme_detach ...passed 00:05:35.786 Test: test_nvme_completion_poll_cb ...passed 00:05:35.786 Test: test_nvme_user_copy_cmd_complete ...passed 00:05:35.786 Test: test_nvme_allocate_request_null ...passed 00:05:35.786 Test: test_nvme_allocate_request ...passed 00:05:35.786 Test: test_nvme_free_request ...passed 00:05:35.786 Test: test_nvme_allocate_request_user_copy ...passed 00:05:35.786 Test: test_nvme_robust_mutex_init_shared ...passed 00:05:35.786 Test: test_nvme_request_check_timeout ...passed 00:05:35.786 Test: test_nvme_wait_for_completion ...passed 00:05:35.786 Test: test_spdk_nvme_parse_func ...passed 00:05:35.786 Test: test_spdk_nvme_detach_async ...passed 00:05:35.786 Test: test_nvme_parse_addr ...passed 00:05:35.786 00:05:35.786 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.786 suites 1 1 n/a 0 0 00:05:35.786 tests 25 25 25 0 0 00:05:35.786 asserts 326 326 326 0 n/a 00:05:35.786 00:05:35.786 Elapsed time = 0.000 seconds 00:05:35.786 [2024-05-14 21:47:36.143782] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:05:35.786 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@87 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:05:35.786 00:05:35.786 00:05:35.786 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.786 http://cunit.sourceforge.net/ 00:05:35.786 00:05:35.786 00:05:35.786 Suite: nvme_ctrlr 00:05:35.786 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-05-14 21:47:36.149355] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.786 passed 00:05:35.786 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-05-14 21:47:36.150763] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.786 passed 00:05:35.786 Test: 
test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-05-14 21:47:36.151930] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.786 passed 00:05:35.786 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-05-14 21:47:36.153098] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.786 passed 00:05:35.786 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-05-14 21:47:36.154279] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.155428] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-14 21:47:36.156561] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-14 21:47:36.157702] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:35.787 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-05-14 21:47:36.160032] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.162327] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-14 21:47:36.163490] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:35.787 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-05-14 21:47:36.165762] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.166929] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-14 21:47:36.169240] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:35.787 Test: test_nvme_ctrlr_init_delay ...[2024-05-14 21:47:36.171559] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_alloc_io_qpair_rr_1 ...[2024-05-14 21:47:36.172733] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.172775] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:35.787 [2024-05-14 21:47:36.172794] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:35.787 passed 00:05:35.787 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:05:35.787 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:05:35.787 Test: 
test_alloc_io_qpair_wrr_1 ...passed 00:05:35.787 Test: test_alloc_io_qpair_wrr_2 ...passed 00:05:35.787 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-05-14 21:47:36.172805] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:35.787 [2024-05-14 21:47:36.172815] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:35.787 [2024-05-14 21:47:36.172876] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.172897] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.172911] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:35.787 [2024-05-14 21:47:36.172933] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:05:35.787 [2024-05-14 21:47:36.172944] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:35.787 [2024-05-14 21:47:36.172953] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_fail ...passed 00:05:35.787 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:05:35.787 Test: test_nvme_ctrlr_set_supported_features ...passed 00:05:35.787 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:05:35.787 Test: test_nvme_ctrlr_test_active_ns ...[2024-05-14 21:47:36.172963] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:35.787 [2024-05-14 21:47:36.172974] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:05:35.787 [2024-05-14 21:47:36.173015] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:05:35.787 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:05:35.787 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:05:35.787 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-05-14 21:47:36.212416] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-05-14 21:47:36.219321] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-05-14 21:47:36.220546] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.220599] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2884:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:05:35.787 passed 00:05:35.787 Test: test_alloc_io_qpair_fail ...[2024-05-14 21:47:36.221746] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 [2024-05-14 21:47:36.221792] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_add_remove_process ...passed 00:05:35.787 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:05:35.787 Test: test_nvme_ctrlr_set_state ...[2024-05-14 21:47:36.221838] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1479:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
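Note on the *ERROR* lines in the ctrlr and qpair suites above: they are expected output. Each unit test deliberately feeds the function under test an invalid argument or state and asserts that the failure branch is taken, so the error message is printed by the library code while the test itself reports passed. Below is a minimal sketch of that pattern in the CUnit framework these ut binaries use; set_queue_size() and its message are hypothetical stand-ins for illustration, not the actual SPDK test code.

```c
/* Sketch of a failure-path unit test in the CUnit style seen in this log.
 * set_queue_size() is a hypothetical stand-in for the function under test. */
#include <CUnit/Basic.h>
#include <errno.h>
#include <stdio.h>

static int
set_queue_size(unsigned int size)
{
	if (size == 0) {
		/* The *ERROR* text the test expects to see in the log. */
		fprintf(stderr, "*ERROR*: queue size 0 is less than the minimum\n");
		return -EINVAL;
	}
	return 0;
}

static void
test_queue_size_zero_rejected(void)
{
	/* Drive the error branch on purpose and assert it is reported. */
	CU_ASSERT(set_queue_size(0) == -EINVAL);
	/* A valid value must still succeed. */
	CU_ASSERT(set_queue_size(128) == 0);
}

int
main(void)
{
	CU_pSuite suite;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	suite = CU_add_suite("queue_size", NULL, NULL);
	if (suite == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_add_test(suite, "test_queue_size_zero_rejected", test_queue_size_zero_rejected);
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	CU_cleanup_registry();
	/* Non-zero exit when any assert failed, as the run_test wrappers expect. */
	return CU_get_number_of_failures();
}
```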
00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-05-14 21:47:36.221859] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-05-14 21:47:36.225157] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_ns_mgmt ...[2024-05-14 21:47:36.231886] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_reset ...[2024-05-14 21:47:36.233091] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_aer_callback ...[2024-05-14 21:47:36.233158] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-05-14 21:47:36.234363] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:05:35.787 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:05:35.787 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-05-14 21:47:36.235623] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:05:35.787 Test: test_nvme_ctrlr_ana_resize ...[2024-05-14 21:47:36.236803] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:05:35.787 Test: test_nvme_transport_ctrlr_ready ...[2024-05-14 21:47:36.238004] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:05:35.787 [2024-05-14 21:47:36.238043] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4081:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:05:35.787 passed 00:05:35.787 Test: test_nvme_ctrlr_disable ...[2024-05-14 21:47:36.238064] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:35.787 passed 00:05:35.787 00:05:35.787 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.787 suites 1 1 n/a 0 0 00:05:35.787 tests 43 43 43 0 0 00:05:35.787 asserts 10418 10418 10418 0 n/a 00:05:35.787 00:05:35.787 Elapsed time = 0.039 seconds 00:05:35.787 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@88 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:05:35.787 00:05:35.787 00:05:35.787 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.787 http://cunit.sourceforge.net/ 00:05:35.787 00:05:35.787 00:05:35.787 Suite: nvme_ctrlr_cmd 00:05:35.787 Test: test_get_log_pages ...passed 00:05:35.787 Test: test_set_feature_cmd ...passed 00:05:35.787 Test: test_set_feature_ns_cmd ...passed 00:05:35.787 Test: test_get_feature_cmd ...passed 00:05:35.787 Test: test_get_feature_ns_cmd ...passed 00:05:35.787 Test: test_abort_cmd ...passed 00:05:35.787 Test: test_set_host_id_cmds ...[2024-05-14 21:47:36.245957] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:05:35.787 passed 00:05:35.787 Test: test_io_cmd_raw_no_payload_build ...passed 00:05:35.787 Test: test_io_raw_cmd ...passed 00:05:35.787 Test: test_io_raw_cmd_with_md ...passed 00:05:35.787 Test: test_namespace_attach ...passed 00:05:35.787 Test: test_namespace_detach ...passed 00:05:35.787 Test: test_namespace_create ...passed 00:05:35.787 Test: test_namespace_delete ...passed 00:05:35.787 Test: test_doorbell_buffer_config ...passed 00:05:35.787 Test: test_format_nvme ...passed 00:05:35.787 Test: test_fw_commit ...passed 00:05:35.787 Test: test_fw_image_download ...passed 00:05:35.787 Test: test_sanitize ...passed 00:05:35.787 Test: test_directive ...passed 00:05:35.787 Test: test_nvme_request_add_abort ...passed 00:05:35.787 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:05:35.787 Test: test_nvme_ctrlr_cmd_identify ...passed 00:05:35.787 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:05:35.787 00:05:35.787 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.787 suites 1 1 n/a 0 0 00:05:35.787 tests 24 24 24 0 0 00:05:35.787 asserts 198 198 198 0 n/a 00:05:35.787 00:05:35.787 Elapsed time = 0.000 seconds 00:05:35.788 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:05:35.788 00:05:35.788 00:05:35.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.788 http://cunit.sourceforge.net/ 00:05:35.788 00:05:35.788 00:05:35.788 Suite: nvme_ctrlr_cmd 00:05:35.788 Test: test_geometry_cmd ...passed 00:05:35.788 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:05:35.788 00:05:35.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.788 suites 1 1 n/a 0 0 00:05:35.788 tests 2 2 2 0 0 00:05:35.788 asserts 7 7 7 0 n/a 00:05:35.788 00:05:35.788 Elapsed time = 0.000 seconds 00:05:35.788 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:05:35.788 00:05:35.788 00:05:35.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.788 http://cunit.sourceforge.net/ 00:05:35.788 00:05:35.788 00:05:35.788 Suite: nvme 00:05:35.788 Test: test_nvme_ns_construct ...passed 00:05:35.788 Test: test_nvme_ns_uuid ...passed 00:05:35.788 Test: test_nvme_ns_csi ...passed 00:05:35.788 Test: test_nvme_ns_data ...passed 00:05:35.788 Test: test_nvme_ns_set_identify_data ...passed 00:05:35.788 Test: test_spdk_nvme_ns_get_values ...passed 00:05:35.788 Test: test_spdk_nvme_ns_is_active ...passed 00:05:35.788 Test: spdk_nvme_ns_supports ...passed 00:05:35.788 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:05:35.788 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 
00:05:35.788 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:05:35.788 Test: test_nvme_ns_find_id_desc ...passed 00:05:35.788 00:05:35.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.788 suites 1 1 n/a 0 0 00:05:35.788 tests 12 12 12 0 0 00:05:35.788 asserts 83 83 83 0 n/a 00:05:35.788 00:05:35.788 Elapsed time = 0.000 seconds 00:05:35.788 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:05:35.788 00:05:35.788 00:05:35.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.788 http://cunit.sourceforge.net/ 00:05:35.788 00:05:35.788 00:05:35.788 Suite: nvme_ns_cmd 00:05:35.788 Test: split_test ...passed 00:05:35.788 Test: split_test2 ...passed 00:05:35.788 Test: split_test3 ...passed 00:05:35.788 Test: split_test4 ...passed 00:05:35.788 Test: test_nvme_ns_cmd_flush ...passed 00:05:35.788 Test: test_nvme_ns_cmd_dataset_management ...passed 00:05:35.788 Test: test_nvme_ns_cmd_copy ...passed 00:05:35.788 Test: test_io_flags ...passed 00:05:35.788 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:05:35.788 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:05:35.788 Test: test_nvme_ns_cmd_reservation_register ...passed 00:05:35.788 Test: test_nvme_ns_cmd_reservation_release ...passed 00:05:35.788 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:05:35.788 Test: test_nvme_ns_cmd_reservation_report ...passed 00:05:35.788 Test: test_cmd_child_request ...passed 00:05:35.788 Test: test_nvme_ns_cmd_readv ...passed 00:05:35.788 Test: test_nvme_ns_cmd_read_with_md ...passed 00:05:35.788 Test: test_nvme_ns_cmd_writev ...passed 00:05:35.788 Test: test_nvme_ns_cmd_write_with_md ...passed[2024-05-14 21:47:36.259735] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:05:35.788 [2024-05-14 21:47:36.259977] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:05:35.788 00:05:35.788 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:05:35.788 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:05:35.788 Test: test_nvme_ns_cmd_comparev ...passed 00:05:35.788 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:05:35.788 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:05:35.788 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:05:35.788 Test: test_nvme_ns_cmd_setup_request ...passed 00:05:35.788 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:05:35.788 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:05:35.788 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:05:35.788 Test: test_nvme_ns_cmd_verify ...passed 00:05:35.788 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:05:35.788 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:05:35.788 00:05:35.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.788 suites 1 1 n/a 0 0 00:05:35.788 tests 32 32 32 0 0 00:05:35.788 asserts 550 550 550 0 n/a 00:05:35.788 00:05:35.788 Elapsed time = 0.000 seconds 00:05:35.788 [2024-05-14 21:47:36.260089] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:35.788 [2024-05-14 21:47:36.260107] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:35.788 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@92 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:05:35.788 00:05:35.788 00:05:35.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.788 http://cunit.sourceforge.net/ 00:05:35.788 00:05:35.788 00:05:35.788 Suite: nvme_ns_cmd 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:05:35.788 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:05:35.788 00:05:35.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.788 suites 1 1 n/a 0 0 00:05:35.788 tests 12 12 12 0 0 00:05:35.788 asserts 123 123 123 0 n/a 00:05:35.788 00:05:35.788 Elapsed time = 0.000 seconds 00:05:35.788 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:05:35.788 00:05:35.788 00:05:35.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.788 http://cunit.sourceforge.net/ 00:05:35.788 00:05:35.788 00:05:35.788 Suite: nvme_qpair 00:05:35.788 Test: test3 ...passed 00:05:35.788 Test: test_ctrlr_failed ...passed 00:05:35.788 Test: struct_packing ...passed 00:05:35.788 Test: test_nvme_qpair_process_completions ...[2024-05-14 21:47:36.270090] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:35.788 passed 00:05:35.788 Test: test_nvme_completion_is_retry ...passed 00:05:35.788 Test: test_get_status_string ...passed 00:05:35.788 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:05:35.788 Test: test_nvme_qpair_submit_request ...passed 00:05:35.788 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:05:35.788 Test: test_nvme_qpair_manual_complete_request ...passed 00:05:35.788 Test: test_nvme_qpair_init_deinit ...[2024-05-14 21:47:36.270273] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:35.788 [2024-05-14 21:47:36.270317] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:05:35.788 [2024-05-14 21:47:36.270329] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:05:35.788 passed 00:05:35.788 Test: test_nvme_get_sgl_print_info ...passed[2024-05-14 21:47:36.270370] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:35.788 00:05:35.788 00:05:35.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.788 suites 1 1 n/a 0 0 00:05:35.788 tests 12 12 12 0 0 00:05:35.788 asserts 154 154 154 0 n/a 00:05:35.788 
00:05:35.788 Elapsed time = 0.000 seconds 00:05:35.788 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:05:35.788 00:05:35.788 00:05:35.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.788 http://cunit.sourceforge.net/ 00:05:35.788 00:05:35.788 00:05:35.788 Suite: nvme_pcie 00:05:35.788 Test: test_prp_list_append ...[2024-05-14 21:47:36.274552] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:35.788 [2024-05-14 21:47:36.274762] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:05:35.788 [2024-05-14 21:47:36.274784] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:05:35.788 [2024-05-14 21:47:36.274841] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:35.788 [2024-05-14 21:47:36.274870] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:35.788 passed 00:05:35.788 Test: test_nvme_pcie_hotplug_monitor ...passed 00:05:35.788 Test: test_shadow_doorbell_update ...passed 00:05:35.788 Test: test_build_contig_hw_sgl_request ...passed 00:05:35.788 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:05:35.788 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:05:35.788 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:05:35.788 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:05:35.788 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:05:35.788 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:05:35.788 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:05:35.788 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:05:35.788 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-05-14 21:47:36.274974] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:35.788 [2024-05-14 21:47:36.275008] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:05:35.789 [2024-05-14 21:47:36.275023] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:05:35.789 passed 00:05:35.789 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:05:35.789 00:05:35.789 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.789 suites 1 1 n/a 0 0 00:05:35.789 tests 14 14 14 0 0 00:05:35.789 asserts 235 235 235 0 n/a 00:05:35.789 00:05:35.789 Elapsed time = 0.000 seconds 00:05:35.789 [2024-05-14 21:47:36.275035] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:05:35.789 [2024-05-14 21:47:36.275045] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:05:35.789 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:05:35.789 00:05:35.789 00:05:35.789 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.789 http://cunit.sourceforge.net/ 00:05:35.789 00:05:35.789 00:05:35.789 Suite: nvme_ns_cmd 00:05:35.789 Test: nvme_poll_group_create_test ...passed 00:05:35.789 Test: nvme_poll_group_add_remove_test ...passed 00:05:35.789 Test: nvme_poll_group_process_completions ...passed 00:05:35.789 Test: nvme_poll_group_destroy_test ...passed 00:05:35.789 Test: nvme_poll_group_get_free_stats ...passed 00:05:35.789 00:05:35.789 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.789 suites 1 1 n/a 0 0 00:05:35.789 tests 5 5 5 0 0 00:05:35.789 asserts 75 75 75 0 n/a 00:05:35.789 00:05:35.789 Elapsed time = 0.000 seconds 00:05:35.789 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:05:35.789 00:05:35.789 00:05:35.789 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.789 http://cunit.sourceforge.net/ 00:05:35.789 00:05:35.789 00:05:35.789 Suite: nvme_quirks 00:05:35.789 Test: test_nvme_quirks_striping ...passed 00:05:35.789 00:05:35.789 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.789 suites 1 1 n/a 0 0 00:05:35.789 tests 1 1 1 0 0 00:05:35.789 asserts 5 5 5 0 n/a 00:05:35.789 00:05:35.789 Elapsed time = 0.000 seconds 00:05:35.789 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:05:35.789 00:05:35.789 00:05:35.789 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.789 http://cunit.sourceforge.net/ 00:05:35.789 00:05:35.789 00:05:35.789 Suite: nvme_tcp 00:05:35.789 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:05:35.789 Test: test_nvme_tcp_build_iovs ...passed 00:05:35.789 Test: test_nvme_tcp_build_sgl_request ...[2024-05-14 21:47:36.288794] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82053b5b8, and the iovcnt=16, remaining_size=28672 00:05:35.789 passed 00:05:35.789 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:05:35.789 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:05:35.789 Test: test_nvme_tcp_req_complete_safe ...passed 00:05:35.789 Test: test_nvme_tcp_req_get ...passed 00:05:35.789 Test: test_nvme_tcp_req_init ...passed 00:05:35.789 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:05:35.789 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:05:35.789 Test: test_nvme_tcp_qpair_set_recv_state 
...passed 00:05:35.789 Test: test_nvme_tcp_alloc_reqs ...passed 00:05:35.789 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-05-14 21:47:36.289169] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(6) to be set 00:05:35.789 [2024-05-14 21:47:36.289255] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 passed 00:05:35.789 Test: test_nvme_tcp_pdu_ch_handle ...[2024-05-14 21:47:36.289289] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x82053c8d8 00:05:35.789 [2024-05-14 21:47:36.289311] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1227:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:05:35.789 [2024-05-14 21:47:36.289332] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.289352] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:05:35.789 [2024-05-14 21:47:36.289370] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.289392] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:05:35.789 [2024-05-14 21:47:36.289412] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 passed 00:05:35.789 Test: test_nvme_tcp_qpair_connect_sock ...[2024-05-14 21:47:36.289432] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.289451] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.289481] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.289507] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.289528] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.289590] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:05:35.789 [2024-05-14 21:47:36.289613] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:05:35.789 [2024-05-14 21:47:36.328962] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:05:35.789 passed 00:05:35.789 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:05:35.789 Test: test_nvme_tcp_c2h_payload_handle ...[2024-05-14 21:47:36.329034] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x82053cd10): PDU Sequence Error 00:05:35.789 passed 00:05:35.789 Test: test_nvme_tcp_icresp_handle ...passed 00:05:35.789 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:05:35.789 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:05:35.789 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:05:35.789 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-05-14 21:47:36.329060] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:05:35.789 [2024-05-14 21:47:36.329077] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1575:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:05:35.789 [2024-05-14 21:47:36.329089] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.329097] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:05:35.789 [2024-05-14 21:47:36.329110] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.329122] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053d148 is same with the state(0) to be set 00:05:35.789 [2024-05-14 21:47:36.329141] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x82053cd10): PDU Sequence Error 00:05:35.789 [2024-05-14 21:47:36.329168] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x82053d148 00:05:35.789 [2024-05-14 21:47:36.329238] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x82053aea8, errno=0, rc=0 00:05:35.789 [2024-05-14 21:47:36.329258] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053aea8 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.329276] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82053aea8 is same with the state(5) to be set 00:05:35.789 [2024-05-14 21:47:36.329336] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82053aea8 (0): No error: 0 00:05:35.789 [2024-05-14 21:47:36.329364] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82053aea8 (0): No error: 0 00:05:35.789 passed 00:05:36.049 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-05-14 21:47:36.395199] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:05:36.049 [2024-05-14 21:47:36.395258] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:36.049 passed 00:05:36.049 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:05:36.049 Test: test_nvme_tcp_poll_group_get_stats ...[2024-05-14 21:47:36.395297] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:36.049 [2024-05-14 21:47:36.395308] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:36.049 passed 00:05:36.049 Test: test_nvme_tcp_ctrlr_construct ...passed 00:05:36.049 Test: test_nvme_tcp_qpair_submit_request ...passed 00:05:36.049 00:05:36.049 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.049 suites 1 1 n/a 0 0 00:05:36.049 tests 27 27 27 0 0 00:05:36.049 asserts 624 624 624 0 n/a 00:05:36.049 00:05:36.049 Elapsed time = 0.070 seconds 00:05:36.049 [2024-05-14 21:47:36.395379] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:36.049 [2024-05-14 21:47:36.395392] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:36.049 [2024-05-14 21:47:36.395405] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:05:36.049 [2024-05-14 21:47:36.395414] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:36.049 [2024-05-14 21:47:36.395432] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2375:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82cf81000 with addr=192.168.1.78, port=23 00:05:36.049 [2024-05-14 21:47:36.395441] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:36.049 [2024-05-14 21:47:36.395460] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82cf54180, and the iovcnt=1, remaining_size=1024 00:05:36.049 [2024-05-14 21:47:36.395470] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:05:36.049 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:05:36.049 00:05:36.049 00:05:36.049 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.049 http://cunit.sourceforge.net/ 00:05:36.049 00:05:36.049 00:05:36.049 Suite: nvme_transport 00:05:36.049 Test: test_nvme_get_transport ...passed 00:05:36.049 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:05:36.049 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:05:36.049 Test: test_nvme_transport_poll_group_add_remove ...passed 00:05:36.049 Test: test_ctrlr_get_memory_domains ...passed 00:05:36.049 00:05:36.049 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.049 suites 1 1 n/a 0 0 00:05:36.049 tests 5 5 5 0 0 00:05:36.049 asserts 28 28 28 0 n/a 00:05:36.049 00:05:36.049 Elapsed time = 0.000 seconds 00:05:36.049 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@99 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:05:36.049 00:05:36.049 00:05:36.049 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.049 http://cunit.sourceforge.net/ 00:05:36.049 00:05:36.049 00:05:36.049 Suite: nvme_io_msg 00:05:36.049 Test: test_nvme_io_msg_send ...passed 00:05:36.049 Test: test_nvme_io_msg_process ...passed 00:05:36.049 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:05:36.049 00:05:36.049 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.049 suites 1 1 n/a 0 0 00:05:36.049 tests 3 3 3 0 0 00:05:36.049 asserts 56 56 56 0 n/a 00:05:36.049 00:05:36.049 Elapsed time = 0.000 seconds 00:05:36.049 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:05:36.049 00:05:36.049 00:05:36.049 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.049 http://cunit.sourceforge.net/ 00:05:36.049 00:05:36.049 00:05:36.049 Suite: nvme_pcie_common 00:05:36.049 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-05-14 21:47:36.413993] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:05:36.049 passed 00:05:36.049 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:05:36.049 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:05:36.049 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:05:36.049 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-05-14 21:47:36.414320] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:05:36.049 [2024-05-14 21:47:36.414348] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:05:36.049 [2024-05-14 21:47:36.414367] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:05:36.049 passed 00:05:36.049 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:05:36.049 00:05:36.049 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.049 suites 1 1 n/a 0 0 00:05:36.049 tests 6 6 6 0 0 00:05:36.049 asserts 148 148 148 0 n/a 00:05:36.049 00:05:36.049 Elapsed time = 0.000 seconds 00:05:36.049 [2024-05-14 21:47:36.414490] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:36.049 [2024-05-14 21:47:36.414518] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:36.049 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:05:36.049 00:05:36.049 00:05:36.049 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.049 http://cunit.sourceforge.net/ 00:05:36.049 00:05:36.049 00:05:36.049 Suite: nvme_fabric 00:05:36.049 Test: test_nvme_fabric_prop_set_cmd ...passed 00:05:36.049 Test: test_nvme_fabric_prop_get_cmd ...passed 00:05:36.049 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:05:36.049 Test: test_nvme_fabric_discover_probe ...passed 00:05:36.049 Test: test_nvme_fabric_qpair_connect ...[2024-05-14 21:47:36.419051] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:05:36.049 passed 00:05:36.049 00:05:36.049 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.049 suites 1 1 n/a 0 0 00:05:36.049 tests 5 5 5 0 0 00:05:36.049 asserts 60 60 60 0 n/a 00:05:36.049 00:05:36.049 Elapsed time = 0.000 seconds 00:05:36.049 21:47:36 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:05:36.049 00:05:36.049 00:05:36.049 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.049 http://cunit.sourceforge.net/ 00:05:36.049 00:05:36.049 00:05:36.049 Suite: nvme_opal 00:05:36.049 Test: test_opal_nvme_security_recv_send_done ...passed 00:05:36.049 Test: test_opal_add_short_atom_header ...passed 00:05:36.049 00:05:36.049 [2024-05-14 21:47:36.422745] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
00:05:36.049 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.049 suites 1 1 n/a 0 0 00:05:36.049 tests 2 2 2 0 0 00:05:36.049 asserts 22 22 22 0 n/a 00:05:36.049 00:05:36.049 Elapsed time = 0.000 seconds 00:05:36.049 00:05:36.049 real 0m0.395s 00:05:36.049 user 0m0.053s 00:05:36.049 sys 0m0.154s 00:05:36.049 ************************************ 00:05:36.049 END TEST unittest_nvme 00:05:36.049 ************************************ 00:05:36.049 21:47:36 unittest.unittest_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.049 21:47:36 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:36.050 21:47:36 unittest -- unit/unittest.sh@247 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:36.050 21:47:36 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.050 21:47:36 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.050 21:47:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.050 ************************************ 00:05:36.050 START TEST unittest_log 00:05:36.050 ************************************ 00:05:36.050 21:47:36 unittest.unittest_log -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:36.050 00:05:36.050 00:05:36.050 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.050 http://cunit.sourceforge.net/ 00:05:36.050 00:05:36.050 00:05:36.050 Suite: log 00:05:36.050 Test: log_test ...passed 00:05:36.050 Test: deprecation ...[2024-05-14 21:47:36.459482] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:05:36.050 [2024-05-14 21:47:36.459758] log_ut.c: 57:log_test: *DEBUG*: log test 00:05:36.050 log dump test: 00:05:36.050 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:05:36.050 spdk dump test: 00:05:36.050 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:05:36.050 spdk dump test: 00:05:36.050 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:05:36.050 00000010 65 20 63 68 61 72 73 e chars 00:05:36.988 passed 00:05:36.988 00:05:36.988 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.988 suites 1 1 n/a 0 0 00:05:36.988 tests 2 2 2 0 0 00:05:36.988 asserts 73 73 73 0 n/a 00:05:36.988 00:05:36.988 Elapsed time = 0.000 seconds 00:05:36.988 00:05:36.988 real 0m1.024s 00:05:36.988 user 0m0.000s 00:05:36.988 sys 0m0.004s 00:05:36.988 21:47:37 unittest.unittest_log -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.988 ************************************ 00:05:36.988 END TEST unittest_log 00:05:36.988 ************************************ 00:05:36.989 21:47:37 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:05:36.989 21:47:37 unittest -- unit/unittest.sh@248 -- # run_test unittest_lvol /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:36.989 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.989 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.989 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.989 ************************************ 00:05:36.989 START TEST unittest_lvol 00:05:36.989 ************************************ 00:05:36.989 21:47:37 unittest.unittest_lvol -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:36.989 00:05:36.989 00:05:36.989 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.989 
http://cunit.sourceforge.net/ 00:05:36.989 00:05:36.989 00:05:36.989 Suite: lvol 00:05:36.989 Test: lvs_init_unload_success ...passed 00:05:36.989 Test: lvs_init_destroy_success ...passed 00:05:36.989 Test: lvs_init_opts_success ...passed 00:05:36.989 Test: lvs_unload_lvs_is_null_fail ...[2024-05-14 21:47:37.528934] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:05:36.989 [2024-05-14 21:47:37.529178] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:05:36.989 [2024-05-14 21:47:37.529228] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:05:36.989 passed 00:05:36.989 Test: lvs_names ...[2024-05-14 21:47:37.529246] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:05:36.989 passed 00:05:36.989 Test: lvol_create_destroy_success ...passed 00:05:36.989 Test: lvol_create_fail ...passed 00:05:36.989 Test: lvol_destroy_fail ...passed 00:05:36.989 Test: lvol_close ...passed 00:05:36.989 Test: lvol_resize ...passed 00:05:36.989 Test: lvol_set_read_only ...passed 00:05:36.989 Test: test_lvs_load ...passed 00:05:36.989 Test: lvols_load ...passed 00:05:36.989 Test: lvol_open ...passed 00:05:36.989 Test: lvol_snapshot ...passed 00:05:36.989 Test: lvol_snapshot_fail ...[2024-05-14 21:47:37.529259] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:05:36.989 [2024-05-14 21:47:37.529286] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:05:36.989 [2024-05-14 21:47:37.529346] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:05:36.989 [2024-05-14 21:47:37.529363] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:05:36.989 [2024-05-14 21:47:37.529405] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:05:36.989 [2024-05-14 21:47:37.529432] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:05:36.989 [2024-05-14 21:47:37.529456] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:05:36.989 [2024-05-14 21:47:37.529517] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:05:36.989 [2024-05-14 21:47:37.529532] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:05:36.989 [2024-05-14 21:47:37.529565] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:05:36.989 [2024-05-14 21:47:37.529598] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:05:36.989 [2024-05-14 21:47:37.529673] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:05:36.989 passed 00:05:36.989 Test: lvol_clone ...passed 00:05:36.989 Test: lvol_clone_fail ...passed 00:05:36.989 Test: lvol_iter_clones ...passed 00:05:36.989 Test: lvol_refcnt ...passed 00:05:36.989 Test: lvol_names ...passed 00:05:36.989 Test: lvol_create_thin_provisioned ...passed 00:05:36.989 Test: lvol_rename 
...[2024-05-14 21:47:37.529727] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:05:36.989 [2024-05-14 21:47:37.529774] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 947bf8a8-123b-11ef-8c90-4585f0cfab08 because it is still open 00:05:36.989 [2024-05-14 21:47:37.529797] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:05:36.989 [2024-05-14 21:47:37.529814] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:36.989 [2024-05-14 21:47:37.529837] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:05:36.989 [2024-05-14 21:47:37.529880] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:36.989 [2024-05-14 21:47:37.529899] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:05:36.989 passed 00:05:36.989 Test: lvs_rename ...passed 00:05:36.989 Test: lvol_inflate ...passed 00:05:36.989 Test: lvol_decouple_parent ...passed 00:05:36.989 Test: lvol_get_xattr ...passed 00:05:36.989 Test: lvol_esnap_reload ...passed 00:05:36.989 Test: lvol_esnap_create_bad_args ...passed 00:05:36.989 Test: lvol_esnap_create_delete ...passed 00:05:36.989 Test: lvol_esnap_load_esnaps ...passed 00:05:36.989 Test: lvol_esnap_missing ...passed 00:05:36.989 Test: lvol_esnap_hotplug ... 00:05:36.989 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:05:36.989 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:05:36.989 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:05:36.989 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:05:36.989 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:05:36.989 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:05:36.989 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:05:36.989 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:05:36.989 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:05:36.989 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:05:36.989 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:05:36.989 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:05:36.989 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:05:36.989 passed 00:05:36.989 Test: lvol_get_by ...passed 00:05:36.989 Test: lvol_shallow_copy ...[2024-05-14 21:47:37.529928] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:05:36.989 [2024-05-14 21:47:37.529955] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:36.989 [2024-05-14 21:47:37.529981] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:36.989 [2024-05-14 21:47:37.530036] 
/usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:05:36.989 [2024-05-14 21:47:37.530051] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:05:36.989 [2024-05-14 21:47:37.530064] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:05:36.989 [2024-05-14 21:47:37.530082] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:36.989 [2024-05-14 21:47:37.530110] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:05:36.989 [2024-05-14 21:47:37.530147] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:05:36.989 [2024-05-14 21:47:37.530175] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:36.989 [2024-05-14 21:47:37.530188] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:36.989 [2024-05-14 21:47:37.530257] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 947c0b79-123b-11ef-8c90-4585f0cfab08: failed to create esnap bs_dev: error -12 00:05:36.989 [2024-05-14 21:47:37.530321] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 947c0dd3-123b-11ef-8c90-4585f0cfab08: failed to create esnap bs_dev: error -12 00:05:36.989 [2024-05-14 21:47:37.530353] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 947c0f43-123b-11ef-8c90-4585f0cfab08: failed to create esnap bs_dev: error -12 00:05:36.989 [2024-05-14 21:47:37.530538] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:36.989 passed 00:05:36.989 00:05:36.989 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.989 suites 1 1 n/a 0 0 00:05:36.989 tests 35 35 35 0 0 00:05:36.989 asserts 1459 1459 1459 0 n/a 00:05:36.989 00:05:36.989 Elapsed time = 0.000 seconds 00:05:36.989 [2024-05-14 21:47:37.530556] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 947c167c-123b-11ef-8c90-4585f0cfab08 shallow copy, ext_dev must not be NULL 00:05:36.989 00:05:36.989 real 0m0.008s 00:05:36.989 user 0m0.004s 00:05:36.989 sys 0m0.008s 00:05:36.989 ************************************ 00:05:36.989 END TEST unittest_lvol 00:05:36.989 ************************************ 00:05:36.989 21:47:37 unittest.unittest_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.989 21:47:37 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:36.989 21:47:37 unittest -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:36.989 21:47:37 unittest -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:36.989 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.989 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.989 21:47:37 unittest -- 
common/autotest_common.sh@10 -- # set +x 00:05:36.989 ************************************ 00:05:36.989 START TEST unittest_nvme_rdma 00:05:36.989 ************************************ 00:05:36.989 21:47:37 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:36.989 00:05:36.989 00:05:36.989 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.989 http://cunit.sourceforge.net/ 00:05:36.989 00:05:36.989 00:05:36.989 Suite: nvme_rdma 00:05:37.251 Test: test_nvme_rdma_build_sgl_request ...passed 00:05:37.251 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:05:37.251 Test: test_nvme_rdma_build_contig_request ...passed 00:05:37.251 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:05:37.251 Test: test_nvme_rdma_create_reqs ...passed 00:05:37.251 Test: test_nvme_rdma_create_rsps ...passed 00:05:37.251 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:05:37.251 Test: test_nvme_rdma_poller_create ...[2024-05-14 21:47:37.577418] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:05:37.251 [2024-05-14 21:47:37.577627] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1633:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:37.251 [2024-05-14 21:47:37.577643] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1689:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:05:37.251 [2024-05-14 21:47:37.577665] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1570:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:37.251 [2024-05-14 21:47:37.577687] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:05:37.251 [2024-05-14 21:47:37.577725] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:05:37.251 [2024-05-14 21:47:37.577751] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:05:37.251 [2024-05-14 21:47:37.577764] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:05:37.251 passed 00:05:37.251 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:05:37.251 Test: test_nvme_rdma_ctrlr_construct ...passed 00:05:37.251 Test: test_nvme_rdma_req_put_and_get ...passed[2024-05-14 21:47:37.577792] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:05:37.251 00:05:37.251 Test: test_nvme_rdma_req_init ...passed 00:05:37.251 Test: test_nvme_rdma_validate_cm_event ...passed 00:05:37.251 Test: test_nvme_rdma_qpair_init ...passed 00:05:37.251 Test: test_nvme_rdma_qpair_submit_request ...passed 00:05:37.251 Test: test_nvme_rdma_memory_domain ...passed 00:05:37.251 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:05:37.251 Test: test_rdma_get_memory_translation ...passed 00:05:37.251 Test: test_get_rdma_qpair_from_wc ...passed 00:05:37.251 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:05:37.251 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:05:37.251 Test: test_nvme_rdma_qpair_set_poller ...passed 00:05:37.251 00:05:37.251 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.251 suites 1 1 n/a 0 0 00:05:37.251 tests 22 22 22 0 0 00:05:37.251 asserts 412 412 412 0 n/a 00:05:37.251 00:05:37.251 Elapsed time = 0.000 seconds 00:05:37.251 [2024-05-14 21:47:37.577881] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:05:37.251 [2024-05-14 21:47:37.577894] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:05:37.251 [2024-05-14 21:47:37.577934] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:05:37.251 [2024-05-14 21:47:37.577953] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:05:37.251 [2024-05-14 21:47:37.577963] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:05:37.251 [2024-05-14 21:47:37.577983] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:37.251 [2024-05-14 21:47:37.577993] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:37.251 [2024-05-14 21:47:37.578021] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:05:37.251 [2024-05-14 21:47:37.578031] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:05:37.251 [2024-05-14 21:47:37.578041] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820a3d568 on poll group 0x82bf0a000 00:05:37.251 [2024-05-14 21:47:37.578051] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:05:37.251 [2024-05-14 21:47:37.578061] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:05:37.251 [2024-05-14 21:47:37.578070] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820a3d568 on poll group 0x82bf0a000 00:05:37.251 [2024-05-14 21:47:37.578115] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:05:37.251 00:05:37.251 real 0m0.006s 00:05:37.251 user 0m0.000s 00:05:37.251 sys 0m0.005s 00:05:37.251 21:47:37 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.251 ************************************ 00:05:37.251 END TEST unittest_nvme_rdma 00:05:37.251 ************************************ 00:05:37.251 21:47:37 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:37.251 21:47:37 unittest -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:37.251 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.251 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.251 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.251 ************************************ 00:05:37.251 START TEST unittest_nvmf_transport 00:05:37.251 ************************************ 00:05:37.251 21:47:37 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:37.251 00:05:37.251 00:05:37.251 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.251 http://cunit.sourceforge.net/ 00:05:37.251 00:05:37.251 00:05:37.251 Suite: nvmf 00:05:37.251 Test: test_spdk_nvmf_transport_create ...[2024-05-14 21:47:37.625342] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:05:37.251 [2024-05-14 21:47:37.625517] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:05:37.251 [2024-05-14 21:47:37.625531] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:05:37.251 passed 00:05:37.251 Test: test_nvmf_transport_poll_group_create ...[2024-05-14 21:47:37.625566] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:05:37.251 passed 00:05:37.252 Test: test_spdk_nvmf_transport_opts_init ...passed 00:05:37.252 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:05:37.252 00:05:37.252 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.252 suites 1 1 n/a 0 0 00:05:37.252 tests 4 4 4 0 0 00:05:37.252 asserts 49 49 49 0 n/a 00:05:37.252 00:05:37.252 Elapsed time = 0.000 seconds 00:05:37.252 [2024-05-14 21:47:37.625594] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:05:37.252 [2024-05-14 21:47:37.625603] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:05:37.252 [2024-05-14 21:47:37.625611] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:05:37.252 00:05:37.252 real 0m0.005s 00:05:37.252 user 0m0.000s 00:05:37.252 sys 0m0.004s 00:05:37.252 21:47:37 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.252 21:47:37 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:05:37.252 ************************************ 00:05:37.252 END TEST unittest_nvmf_transport 00:05:37.252 ************************************ 00:05:37.252 21:47:37 unittest -- unit/unittest.sh@252 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:37.252 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.252 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.252 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.252 ************************************ 00:05:37.252 START TEST unittest_rdma 00:05:37.252 ************************************ 00:05:37.252 21:47:37 unittest.unittest_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:37.252 00:05:37.252 00:05:37.252 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.252 http://cunit.sourceforge.net/ 00:05:37.252 00:05:37.252 00:05:37.252 Suite: rdma_common 00:05:37.252 Test: test_spdk_rdma_pd ...[2024-05-14 21:47:37.666687] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:05:37.252 passed 00:05:37.252 00:05:37.252 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.252 suites 1 1 n/a 0 0 00:05:37.252 tests 1 1 1 0 0 00:05:37.252 asserts 31 31 31 0 n/a 00:05:37.252 00:05:37.252 Elapsed time = 0.000 seconds 00:05:37.252 [2024-05-14 21:47:37.666906] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:05:37.252 00:05:37.252 real 0m0.005s 00:05:37.252 user 0m0.004s 00:05:37.252 sys 0m0.000s 00:05:37.252 21:47:37 unittest.unittest_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.252 ************************************ 00:05:37.252 END TEST unittest_rdma 00:05:37.252 ************************************ 00:05:37.252 21:47:37 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:37.252 21:47:37 unittest -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:37.252 21:47:37 unittest -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:05:37.252 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.252 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.252 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.252 ************************************ 00:05:37.252 START TEST unittest_nvmf 00:05:37.252 ************************************ 00:05:37.252 21:47:37 unittest.unittest_nvmf -- common/autotest_common.sh@1121 -- # unittest_nvmf 00:05:37.252 21:47:37 unittest.unittest_nvmf -- unit/unittest.sh@106 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 
00:05:37.252 00:05:37.252 00:05:37.252 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.252 http://cunit.sourceforge.net/ 00:05:37.252 00:05:37.252 00:05:37.252 Suite: nvmf 00:05:37.252 Test: test_get_log_page ...[2024-05-14 21:47:37.710530] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:05:37.252 passed 00:05:37.252 Test: test_process_fabrics_cmd ...passed 00:05:37.252 Test: test_connect ...[2024-05-14 21:47:37.710770] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:05:37.252 [2024-05-14 21:47:37.710868] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1006:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:05:37.252 [2024-05-14 21:47:37.710896] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 869:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:05:37.252 [2024-05-14 21:47:37.710912] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1045:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:05:37.252 [2024-05-14 21:47:37.710928] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:05:37.252 [2024-05-14 21:47:37.710953] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 880:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:05:37.252 [2024-05-14 21:47:37.710969] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 888:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:05:37.252 [2024-05-14 21:47:37.710986] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:05:37.252 [2024-05-14 21:47:37.711003] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 920:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:05:37.252 [2024-05-14 21:47:37.711016] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:05:37.252 [2024-05-14 21:47:37.711026] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 670:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:05:37.252 [2024-05-14 21:47:37.711044] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:05:37.252 [2024-05-14 21:47:37.711068] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 683:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:05:37.252 [2024-05-14 21:47:37.711086] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 690:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:05:37.252 [2024-05-14 21:47:37.711104] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 714:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:05:37.252 [2024-05-14 21:47:37.711133] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 293:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:05:37.252 [2024-05-14 21:47:37.711162] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:05:37.252 passed 00:05:37.252 Test: test_get_ns_id_desc_list ...passed 00:05:37.252 Test: test_identify_ns ...passed 00:05:37.252 Test: test_identify_ns_iocs_specific ...passed 00:05:37.252 Test: test_reservation_write_exclusive ...[2024-05-14 21:47:37.711180] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:05:37.252 [2024-05-14 21:47:37.711240] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:37.252 [2024-05-14 21:47:37.711303] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:05:37.252 [2024-05-14 21:47:37.711326] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:05:37.252 [2024-05-14 21:47:37.711360] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:37.252 [2024-05-14 21:47:37.711416] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:37.252 passed 00:05:37.252 Test: test_reservation_exclusive_access ...passed 00:05:37.252 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:05:37.252 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:05:37.252 Test: test_reservation_notification_log_page ...passed 00:05:37.252 Test: test_get_dif_ctx ...passed 00:05:37.252 Test: test_set_get_features ...passed 00:05:37.252 Test: test_identify_ctrlr ...[2024-05-14 21:47:37.711534] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:37.252 [2024-05-14 21:47:37.711554] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:37.252 [2024-05-14 21:47:37.711570] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1653:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:05:37.252 [2024-05-14 21:47:37.711586] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1729:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:05:37.252 passed 00:05:37.252 Test: test_identify_ctrlr_iocs_specific ...passed 00:05:37.252 Test: test_custom_admin_cmd ...passed 00:05:37.252 Test: test_fused_compare_and_write ...[2024-05-14 21:47:37.711680] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4212:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:05:37.252 [2024-05-14 21:47:37.711704] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4201:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:37.252 [2024-05-14 21:47:37.711719] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4219:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:37.252 passed 00:05:37.252 Test: test_multi_async_event_reqs ...passed 00:05:37.252 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:05:37.252 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:05:37.252 Test: test_multi_async_events ...passed 00:05:37.252 Test: test_rae ...passed 00:05:37.252 Test: test_nvmf_ctrlr_create_destruct ...passed 00:05:37.252 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:05:37.252 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:05:37.252 Test: test_zcopy_read ...passed 00:05:37.252 Test: test_zcopy_write ...passed 00:05:37.252 Test: test_nvmf_property_set ...passed 00:05:37.252 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:05:37.252 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-05-14 21:47:37.711810] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:05:37.252 [2024-05-14 21:47:37.711832] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:05:37.252 [2024-05-14 21:47:37.711872] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:37.252 [2024-05-14 21:47:37.711882] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:37.252 [2024-05-14 21:47:37.711894] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1963:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:05:37.252 [2024-05-14 21:47:37.711909] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:05:37.252 passed 00:05:37.253 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:05:37.253 Test: test_nvmf_check_qpair_active ...[2024-05-14 21:47:37.711923] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1981:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:05:37.253 [2024-05-14 21:47:37.711961] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:05:37.253 [2024-05-14 21:47:37.711978] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4692:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:05:37.253 [2024-05-14 21:47:37.711994] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 
on qid 0 in state 0 00:05:37.253 [2024-05-14 21:47:37.712010] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:05:37.253 [2024-05-14 21:47:37.712024] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:05:37.253 passed 00:05:37.253 00:05:37.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.253 suites 1 1 n/a 0 0 00:05:37.253 tests 32 32 32 0 0 00:05:37.253 asserts 977 977 977 0 n/a 00:05:37.253 00:05:37.253 Elapsed time = 0.000 seconds 00:05:37.253 21:47:37 unittest.unittest_nvmf -- unit/unittest.sh@107 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:05:37.253 00:05:37.253 00:05:37.253 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.253 http://cunit.sourceforge.net/ 00:05:37.253 00:05:37.253 00:05:37.253 Suite: nvmf 00:05:37.253 Test: test_get_rw_params ...passed 00:05:37.253 Test: test_get_rw_ext_params ...passed 00:05:37.253 Test: test_lba_in_range ...passed 00:05:37.253 Test: test_get_dif_ctx ...passed 00:05:37.253 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:05:37.253 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-05-14 21:47:37.718594] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:05:37.253 [2024-05-14 21:47:37.718840] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:05:37.253 passed 00:05:37.253 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:05:37.253 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-05-14 21:47:37.718864] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:05:37.253 [2024-05-14 21:47:37.718887] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:05:37.253 [2024-05-14 21:47:37.718902] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:05:37.253 [2024-05-14 21:47:37.718921] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:05:37.253 [2024-05-14 21:47:37.718936] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:05:37.253 passed 00:05:37.253 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:05:37.253 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:05:37.253 00:05:37.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.253 suites 1 1 n/a 0 0 00:05:37.253 tests 10 10 10 0 0 00:05:37.253 asserts 159 159 159 0 n/a 00:05:37.253 00:05:37.253 Elapsed time = 0.000 seconds 00:05:37.253 [2024-05-14 21:47:37.718973] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:05:37.253 [2024-05-14 21:47:37.718987] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:05:37.253 21:47:37 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:05:37.253 00:05:37.253 
00:05:37.253 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.253 http://cunit.sourceforge.net/ 00:05:37.253 00:05:37.253 00:05:37.253 Suite: nvmf 00:05:37.253 Test: test_discovery_log ...passed 00:05:37.253 Test: test_discovery_log_with_filters ...passed 00:05:37.253 00:05:37.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.253 suites 1 1 n/a 0 0 00:05:37.253 tests 2 2 2 0 0 00:05:37.253 asserts 238 238 238 0 n/a 00:05:37.253 00:05:37.253 Elapsed time = 0.000 seconds 00:05:37.253 21:47:37 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:05:37.253 00:05:37.253 00:05:37.253 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.253 http://cunit.sourceforge.net/ 00:05:37.253 00:05:37.253 00:05:37.253 Suite: nvmf 00:05:37.253 Test: nvmf_test_create_subsystem ...[2024-05-14 21:47:37.729205] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:05:37.253 [2024-05-14 21:47:37.729786] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:05:37.253 [2024-05-14 21:47:37.729814] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:05:37.253 [2024-05-14 21:47:37.729823] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:05:37.253 [2024-05-14 21:47:37.729831] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:05:37.253 [2024-05-14 21:47:37.729837] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:05:37.253 [2024-05-14 21:47:37.729849] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:05:37.253 [2024-05-14 21:47:37.729856] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:05:37.253 [2024-05-14 21:47:37.729863] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:05:37.253 [2024-05-14 21:47:37.729869] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:05:37.253 [2024-05-14 21:47:37.729876] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:05:37.253 [2024-05-14 21:47:37.729883] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:05:37.253 [2024-05-14 21:47:37.729893] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:05:37.253 [2024-05-14 21:47:37.729901] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:05:37.253 [2024-05-14 21:47:37.730040] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:05:37.253 [2024-05-14 21:47:37.730052] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:05:37.253 [2024-05-14 21:47:37.730061] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:05:37.253 passed 00:05:37.253 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:05:37.253 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:05:37.253 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:05:37.253 Test: test_spdk_nvmf_ns_visible ...passed 00:05:37.253 Test: test_reservation_register ...[2024-05-14 21:47:37.730174] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:05:37.253 [2024-05-14 21:47:37.730207] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:37.253 [2024-05-14 21:47:37.730224] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:05:37.253 [2024-05-14 21:47:37.730240] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:37.253 [2024-05-14 21:47:37.730250] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:05:37.253 [2024-05-14 21:47:37.730346] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:05:37.253 [2024-05-14 21:47:37.730373] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1962:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:05:37.253 [2024-05-14 21:47:37.730408] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2091:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:05:37.253 [2024-05-14 21:47:37.730455] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:05:37.253 [2024-05-14 21:47:37.730574] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.253 passed 00:05:37.253 Test: test_reservation_register_with_ptpl ...[2024-05-14 21:47:37.730610] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3087:nvmf_ns_reservation_register: *ERROR*: No registrant 00:05:37.253 passed 00:05:37.253 Test: test_reservation_acquire_preempt_1 ...passed 00:05:37.253 Test: test_reservation_acquire_release_with_ptpl ...passed 00:05:37.253 Test: test_reservation_release ...passed 00:05:37.253 Test: test_reservation_unregister_notification ...[2024-05-14 21:47:37.730884] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.253 [2024-05-14 21:47:37.731142] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.253 [2024-05-14 21:47:37.731184] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.253 passed 00:05:37.254 Test: test_reservation_release_notification ...passed 00:05:37.254 Test: test_reservation_release_notification_write_exclusive ...passed 00:05:37.254 Test: test_reservation_clear_notification ...passed 00:05:37.254 Test: test_reservation_preempt_notification ...[2024-05-14 21:47:37.731224] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.254 [2024-05-14 21:47:37.731259] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.254 [2024-05-14 21:47:37.731287] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.254 [2024-05-14 21:47:37.731319] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3031:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:37.254 passed 00:05:37.254 Test: test_spdk_nvmf_ns_event ...passed 00:05:37.254 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:05:37.254 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:05:37.254 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:05:37.254 Test: test_nvmf_ns_reservation_report ...passed 00:05:37.254 Test: test_nvmf_nqn_is_valid ...passed 00:05:37.254 Test: test_nvmf_ns_reservation_restore ...[2024-05-14 21:47:37.731469] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:05:37.254 [2024-05-14 21:47:37.731493] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:05:37.254 [2024-05-14 21:47:37.731515] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3393:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended 
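The subsystem_ut messages above spell out the NQN rules enforced at subsystem creation: 11 to 223 bytes total, an "nqn.yyyy-mm." reverse-domain prefix whose labels start with a letter, end with an alphanumeric character, and are valid UTF-8, a ':' separating the domain from a user-supplied name, and, for the uuid form, an exactly 36-character UUID. A simplified length/prefix check, hypothetical and not the nvmf_nqn_is_valid() implementation:

#include <stdbool.h>
#include <string.h>

static bool
nqn_basic_checks(const char *nqn)
{
    size_t len = strlen(nqn);

    /* "length 0 < min 11" / "length 224 > max 223" */
    if (len < 11 || len > 223) {
        return false;
    }
    if (strncmp(nqn, "nqn.", 4) != 0) {
        return false;
    }
    /* "NQN must contain user specified name with a ':' as a prefix." */
    return strchr(nqn, ':') != NULL;
}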
controller data structure, please set EDS bit in cdw11 and try again 00:05:37.254 [2024-05-14 21:47:37.731552] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:05:37.254 [2024-05-14 21:47:37.731573] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:949ac241-123b-11ef-8c90-4585f0cfab0": uuid is not the correct length 00:05:37.254 [2024-05-14 21:47:37.731593] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:05:37.254 passed 00:05:37.254 Test: test_nvmf_subsystem_state_change ...passed 00:05:37.254 Test: test_nvmf_reservation_custom_ops ...passed 00:05:37.254 00:05:37.254 [2024-05-14 21:47:37.731647] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2586:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:05:37.254 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.254 suites 1 1 n/a 0 0 00:05:37.254 tests 24 24 24 0 0 00:05:37.254 asserts 499 499 499 0 n/a 00:05:37.254 00:05:37.254 Elapsed time = 0.000 seconds 00:05:37.254 21:47:37 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:05:37.254 00:05:37.254 00:05:37.254 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.254 http://cunit.sourceforge.net/ 00:05:37.254 00:05:37.254 00:05:37.254 Suite: nvmf 00:05:37.254 Test: test_nvmf_tcp_create ...[2024-05-14 21:47:37.739809] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:05:37.254 passed 00:05:37.254 Test: test_nvmf_tcp_destroy ...passed 00:05:37.254 Test: test_nvmf_tcp_poll_group_create ...passed 00:05:37.254 Test: test_nvmf_tcp_send_c2h_data ...passed 00:05:37.254 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:05:37.254 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:05:37.254 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:05:37.254 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-05-14 21:47:37.752213] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752256] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752271] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 passed 00:05:37.254 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed[2024-05-14 21:47:37.752282] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752293] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 00:05:37.254 Test: test_nvmf_tcp_icreq_handle ...passed 00:05:37.254 Test: test_nvmf_tcp_check_xfer_type ...passed 00:05:37.254 Test: test_nvmf_tcp_invalid_sgl ...passed 00:05:37.254 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-05-14 21:47:37.752340] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:37.254 [2024-05-14 21:47:37.752354] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752364] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ec00 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752375] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:37.254 [2024-05-14 21:47:37.752386] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ec00 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752396] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752407] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ec00 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752418] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752429] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ec00 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752451] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2509:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:05:37.254 [2024-05-14 21:47:37.752463] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752473] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ec00 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752487] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2240:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820d6e488 00:05:37.254 [2024-05-14 21:47:37.752499] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752509] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752521] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2299:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x820d6ecf8 00:05:37.254 [2024-05-14 21:47:37.752532] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752542] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752554] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2250:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:05:37.254 [2024-05-14 21:47:37.752564] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752575] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752586] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2289:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:05:37.254 [2024-05-14 21:47:37.752597] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752607] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752619] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752629] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752640] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752650] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752661] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752671] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 passed 00:05:37.254 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-05-14 21:47:37.752683] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752694] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752705] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752715] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 [2024-05-14 21:47:37.752727] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:05:37.254 [2024-05-14 21:47:37.752737] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d6ecf8 is same with the state(5) to be set 00:05:37.254 passed 00:05:37.254 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:05:37.254 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-05-14 21:47:37.757953] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 
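The nvmf_tcp_pdu_ch_handle errors above describe the common-header validation of an incoming PDU: an ICReq must be the first PDU on the connection, its type is 0x00, and it must announce a 128-byte header. A sketch of that gate (hypothetical helper, not the tcp.c code):

#include <stdbool.h>
#include <stdint.h>

static bool
icreq_common_hdr_ok(uint8_t pdu_type, uint8_t hlen, bool icreq_already_received)
{
    if (icreq_already_received) {   /* "Already received ICreq PDU, and reject this pdu" */
        return false;
    }
    if (pdu_type != 0x00) {         /* "Unexpected PDU type 0x05" */
        return false;
    }
    return hlen == 128;             /* "Expected ICReq header length 128, got 0" */
}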
00:05:37.254 [2024-05-14 21:47:37.757975] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:05:37.254 passed 00:05:37.254 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:05:37.254 00:05:37.254 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.254 suites 1 1 n/a 0 0 00:05:37.254 tests 17 17 17 0 0 00:05:37.254 asserts 222 222 222 0 n/a 00:05:37.254 00:05:37.254 Elapsed time = 0.008 seconds 00:05:37.254 [2024-05-14 21:47:37.758093] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:05:37.255 [2024-05-14 21:47:37.758108] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:05:37.255 [2024-05-14 21:47:37.758174] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:05:37.255 [2024-05-14 21:47:37.758185] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:05:37.255 21:47:37 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:05:37.255 00:05:37.255 00:05:37.255 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.255 http://cunit.sourceforge.net/ 00:05:37.255 00:05:37.255 00:05:37.255 Suite: nvmf 00:05:37.255 Test: test_nvmf_tgt_create_poll_group ...passed 00:05:37.255 00:05:37.255 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.255 suites 1 1 n/a 0 0 00:05:37.255 tests 1 1 1 0 0 00:05:37.255 asserts 17 17 17 0 n/a 00:05:37.255 00:05:37.255 Elapsed time = 0.008 seconds 00:05:37.255 00:05:37.255 real 0m0.064s 00:05:37.255 user 0m0.017s 00:05:37.255 sys 0m0.049s 00:05:37.255 ************************************ 00:05:37.255 21:47:37 unittest.unittest_nvmf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.255 21:47:37 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:05:37.255 END TEST unittest_nvmf 00:05:37.255 ************************************ 00:05:37.255 21:47:37 unittest -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:37.255 21:47:37 unittest -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:37.255 21:47:37 unittest -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:37.255 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.255 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.255 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.255 ************************************ 00:05:37.255 START TEST unittest_nvmf_rdma 00:05:37.255 ************************************ 00:05:37.255 21:47:37 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:37.255 00:05:37.255 00:05:37.255 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.255 http://cunit.sourceforge.net/ 00:05:37.255 00:05:37.255 00:05:37.255 Suite: nvmf 00:05:37.255 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-05-14 21:47:37.820350] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1861:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:05:37.255 passed 00:05:37.255 Test: test_spdk_nvmf_rdma_request_process ...[2024-05-14 21:47:37.820693] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1911:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:05:37.255 [2024-05-14 21:47:37.820725] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1911:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:05:37.255 passed 00:05:37.255 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:05:37.255 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:05:37.255 Test: test_nvmf_rdma_opts_init ...passed 00:05:37.255 Test: test_nvmf_rdma_request_free_data ...passed 00:05:37.255 Test: test_nvmf_rdma_resources_create ...passed 00:05:37.255 Test: test_nvmf_rdma_qpair_compare ...passed 00:05:37.255 Test: test_nvmf_rdma_resize_cq ...passed 00:05:37.255 00:05:37.255 [2024-05-14 21:47:37.821850] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 950:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:05:37.255 Using CQ of insufficient size may lead to CQ overrun 00:05:37.255 [2024-05-14 21:47:37.821879] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:05:37.255 [2024-05-14 21:47:37.821937] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:05:37.255 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.255 suites 1 1 n/a 0 0 00:05:37.255 tests 9 9 9 0 0 00:05:37.255 asserts 579 579 579 0 n/a 00:05:37.255 00:05:37.255 Elapsed time = 0.000 seconds 00:05:37.255 00:05:37.255 real 0m0.009s 00:05:37.255 user 0m0.007s 00:05:37.255 sys 0m0.000s 00:05:37.255 21:47:37 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.255 21:47:37 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:37.255 ************************************ 00:05:37.255 END TEST unittest_nvmf_rdma 00:05:37.255 ************************************ 00:05:37.517 21:47:37 unittest -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:37.517 21:47:37 unittest -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:05:37.517 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.517 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.517 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.517 ************************************ 00:05:37.517 START TEST unittest_scsi 00:05:37.517 ************************************ 00:05:37.517 21:47:37 unittest.unittest_scsi -- common/autotest_common.sh@1121 -- # unittest_scsi 00:05:37.517 21:47:37 unittest.unittest_scsi -- unit/unittest.sh@115 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:05:37.517 00:05:37.517 00:05:37.517 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.517 http://cunit.sourceforge.net/ 00:05:37.517 00:05:37.517 00:05:37.517 Suite: dev_suite 00:05:37.517 Test: dev_destruct_null_dev ...passed 00:05:37.517 Test: dev_destruct_zero_luns ...passed 00:05:37.517 Test: dev_destruct_null_lun ...passed 
00:05:37.517 Test: dev_destruct_success ...passed 00:05:37.517 Test: dev_construct_num_luns_zero ...[2024-05-14 21:47:37.866581] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:05:37.517 passed 00:05:37.517 Test: dev_construct_no_lun_zero ...passed 00:05:37.517 Test: dev_construct_null_lun ...passed 00:05:37.517 Test: dev_construct_name_too_long ...passed 00:05:37.517 Test: dev_construct_success ...passed 00:05:37.517 Test: dev_construct_success_lun_zero_not_first ...passed 00:05:37.517 Test: dev_queue_mgmt_task_success ...passed 00:05:37.517 Test: dev_queue_task_success ...passed 00:05:37.517 Test: dev_stop_success ...passed 00:05:37.517 Test: dev_add_port_max_ports ...[2024-05-14 21:47:37.866791] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:05:37.517 [2024-05-14 21:47:37.866808] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:05:37.517 [2024-05-14 21:47:37.866821] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:05:37.517 [2024-05-14 21:47:37.866868] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:05:37.517 passed 00:05:37.517 Test: dev_add_port_construct_failure1 ...passed 00:05:37.517 Test: dev_add_port_construct_failure2 ...passed 00:05:37.517 Test: dev_add_port_success1 ...passed 00:05:37.517 Test: dev_add_port_success2 ...passed 00:05:37.517 Test: dev_add_port_success3 ...passed 00:05:37.517 Test: dev_find_port_by_id_num_ports_zero ...passed 00:05:37.517 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:05:37.517 Test: dev_find_port_by_id_success ...passed 00:05:37.517 Test: dev_add_lun_bdev_not_found ...passed 00:05:37.517 Test: dev_add_lun_no_free_lun_id ...[2024-05-14 21:47:37.866882] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:05:37.517 [2024-05-14 21:47:37.866895] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:05:37.517 passed 00:05:37.517 Test: dev_add_lun_success1 ...passed 00:05:37.517 Test: dev_add_lun_success2 ...passed 00:05:37.517 Test: dev_check_pending_tasks ...passed 00:05:37.517 Test: dev_iterate_luns ...passed 00:05:37.517 Test: dev_find_free_lun ...[2024-05-14 21:47:37.867141] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:05:37.517 passed 00:05:37.517 00:05:37.517 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.517 suites 1 1 n/a 0 0 00:05:37.517 tests 29 29 29 0 0 00:05:37.517 asserts 97 97 97 0 n/a 00:05:37.517 00:05:37.517 Elapsed time = 0.000 seconds 00:05:37.517 21:47:37 unittest.unittest_scsi -- unit/unittest.sh@116 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:05:37.517 00:05:37.517 00:05:37.517 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.517 http://cunit.sourceforge.net/ 00:05:37.517 00:05:37.517 00:05:37.517 Suite: lun_suite 00:05:37.517 Test: 
lun_task_mgmt_execute_abort_task_not_supported ...passed 00:05:37.517 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:05:37.517 Test: lun_task_mgmt_execute_lun_reset ...passed 00:05:37.517 Test: lun_task_mgmt_execute_target_reset ...passed 00:05:37.517 Test: lun_task_mgmt_execute_invalid_case ...passed 00:05:37.517 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:05:37.517 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:05:37.517 Test: lun_append_task_null_lun_not_supported ...passed 00:05:37.517 Test: lun_execute_scsi_task_pending ...passed 00:05:37.517 Test: lun_execute_scsi_task_complete ...passed 00:05:37.517 Test: lun_execute_scsi_task_resize ...passed 00:05:37.517 Test: lun_destruct_success ...passed 00:05:37.517 Test: lun_construct_null_ctx ...passed 00:05:37.517 Test: lun_construct_success ...passed 00:05:37.517 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:05:37.517 Test: lun_reset_task_suspend_scsi_task ...passed 00:05:37.517 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:05:37.517 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed[2024-05-14 21:47:37.873200] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:05:37.517 [2024-05-14 21:47:37.873415] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:05:37.517 [2024-05-14 21:47:37.873441] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:05:37.517 [2024-05-14 21:47:37.873477] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:05:37.517 00:05:37.517 00:05:37.517 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.517 suites 1 1 n/a 0 0 00:05:37.517 tests 18 18 18 0 0 00:05:37.517 asserts 153 153 153 0 n/a 00:05:37.517 00:05:37.517 Elapsed time = 0.000 seconds 00:05:37.517 21:47:37 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:05:37.517 00:05:37.517 00:05:37.517 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.517 http://cunit.sourceforge.net/ 00:05:37.517 00:05:37.517 00:05:37.517 Suite: scsi_suite 00:05:37.517 Test: scsi_init ...passed 00:05:37.517 00:05:37.517 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.517 suites 1 1 n/a 0 0 00:05:37.517 tests 1 1 1 0 0 00:05:37.517 asserts 1 1 1 0 n/a 00:05:37.517 00:05:37.517 Elapsed time = 0.000 seconds 00:05:37.517 21:47:37 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:05:37.517 00:05:37.517 00:05:37.517 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.517 http://cunit.sourceforge.net/ 00:05:37.517 00:05:37.517 00:05:37.517 Suite: translation_suite 00:05:37.517 Test: mode_select_6_test ...passed 00:05:37.517 Test: mode_select_6_test2 ...passed 00:05:37.517 Test: mode_sense_6_test ...passed 00:05:37.517 Test: mode_sense_10_test ...passed 00:05:37.517 Test: inquiry_evpd_test ...passed 00:05:37.517 Test: inquiry_standard_test ...passed 00:05:37.517 Test: inquiry_overflow_test ...passed 00:05:37.517 Test: task_complete_test ...passed 00:05:37.517 Test: lba_range_test ...passed 00:05:37.517 Test: xfer_len_test ...passed 00:05:37.517 Test: xfer_test ...passed 00:05:37.517 Test: scsi_name_padding_test ...passed 00:05:37.517 
Test: get_dif_ctx_test ...passed 00:05:37.517 Test: unmap_split_test ...passed 00:05:37.517 00:05:37.517 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.517 suites 1 1 n/a 0 0 00:05:37.517 tests 14 14 14 0 0 00:05:37.517 asserts 1205 1205 1205 0 n/a 00:05:37.517 00:05:37.517 Elapsed time = 0.000 seconds 00:05:37.517 [2024-05-14 21:47:37.885125] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:05:37.517 21:47:37 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:05:37.517 00:05:37.517 00:05:37.517 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.517 http://cunit.sourceforge.net/ 00:05:37.517 00:05:37.517 00:05:37.517 Suite: reservation_suite 00:05:37.517 Test: test_reservation_register ...[2024-05-14 21:47:37.891093] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:37.517 passed 00:05:37.517 Test: test_reservation_reserve ...[2024-05-14 21:47:37.891787] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:37.517 [2024-05-14 21:47:37.891832] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:05:37.518 [2024-05-14 21:47:37.891862] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:05:37.518 passed 00:05:37.518 Test: test_reservation_preempt_non_all_regs ...[2024-05-14 21:47:37.892038] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:37.518 [2024-05-14 21:47:37.892078] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:05:37.518 passed 00:05:37.518 Test: test_reservation_preempt_all_regs ...passed 00:05:37.518 Test: test_reservation_cmds_conflict ...passed 00:05:37.518 Test: test_scsi2_reserve_release ...passed 00:05:37.518 Test: test_pr_with_scsi2_reserve_release ...passed 00:05:37.518 00:05:37.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.518 suites 1 1 n/a 0 0 00:05:37.518 tests 7 7 7 0 0 00:05:37.518 asserts 257 257 257 0 n/a 00:05:37.518 00:05:37.518 Elapsed time = 0.000 seconds 00:05:37.518 [2024-05-14 21:47:37.892134] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:37.518 [2024-05-14 21:47:37.892163] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:37.518 [2024-05-14 21:47:37.892190] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:05:37.518 [2024-05-14 21:47:37.892220] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:37.518 [2024-05-14 21:47:37.892241] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:37.518 [2024-05-14 21:47:37.892270] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: 
*ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:37.518 [2024-05-14 21:47:37.892290] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:37.518 [2024-05-14 21:47:37.892335] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:37.518 00:05:37.518 real 0m0.031s 00:05:37.518 user 0m0.004s 00:05:37.518 sys 0m0.023s 00:05:37.518 21:47:37 unittest.unittest_scsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.518 21:47:37 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:05:37.518 ************************************ 00:05:37.518 END TEST unittest_scsi 00:05:37.518 ************************************ 00:05:37.518 21:47:37 unittest -- unit/unittest.sh@276 -- # uname -s 00:05:37.518 21:47:37 unittest -- unit/unittest.sh@276 -- # '[' FreeBSD = Linux ']' 00:05:37.518 21:47:37 unittest -- unit/unittest.sh@279 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:37.518 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.518 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.518 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.518 ************************************ 00:05:37.518 START TEST unittest_thread 00:05:37.518 ************************************ 00:05:37.518 21:47:37 unittest.unittest_thread -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:37.518 00:05:37.518 00:05:37.518 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.518 http://cunit.sourceforge.net/ 00:05:37.518 00:05:37.518 00:05:37.518 Suite: io_channel 00:05:37.518 Test: thread_alloc ...passed 00:05:37.518 Test: thread_send_msg ...passed 00:05:37.518 Test: thread_poller ...passed 00:05:37.518 Test: poller_pause ...passed 00:05:37.518 Test: thread_for_each ...passed 00:05:37.518 Test: for_each_channel_remove ...passed 00:05:37.518 Test: for_each_channel_unreg ...[2024-05-14 21:47:37.939877] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x820fc6d54 already registered (old:0x82bb18000 new:0x82bb18180) 00:05:37.518 passed 00:05:37.518 Test: thread_name ...passed 00:05:37.518 Test: channel ...passed 00:05:37.518 Test: channel_destroy_races ...passed 00:05:37.518 Test: thread_exit_test ...[2024-05-14 21:47:37.940573] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x2276c8 00:05:37.518 passed 00:05:37.518 Test: thread_update_stats_test ...[2024-05-14 21:47:37.941093] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 636:thread_exit: *ERROR*: thread 0x82badda80 got timeout, and move it to the exited state forcefully 00:05:37.518 passed 00:05:37.518 Test: nested_channel ...passed 00:05:37.518 Test: device_unregister_and_thread_exit_race ...passed 00:05:37.518 Test: cache_closest_timed_poller ...passed 00:05:37.518 Test: multi_timed_pollers_have_same_expiration ...passed 00:05:37.518 Test: io_device_lookup ...passed 00:05:37.518 Test: spdk_spin ...[2024-05-14 21:47:37.942099] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:37.518 
[2024-05-14 21:47:37.942116] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820fc6d50 00:05:37.518 [2024-05-14 21:47:37.942126] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:37.518 [2024-05-14 21:47:37.942260] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:37.518 [2024-05-14 21:47:37.942270] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820fc6d50 00:05:37.518 [2024-05-14 21:47:37.942278] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:37.518 [2024-05-14 21:47:37.942286] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820fc6d50 00:05:37.518 [2024-05-14 21:47:37.942301] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:37.518 [2024-05-14 21:47:37.942309] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820fc6d50 00:05:37.518 [2024-05-14 21:47:37.942317] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:05:37.518 [2024-05-14 21:47:37.942325] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820fc6d50 00:05:37.518 passed 00:05:37.518 Test: for_each_channel_and_thread_exit_race ...passed 00:05:37.518 Test: for_each_thread_and_thread_exit_race ...passed 00:05:37.518 00:05:37.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.518 suites 1 1 n/a 0 0 00:05:37.518 tests 20 20 20 0 0 00:05:37.518 asserts 409 409 409 0 n/a 00:05:37.518 00:05:37.518 Elapsed time = 0.008 seconds 00:05:37.518 00:05:37.518 real 0m0.011s 00:05:37.518 user 0m0.010s 00:05:37.518 sys 0m0.006s 00:05:37.518 21:47:37 unittest.unittest_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.518 21:47:37 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.518 ************************************ 00:05:37.518 END TEST unittest_thread 00:05:37.518 ************************************ 00:05:37.518 21:47:37 unittest -- unit/unittest.sh@280 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:37.518 21:47:37 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.518 21:47:37 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.518 21:47:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.518 ************************************ 00:05:37.518 START TEST unittest_iobuf 00:05:37.518 ************************************ 00:05:37.518 21:47:37 unittest.unittest_iobuf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:37.518 00:05:37.518 00:05:37.518 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.518 http://cunit.sourceforge.net/ 00:05:37.518 00:05:37.518 00:05:37.518 Suite: io_channel 00:05:37.518 Test: iobuf ...passed 00:05:37.518 Test: iobuf_cache 
...[2024-05-14 21:47:37.983319] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:37.518 [2024-05-14 21:47:37.983468] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:37.518 [2024-05-14 21:47:37.983494] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:05:37.518 [2024-05-14 21:47:37.983503] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:37.518 [2024-05-14 21:47:37.983514] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:37.518 [2024-05-14 21:47:37.983522] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:37.518 passed 00:05:37.518 00:05:37.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.518 suites 1 1 n/a 0 0 00:05:37.518 tests 2 2 2 0 0 00:05:37.518 asserts 107 107 107 0 n/a 00:05:37.518 00:05:37.518 Elapsed time = 0.000 seconds 00:05:37.518 00:05:37.518 real 0m0.005s 00:05:37.518 user 0m0.005s 00:05:37.518 sys 0m0.000s 00:05:37.518 21:47:37 unittest.unittest_iobuf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.518 ************************************ 00:05:37.518 END TEST unittest_iobuf 00:05:37.518 ************************************ 00:05:37.518 21:47:37 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:05:37.518 21:47:38 unittest -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:05:37.518 21:47:38 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.518 21:47:38 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.518 21:47:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.518 ************************************ 00:05:37.518 START TEST unittest_util 00:05:37.518 ************************************ 00:05:37.518 21:47:38 unittest.unittest_util -- common/autotest_common.sh@1121 -- # unittest_util 00:05:37.518 21:47:38 unittest.unittest_util -- unit/unittest.sh@132 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:05:37.518 00:05:37.518 00:05:37.518 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.518 http://cunit.sourceforge.net/ 00:05:37.518 00:05:37.519 00:05:37.519 Suite: base64 00:05:37.519 Test: test_base64_get_encoded_strlen ...passed 00:05:37.519 Test: test_base64_get_decoded_len ...passed 00:05:37.519 Test: test_base64_encode ...passed 00:05:37.519 Test: test_base64_decode ...passed 00:05:37.519 Test: test_base64_urlsafe_encode ...passed 00:05:37.519 Test: test_base64_urlsafe_decode ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 tests 6 6 6 0 0 00:05:37.519 asserts 112 112 112 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 
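Every *_ut binary run here is a small CUnit program; the recurring "Suite:", "Test: ... passed", and "Run Summary" blocks are CUnit's basic-mode output. A minimal example of the pattern (hypothetical suite and test names):

#include <CUnit/Basic.h>

static void
test_example(void)
{
    CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();          /* prints the per-test lines and the Run Summary */
    CU_cleanup_registry();
    return CU_get_error();
}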
00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@133 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: bit_array 00:05:37.519 Test: test_1bit ...passed 00:05:37.519 Test: test_64bit ...passed 00:05:37.519 Test: test_find ...passed 00:05:37.519 Test: test_resize ...passed 00:05:37.519 Test: test_errors ...passed 00:05:37.519 Test: test_count ...passed 00:05:37.519 Test: test_mask_store_load ...passed 00:05:37.519 Test: test_mask_clear ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 tests 8 8 8 0 0 00:05:37.519 asserts 5075 5075 5075 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: cpuset 00:05:37.519 Test: test_cpuset ...passed 00:05:37.519 Test: test_cpuset_parse ...[2024-05-14 21:47:38.036917] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:05:37.519 [2024-05-14 21:47:38.037070] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:05:37.519 [2024-05-14 21:47:38.037093] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:05:37.519 [2024-05-14 21:47:38.037103] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:05:37.519 [2024-05-14 21:47:38.037111] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:05:37.519 [2024-05-14 21:47:38.037119] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:05:37.519 passed 00:05:37.519 Test: test_cpuset_fmt ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 tests 3 3 3 0 0 00:05:37.519 asserts 65 65 65 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 00:05:37.519 [2024-05-14 21:47:38.037128] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:05:37.519 [2024-05-14 21:47:38.037137] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: crc16 00:05:37.519 Test: test_crc16_t10dif ...passed 00:05:37.519 Test: test_crc16_t10dif_seed ...passed 00:05:37.519 Test: test_crc16_t10dif_copy ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 
tests 3 3 3 0 0 00:05:37.519 asserts 5 5 5 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: crc32_ieee 00:05:37.519 Test: test_crc32_ieee ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 tests 1 1 1 0 0 00:05:37.519 asserts 1 1 1 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: crc32c 00:05:37.519 Test: test_crc32c ...passed 00:05:37.519 Test: test_crc32c_nvme ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 tests 2 2 2 0 0 00:05:37.519 asserts 16 16 16 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: crc64 00:05:37.519 Test: test_crc64_nvme ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 tests 1 1 1 0 0 00:05:37.519 asserts 4 4 4 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: string 00:05:37.519 Test: test_parse_ip_addr ...passed 00:05:37.519 Test: test_str_chomp ...passed 00:05:37.519 Test: test_parse_capacity ...passed 00:05:37.519 Test: test_sprintf_append_realloc ...passed 00:05:37.519 Test: test_strtol ...passed 00:05:37.519 Test: test_strtoll ...passed 00:05:37.519 Test: test_strarray ...passed 00:05:37.519 Test: test_strcpy_replace ...passed 00:05:37.519 00:05:37.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.519 suites 1 1 n/a 0 0 00:05:37.519 tests 8 8 8 0 0 00:05:37.519 asserts 161 161 161 0 n/a 00:05:37.519 00:05:37.519 Elapsed time = 0.000 seconds 00:05:37.519 21:47:38 unittest.unittest_util -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:05:37.519 00:05:37.519 00:05:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.519 http://cunit.sourceforge.net/ 00:05:37.519 00:05:37.519 00:05:37.519 Suite: dif 00:05:37.519 Test: dif_generate_and_verify_test ...[2024-05-14 21:47:38.067235] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:37.519 [2024-05-14 21:47:38.067560] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:37.519 [2024-05-14 21:47:38.067638] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:37.519 [2024-05-14 21:47:38.067709] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:37.519 [2024-05-14 21:47:38.067779] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:37.519 passed 00:05:37.519 Test: dif_disable_check_test ...[2024-05-14 21:47:38.067848] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:37.519 [2024-05-14 21:47:38.068105] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:37.519 [2024-05-14 21:47:38.068194] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:37.519 [2024-05-14 21:47:38.068279] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:37.519 passed 00:05:37.519 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-05-14 21:47:38.068546] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:05:37.519 [2024-05-14 21:47:38.068638] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:05:37.519 [2024-05-14 21:47:38.068728] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:05:37.519 [2024-05-14 21:47:38.068828] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:05:37.519 [2024-05-14 21:47:38.068935] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:37.519 [2024-05-14 21:47:38.069021] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:37.519 [2024-05-14 21:47:38.069123] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:37.519 [2024-05-14 21:47:38.069236] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:37.520 [2024-05-14 21:47:38.069326] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:37.520 passed 00:05:37.520 Test: dif_apptag_mask_test ...[2024-05-14 21:47:38.069399] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:37.520 [2024-05-14 21:47:38.069468] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:37.520 [2024-05-14 21:47:38.069544] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:37.520 [2024-05-14 21:47:38.069616] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:37.520 passed 00:05:37.520 Test: dif_sec_512_md_0_error_test ...passed 00:05:37.520 Test: dif_sec_4096_md_0_error_test ...passed 00:05:37.520 Test: dif_sec_4100_md_128_error_test ...passed 00:05:37.520 Test: dif_guard_seed_test ...passed 00:05:37.520 Test: dif_guard_value_test ...passed 00:05:37.520 Test: dif_disable_sec_512_md_8_single_iov_test ...[2024-05-14 21:47:38.069672] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:37.520 [2024-05-14 21:47:38.069691] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:37.520 [2024-05-14 21:47:38.069705] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:37.520 [2024-05-14 21:47:38.069722] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:05:37.520 [2024-05-14 21:47:38.069736] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:05:37.520 passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:37.520 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:37.520 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-14 21:47:38.075355] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd5c, Actual=fd4c 00:05:37.520 [2024-05-14 21:47:38.075670] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe31, Actual=fe21 00:05:37.520 [2024-05-14 21:47:38.076014] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.520 [2024-05-14 
21:47:38.076333] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.076650] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4d 00:05:37.520 [2024-05-14 21:47:38.076966] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4d 00:05:37.520 [2024-05-14 21:47:38.077289] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=23b8 00:05:37.520 [2024-05-14 21:47:38.077537] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=2690 00:05:37.520 [2024-05-14 21:47:38.077791] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753fd, Actual=1ab753ed 00:05:37.520 [2024-05-14 21:47:38.078108] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574670, Actual=38574660 00:05:37.520 [2024-05-14 21:47:38.078423] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.078737] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.079053] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=100000005d 00:05:37.520 [2024-05-14 21:47:38.079375] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=100000005d 00:05:37.520 [2024-05-14 21:47:38.079689] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=6d43bec 00:05:37.520 [2024-05-14 21:47:38.079939] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=ee46d7b 00:05:37.520 [2024-05-14 21:47:38.080186] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.520 [2024-05-14 21:47:38.080499] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:05:37.520 [2024-05-14 21:47:38.080811] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.081123] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.081444] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10005d 00:05:37.520 [2024-05-14 21:47:38.081758] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10005d 00:05:37.520 [2024-05-14 21:47:38.082072] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.520 [2024-05-14 21:47:38.082319] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=df46ba6a7ab915af 00:05:37.520 passed 00:05:37.520 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-05-14 21:47:38.082436] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:05:37.520 [2024-05-14 21:47:38.082478] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:05:37.520 [2024-05-14 21:47:38.082519] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.082560] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.082601] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.520 [2024-05-14 21:47:38.082641] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.520 [2024-05-14 21:47:38.082683] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=23b8 00:05:37.520 [2024-05-14 21:47:38.082713] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2690 00:05:37.520 [2024-05-14 21:47:38.082745] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:05:37.520 [2024-05-14 21:47:38.082786] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:05:37.520 [2024-05-14 21:47:38.082826] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.082867] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.082908] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.520 [2024-05-14 21:47:38.082948] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.520 [2024-05-14 21:47:38.082989] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6d43bec 00:05:37.520 [2024-05-14 21:47:38.083019] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ee46d7b 00:05:37.520 [2024-05-14 21:47:38.083050] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.520 [2024-05-14 21:47:38.083091] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:05:37.520 [2024-05-14 21:47:38.083135] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.083177] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.083218] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.520 passed 00:05:37.520 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-05-14 21:47:38.083258] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.520 [2024-05-14 21:47:38.083300] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.520 [2024-05-14 21:47:38.083330] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=df46ba6a7ab915af 00:05:37.520 [2024-05-14 21:47:38.083364] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:05:37.520 [2024-05-14 21:47:38.083405] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:05:37.520 [2024-05-14 21:47:38.083446] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.083487] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.520 [2024-05-14 21:47:38.083528] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.520 [2024-05-14 21:47:38.083568] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.520 [2024-05-14 21:47:38.083609] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=23b8 00:05:37.521 [2024-05-14 21:47:38.083639] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2690 00:05:37.521 [2024-05-14 21:47:38.083669] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:05:37.521 [2024-05-14 21:47:38.083710] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:05:37.521 [2024-05-14 21:47:38.083751] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.083798] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.083840] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.521 [2024-05-14 21:47:38.083881] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.521 [2024-05-14 21:47:38.083921] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6d43bec 00:05:37.521 [2024-05-14 21:47:38.083951] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ee46d7b 00:05:37.521 [2024-05-14 21:47:38.083983] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.521 [2024-05-14 21:47:38.084023] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:05:37.521 [2024-05-14 21:47:38.084064] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.084104] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.084145] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.521 [2024-05-14 21:47:38.084185] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.521 passed 00:05:37.521 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-05-14 21:47:38.084226] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.521 [2024-05-14 21:47:38.084256] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=df46ba6a7ab915af 00:05:37.521 [2024-05-14 21:47:38.084290] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:05:37.521 [2024-05-14 21:47:38.084330] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:05:37.521 [2024-05-14 21:47:38.084371] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.084411] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.084452] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.521 [2024-05-14 21:47:38.084492] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.521 [2024-05-14 21:47:38.084533] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, 
Actual=23b8 00:05:37.521 [2024-05-14 21:47:38.084564] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2690 00:05:37.521 [2024-05-14 21:47:38.084594] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:05:37.521 [2024-05-14 21:47:38.084635] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:05:37.521 [2024-05-14 21:47:38.084676] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.084718] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.084760] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.521 [2024-05-14 21:47:38.084801] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.521 [2024-05-14 21:47:38.084842] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6d43bec 00:05:37.521 [2024-05-14 21:47:38.084872] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ee46d7b 00:05:37.521 [2024-05-14 21:47:38.084903] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.521 passed 00:05:37.521 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-05-14 21:47:38.084946] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:05:37.521 [2024-05-14 21:47:38.084988] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.085029] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.085069] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.521 [2024-05-14 21:47:38.085110] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.521 [2024-05-14 21:47:38.085150] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.521 [2024-05-14 21:47:38.085180] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=df46ba6a7ab915af 00:05:37.521 [2024-05-14 21:47:38.085230] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:05:37.521 [2024-05-14 21:47:38.085271] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:05:37.521 [2024-05-14 21:47:38.085311] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.085352] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.085393] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.521 passed 00:05:37.521 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-05-14 21:47:38.085433] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.521 [2024-05-14 21:47:38.085474] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=23b8 00:05:37.521 [2024-05-14 21:47:38.085508] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2690 00:05:37.521 [2024-05-14 21:47:38.085541] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:05:37.521 [2024-05-14 21:47:38.085582] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:05:37.521 [2024-05-14 21:47:38.085622] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.085663] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.521 [2024-05-14 21:47:38.085704] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.521 [2024-05-14 21:47:38.085745] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.521 [2024-05-14 21:47:38.085787] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6d43bec 00:05:37.521 [2024-05-14 21:47:38.085817] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ee46d7b 00:05:37.521 [2024-05-14 21:47:38.085848] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.521 [2024-05-14 21:47:38.085889] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:05:37.522 [2024-05-14 21:47:38.085929] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.085969] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.086026] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.522 [2024-05-14 21:47:38.086067] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.522 passed 00:05:37.522 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-05-14 21:47:38.086108] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.522 [2024-05-14 21:47:38.086139] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=df46ba6a7ab915af 00:05:37.522 [2024-05-14 21:47:38.086172] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:05:37.522 [2024-05-14 21:47:38.086213] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:05:37.522 [2024-05-14 21:47:38.086254] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.086294] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 passed 00:05:37.522 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-05-14 21:47:38.086335] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.522 [2024-05-14 21:47:38.086376] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:05:37.522 [2024-05-14 21:47:38.086416] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=23b8 00:05:37.522 [2024-05-14 21:47:38.086446] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2690 00:05:37.522 [2024-05-14 21:47:38.086480] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:05:37.522 [2024-05-14 21:47:38.086520] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:05:37.522 [2024-05-14 21:47:38.086561] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.086601] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.086642] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.522 [2024-05-14 21:47:38.086683] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:05:37.522 [2024-05-14 21:47:38.086723] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=1ab753ed, Actual=6d43bec 00:05:37.522 [2024-05-14 21:47:38.086753] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ee46d7b 00:05:37.522 [2024-05-14 21:47:38.086784] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.522 [2024-05-14 21:47:38.086825] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:05:37.522 [2024-05-14 21:47:38.086866] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.086907] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.086947] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.522 passed 00:05:37.522 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-05-14 21:47:38.086992] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:05:37.522 [2024-05-14 21:47:38.087035] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.522 [2024-05-14 21:47:38.087065] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=df46ba6a7ab915af 00:05:37.522 passed 00:05:37.522 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:37.522 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:37.522 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:37.522 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:37.522 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:37.522 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:37.522 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:37.522 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:37.522 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-14 21:47:38.092658] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd5c, Actual=fd4c 00:05:37.522 [2024-05-14 21:47:38.092839] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=dd22, Actual=dd32 00:05:37.522 [2024-05-14 21:47:38.093015] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.093203] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.093380] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4d 00:05:37.522 [2024-05-14 21:47:38.093550] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: 
*ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4d 00:05:37.522 [2024-05-14 21:47:38.093733] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=23b8 00:05:37.522 [2024-05-14 21:47:38.093906] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=29f0 00:05:37.522 [2024-05-14 21:47:38.094080] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753fd, Actual=1ab753ed 00:05:37.522 [2024-05-14 21:47:38.094261] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=808d623f, Actual=808d622f 00:05:37.522 [2024-05-14 21:47:38.094438] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.094612] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.094790] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=100000005d 00:05:37.522 [2024-05-14 21:47:38.094962] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=100000005d 00:05:37.522 [2024-05-14 21:47:38.095136] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=6d43bec 00:05:37.522 [2024-05-14 21:47:38.095310] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=6c8e4e83 00:05:37.522 [2024-05-14 21:47:38.095484] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.522 [2024-05-14 21:47:38.095659] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=82ee31f2c4551ded, Actual=82ee31e2c4551ded 00:05:37.522 [2024-05-14 21:47:38.095833] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.096016] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.096192] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10005d 00:05:37.522 [2024-05-14 21:47:38.096367] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10005d 00:05:37.522 [2024-05-14 21:47:38.096541] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.522 [2024-05-14 21:47:38.096716] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=ea8d5fbf5d2506f9 00:05:37.522 passed 00:05:37.522 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-14 21:47:38.096769] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd5c, Actual=fd4c 00:05:37.522 [2024-05-14 21:47:38.096813] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bea3, Actual=beb3 00:05:37.522 [2024-05-14 21:47:38.096856] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.096898] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.096941] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:05:37.522 [2024-05-14 21:47:38.096984] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:05:37.522 [2024-05-14 21:47:38.097027] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=23b8 00:05:37.522 [2024-05-14 21:47:38.097072] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=4a71 00:05:37.522 [2024-05-14 21:47:38.097116] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753fd, Actual=1ab753ed 00:05:37.522 [2024-05-14 21:47:38.097159] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=410d57ca, Actual=410d57da 00:05:37.522 [2024-05-14 21:47:38.097209] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.097252] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.097295] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:05:37.522 [2024-05-14 21:47:38.097338] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:05:37.522 [2024-05-14 21:47:38.097381] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6d43bec 00:05:37.522 [2024-05-14 21:47:38.097424] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=ad0e7b76 00:05:37.522 [2024-05-14 21:47:38.097467] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.522 [2024-05-14 21:47:38.097510] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=770c3e61e2ba99b2, Actual=770c3e71e2ba99b2 00:05:37.522 [2024-05-14 21:47:38.097553] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.097597] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.522 [2024-05-14 21:47:38.097643] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059 00:05:37.522 [2024-05-14 21:47:38.097690] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059 00:05:37.522 passed 00:05:37.522 Test: dix_sec_512_md_0_error ...passed 00:05:37.522 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:05:37.522 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...[2024-05-14 21:47:38.097734] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.522 [2024-05-14 21:47:38.097778] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=1f6f502c7bca82a6 00:05:37.522 [2024-05-14 21:47:38.097798] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:37.522 passed 00:05:37.522 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:37.522 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:37.522 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:37.522 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:37.522 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:37.522 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:37.522 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:37.522 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-14 21:47:38.103182] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd5c, Actual=fd4c 00:05:37.787 [2024-05-14 21:47:38.103363] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=dd22, Actual=dd32 00:05:37.787 [2024-05-14 21:47:38.103539] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.787 [2024-05-14 21:47:38.103708] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.787 [2024-05-14 21:47:38.103882] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4d 00:05:37.787 [2024-05-14 21:47:38.104057] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4d 00:05:37.787 [2024-05-14 21:47:38.104228] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=23b8 00:05:37.787 [2024-05-14 21:47:38.104399] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=29f0 00:05:37.787 [2024-05-14 21:47:38.104572] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753fd, Actual=1ab753ed 00:05:37.787 [2024-05-14 21:47:38.104743] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, 
Expected=808d623f, Actual=808d622f 00:05:37.787 [2024-05-14 21:47:38.104915] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.787 [2024-05-14 21:47:38.105093] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.787 [2024-05-14 21:47:38.105286] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=100000005d 00:05:37.787 [2024-05-14 21:47:38.105464] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=100000005d 00:05:37.787 [2024-05-14 21:47:38.105635] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=6d43bec 00:05:37.787 [2024-05-14 21:47:38.105805] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=6c8e4e83 00:05:37.787 [2024-05-14 21:47:38.105976] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.787 [2024-05-14 21:47:38.106154] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=82ee31f2c4551ded, Actual=82ee31e2c4551ded 00:05:37.787 [2024-05-14 21:47:38.106325] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.787 [2024-05-14 21:47:38.106496] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=98 00:05:37.787 [2024-05-14 21:47:38.106669] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10005d 00:05:37.787 [2024-05-14 21:47:38.106841] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10005d 00:05:37.787 [2024-05-14 21:47:38.107012] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.787 [2024-05-14 21:47:38.107183] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=ea8d5fbf5d2506f9 00:05:37.787 passed 00:05:37.787 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-14 21:47:38.107235] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd5c, Actual=fd4c 00:05:37.787 [2024-05-14 21:47:38.107279] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bea3, Actual=beb3 00:05:37.787 [2024-05-14 21:47:38.107321] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.788 [2024-05-14 21:47:38.107364] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.788 [2024-05-14 21:47:38.107407] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:05:37.788 [2024-05-14 21:47:38.107456] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:05:37.788 [2024-05-14 21:47:38.107499] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=23b8 00:05:37.788 [2024-05-14 21:47:38.107541] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=4a71 00:05:37.788 [2024-05-14 21:47:38.107584] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753fd, Actual=1ab753ed 00:05:37.788 [2024-05-14 21:47:38.107627] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=410d57ca, Actual=410d57da 00:05:37.788 [2024-05-14 21:47:38.107669] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.788 [2024-05-14 21:47:38.107711] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.788 [2024-05-14 21:47:38.107760] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:05:37.788 [2024-05-14 21:47:38.107802] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:05:37.788 [2024-05-14 21:47:38.107844] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6d43bec 00:05:37.788 [2024-05-14 21:47:38.107886] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=ad0e7b76 00:05:37.788 [2024-05-14 21:47:38.107929] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:05:37.788 [2024-05-14 21:47:38.107972] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=770c3e61e2ba99b2, Actual=770c3e71e2ba99b2 00:05:37.788 [2024-05-14 21:47:38.108015] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.788 [2024-05-14 21:47:38.108057] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:05:37.788 [2024-05-14 21:47:38.108099] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059 00:05:37.788 [2024-05-14 21:47:38.108142] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059 00:05:37.788 [2024-05-14 21:47:38.108184] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=d3efde088cbc1a64 00:05:37.788 passed 00:05:37.788 Test: set_md_interleave_iovs_test ...[2024-05-14 21:47:38.108227] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=1f6f502c7bca82a6 00:05:37.788 passed 00:05:37.788 Test: set_md_interleave_iovs_split_test ...passed 00:05:37.788 Test: dif_generate_stream_pi_16_test ...passed 00:05:37.788 Test: dif_generate_stream_test ...passed 00:05:37.788 Test: set_md_interleave_iovs_alignment_test ...[2024-05-14 21:47:38.109108] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:05:37.788 passed 00:05:37.788 Test: dif_generate_split_test ...passed 00:05:37.788 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:05:37.788 Test: dif_verify_split_test ...passed 00:05:37.788 Test: dif_verify_stream_multi_segments_test ...passed 00:05:37.788 Test: update_crc32c_pi_16_test ...passed 00:05:37.788 Test: update_crc32c_test ...passed 00:05:37.788 Test: dif_update_crc32c_split_test ...passed 00:05:37.788 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:05:37.788 Test: get_range_with_md_test ...passed 00:05:37.788 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:05:37.788 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:05:37.788 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:37.788 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:05:37.788 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:05:37.788 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:37.788 Test: dif_generate_and_verify_unmap_test ...passed 00:05:37.788 00:05:37.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.788 suites 1 1 n/a 0 0 00:05:37.788 tests 79 79 79 0 0 00:05:37.788 asserts 3584 3584 3584 0 n/a 00:05:37.788 00:05:37.788 Elapsed time = 0.047 seconds 00:05:37.788 21:47:38 unittest.unittest_util -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:05:37.788 00:05:37.788 00:05:37.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.788 http://cunit.sourceforge.net/ 00:05:37.788 00:05:37.788 00:05:37.788 Suite: iov 00:05:37.788 Test: test_single_iov ...passed 00:05:37.788 Test: test_simple_iov ...passed 00:05:37.788 Test: test_complex_iov ...passed 00:05:37.788 Test: test_iovs_to_buf ...passed 00:05:37.788 Test: test_buf_to_iovs ...passed 00:05:37.788 Test: test_memset ...passed 00:05:37.788 Test: test_iov_one ...passed 00:05:37.788 Test: test_iov_xfer ...passed 00:05:37.788 00:05:37.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.788 suites 1 1 n/a 0 0 00:05:37.788 tests 8 8 8 0 0 00:05:37.788 asserts 156 156 156 0 n/a 00:05:37.788 00:05:37.788 Elapsed time = 0.000 seconds 00:05:37.788 21:47:38 unittest.unittest_util -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:05:37.788 00:05:37.788 00:05:37.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.788 http://cunit.sourceforge.net/ 00:05:37.788 00:05:37.788 00:05:37.788 Suite: math 00:05:37.788 Test: test_serial_number_arithmetic ...passed 00:05:37.788 Suite: erase 00:05:37.788 Test: test_memset_s ...passed 00:05:37.788 00:05:37.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.788 suites 2 2 n/a 0 0 00:05:37.788 tests 2 2 2 0 0 00:05:37.788 asserts 18 18 18 0 n/a 00:05:37.788 00:05:37.788 Elapsed time = 0.000 seconds 00:05:37.788 
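Note on the dif_ut output above: the many "Failed to compare Guard/App Tag/Ref Tag" *ERROR* lines are the expected negative-path output of these tests, not real failures (the suite reports all tests passed). The 16-bit guard values being compared (e.g. Expected=fd4c, Actual=23b8) are T10 DIF guard tags, i.e. a CRC-16 over the data block. Below is a minimal, self-contained C sketch of that checksum, assuming the standard CRC-16/T10-DIF polynomial 0x8BB7; the helper and variable names are illustrative only and this is not the SPDK implementation.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative sketch, not the SPDK code: the T10 DIF guard tag is a CRC-16
 * over the block data (polynomial 0x8BB7, MSB-first, initial value 0).
 * Corrupting even a single bit changes the CRC, which is what the
 * "Failed to compare Guard: ... Expected=..., Actual=..." log entries report. */
static uint16_t t10dif_guard(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    uint8_t block[512] = {0};

    uint16_t expected = t10dif_guard(block, sizeof(block)); /* guard stored at write time */
    block[100] ^= 0x01;                                     /* inject a single-bit data error */
    uint16_t actual = t10dif_guard(block, sizeof(block));   /* guard recomputed at verify time */

    printf("Expected=%04x, Actual=%04x\n", expected, actual);
    return 0;
}
```

For the 512-byte-block cases (the dif_sec_512_* tests) the guard covers the 512 data bytes, while the App Tag and Ref Tag comparisons in the log are plain 16-bit and 32-bit equality checks against the values carried in the 8-byte protection-information field.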
21:47:38 unittest.unittest_util -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:05:37.788 00:05:37.788 00:05:37.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.788 http://cunit.sourceforge.net/ 00:05:37.788 00:05:37.788 00:05:37.788 Suite: pipe 00:05:37.788 Test: test_create_destroy ...passed 00:05:37.788 Test: test_write_get_buffer ...passed 00:05:37.788 Test: test_write_advance ...passed 00:05:37.788 Test: test_read_get_buffer ...passed 00:05:37.788 Test: test_read_advance ...passed 00:05:37.788 Test: test_data ...passed 00:05:37.788 00:05:37.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.788 suites 1 1 n/a 0 0 00:05:37.788 tests 6 6 6 0 0 00:05:37.788 asserts 251 251 251 0 n/a 00:05:37.788 00:05:37.788 Elapsed time = 0.000 seconds 00:05:37.788 21:47:38 unittest.unittest_util -- unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:05:37.788 00:05:37.788 00:05:37.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.788 http://cunit.sourceforge.net/ 00:05:37.788 00:05:37.788 00:05:37.788 Suite: xor 00:05:37.788 Test: test_xor_gen ...passed 00:05:37.788 00:05:37.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.788 suites 1 1 n/a 0 0 00:05:37.788 tests 1 1 1 0 0 00:05:37.788 asserts 17 17 17 0 n/a 00:05:37.788 00:05:37.788 Elapsed time = 0.000 seconds 00:05:37.788 00:05:37.788 real 0m0.116s 00:05:37.788 user 0m0.061s 00:05:37.788 sys 0m0.055s 00:05:37.788 21:47:38 unittest.unittest_util -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.788 ************************************ 00:05:37.788 END TEST unittest_util 00:05:37.788 ************************************ 00:05:37.788 21:47:38 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 21:47:38 unittest -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:37.788 21:47:38 unittest -- unit/unittest.sh@285 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:37.788 21:47:38 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.788 21:47:38 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.788 21:47:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 ************************************ 00:05:37.788 START TEST unittest_dma 00:05:37.788 ************************************ 00:05:37.788 21:47:38 unittest.unittest_dma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:37.788 00:05:37.788 00:05:37.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.788 http://cunit.sourceforge.net/ 00:05:37.788 00:05:37.788 00:05:37.788 Suite: dma_suite 00:05:37.788 Test: test_dma ...passed 00:05:37.788 00:05:37.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.788 suites 1 1 n/a 0 0 00:05:37.788 tests 1 1 1 0 0 00:05:37.788 asserts 54 54 54 0 n/a 00:05:37.789 00:05:37.789 Elapsed time = 0.000 seconds 00:05:37.789 [2024-05-14 21:47:38.188837] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:05:37.789 00:05:37.789 real 0m0.005s 00:05:37.789 user 0m0.000s 00:05:37.789 sys 0m0.008s 00:05:37.789 21:47:38 unittest.unittest_dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.789 21:47:38 unittest.unittest_dma -- common/autotest_common.sh@10 
-- # set +x 00:05:37.789 ************************************ 00:05:37.789 END TEST unittest_dma 00:05:37.789 ************************************ 00:05:37.789 21:47:38 unittest -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 ************************************ 00:05:37.789 START TEST unittest_init 00:05:37.789 ************************************ 00:05:37.789 21:47:38 unittest.unittest_init -- common/autotest_common.sh@1121 -- # unittest_init 00:05:37.789 21:47:38 unittest.unittest_init -- unit/unittest.sh@148 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:05:37.789 00:05:37.789 00:05:37.789 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.789 http://cunit.sourceforge.net/ 00:05:37.789 00:05:37.789 00:05:37.789 Suite: subsystem_suite 00:05:37.789 Test: subsystem_sort_test_depends_on_single ...passed 00:05:37.789 Test: subsystem_sort_test_depends_on_multiple ...passed 00:05:37.789 Test: subsystem_sort_test_missing_dependency ...passed 00:05:37.789 00:05:37.789 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.789 suites 1 1 n/a 0 0 00:05:37.789 tests 3 3 3 0 0 00:05:37.789 asserts 20 20 20 0 n/a 00:05:37.789 00:05:37.789 Elapsed time = 0.000 seconds 00:05:37.789 [2024-05-14 21:47:38.235233] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:05:37.789 [2024-05-14 21:47:38.235401] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:05:37.789 00:05:37.789 real 0m0.005s 00:05:37.789 user 0m0.004s 00:05:37.789 sys 0m0.001s 00:05:37.789 21:47:38 unittest.unittest_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.789 21:47:38 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 ************************************ 00:05:37.789 END TEST unittest_init 00:05:37.789 ************************************ 00:05:37.789 21:47:38 unittest -- unit/unittest.sh@288 -- # run_test unittest_keyring /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 ************************************ 00:05:37.789 START TEST unittest_keyring 00:05:37.789 ************************************ 00:05:37.789 21:47:38 unittest.unittest_keyring -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:05:37.789 00:05:37.789 00:05:37.789 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.789 http://cunit.sourceforge.net/ 00:05:37.789 00:05:37.789 00:05:37.789 Suite: keyring 00:05:37.789 Test: test_keyring_add_remove ...[2024-05-14 21:47:38.277068] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:05:37.789 [2024-05-14 21:47:38.277340] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:05:37.789 passed 00:05:37.789 Test: 
test_keyring_get_put ...passed 00:05:37.789 00:05:37.789 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.789 suites 1 1 n/a 0 0 00:05:37.789 tests 2 2 2 0 0 00:05:37.789 asserts 44 44 44 0 n/a 00:05:37.789 00:05:37.789 Elapsed time = 0.000 seconds 00:05:37.789 [2024-05-14 21:47:38.277367] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:05:37.789 00:05:37.789 real 0m0.005s 00:05:37.789 user 0m0.002s 00:05:37.789 sys 0m0.004s 00:05:37.789 21:47:38 unittest.unittest_keyring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.789 ************************************ 00:05:37.789 21:47:38 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 END TEST unittest_keyring 00:05:37.789 ************************************ 00:05:37.789 21:47:38 unittest -- unit/unittest.sh@290 -- # '[' no = yes ']' 00:05:37.789 00:05:37.789 00:05:37.789 21:47:38 unittest -- unit/unittest.sh@303 -- # set +x 00:05:37.789 ===================== 00:05:37.789 All unit tests passed 00:05:37.789 ===================== 00:05:37.789 WARN: lcov not installed or SPDK built without coverage! 00:05:37.789 WARN: neither valgrind nor ASAN is enabled! 00:05:37.789 00:05:37.789 00:05:37.789 00:05:37.789 real 0m16.518s 00:05:37.789 user 0m13.644s 00:05:37.789 sys 0m1.653s 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.789 21:47:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 ************************************ 00:05:37.789 END TEST unittest 00:05:37.789 ************************************ 00:05:37.789 21:47:38 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:37.789 21:47:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:37.789 21:47:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:37.789 21:47:38 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:37.789 21:47:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.789 21:47:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 21:47:38 -- spdk/autotest.sh@164 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.789 21:47:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.789 21:47:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.789 21:47:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 ************************************ 00:05:37.789 START TEST env 00:05:37.789 ************************************ 00:05:37.789 21:47:38 env -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.064 * Looking for test storage... 
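Note on the keyring_ut output above: the "Key 'key0' already exists" and "Failed to add key 'key0' to the keyring" *ERROR* lines are the expected result of registering the same key name twice. A rough, self-contained C sketch of that duplicate-name check follows; the toy_* types and function names are hypothetical and this is not SPDK's keyring code.

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>

#define TOY_MAX_KEYS 16

/* Hypothetical keyring holding key names; for illustration only. */
struct toy_keyring {
    const char *names[TOY_MAX_KEYS];
    int count;
};

/* Reject a second key with the same name, mirroring the
 * "Key 'key0' already exists" errors printed by keyring_ut above. */
static int toy_keyring_add(struct toy_keyring *kr, const char *name)
{
    for (int i = 0; i < kr->count; i++) {
        if (strcmp(kr->names[i], name) == 0) {
            fprintf(stderr, "Key '%s' already exists\n", name);
            return -EEXIST;
        }
    }
    if (kr->count == TOY_MAX_KEYS) {
        return -ENOMEM;
    }
    kr->names[kr->count++] = name;
    return 0;
}

int main(void)
{
    struct toy_keyring kr = { .count = 0 };

    toy_keyring_add(&kr, "key0");                 /* first add succeeds */
    return toy_keyring_add(&kr, "key0") ? 1 : 0;  /* duplicate add fails, as the test expects */
}
```

A caller treats the non-zero return as the add failing, which is what test_keyring_add_remove asserts when it registers 'key0' a second time.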
00:05:38.064 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:05:38.064 21:47:38 env -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.064 21:47:38 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.064 21:47:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.064 21:47:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.324 ************************************ 00:05:38.324 START TEST env_memory 00:05:38.324 ************************************ 00:05:38.324 21:47:38 env.env_memory -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.324 00:05:38.324 00:05:38.324 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.324 http://cunit.sourceforge.net/ 00:05:38.324 00:05:38.324 00:05:38.324 Suite: memory 00:05:38.324 Test: alloc and free memory map ...[2024-05-14 21:47:38.672282] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.324 passed 00:05:38.324 Test: mem map translation ...[2024-05-14 21:47:38.679448] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.324 [2024-05-14 21:47:38.679477] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.324 [2024-05-14 21:47:38.679492] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.324 [2024-05-14 21:47:38.679501] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.324 passed 00:05:38.324 Test: mem map registration ...[2024-05-14 21:47:38.688836] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:38.324 [2024-05-14 21:47:38.688859] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:38.324 passed 00:05:38.324 Test: mem map adjacent registrations ...passed 00:05:38.324 00:05:38.324 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.324 suites 1 1 n/a 0 0 00:05:38.324 tests 4 4 4 0 0 00:05:38.324 asserts 152 152 152 0 n/a 00:05:38.324 00:05:38.324 Elapsed time = 0.031 seconds 00:05:38.324 00:05:38.324 real 0m0.045s 00:05:38.324 user 0m0.041s 00:05:38.324 sys 0m0.008s 00:05:38.324 21:47:38 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.324 21:47:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:38.324 ************************************ 00:05:38.324 END TEST env_memory 00:05:38.324 ************************************ 00:05:38.324 21:47:38 env -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.324 21:47:38 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.324 21:47:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.324 21:47:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.324 ************************************ 00:05:38.324 START TEST env_vtophys 00:05:38.324 
************************************ 00:05:38.324 21:47:38 env.env_vtophys -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.324 EAL: lib.eal log level changed from notice to debug 00:05:38.324 EAL: Sysctl reports 10 cpus 00:05:38.324 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.324 EAL: Detected lcore 1 as core 0 on socket 0 00:05:38.324 EAL: Detected lcore 2 as core 0 on socket 0 00:05:38.324 EAL: Detected lcore 3 as core 0 on socket 0 00:05:38.325 EAL: Detected lcore 4 as core 0 on socket 0 00:05:38.325 EAL: Detected lcore 5 as core 0 on socket 0 00:05:38.325 EAL: Detected lcore 6 as core 0 on socket 0 00:05:38.325 EAL: Detected lcore 7 as core 0 on socket 0 00:05:38.325 EAL: Detected lcore 8 as core 0 on socket 0 00:05:38.325 EAL: Detected lcore 9 as core 0 on socket 0 00:05:38.325 EAL: Maximum logical cores by configuration: 128 00:05:38.325 EAL: Detected CPU lcores: 10 00:05:38.325 EAL: Detected NUMA nodes: 1 00:05:38.325 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:38.325 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:38.325 EAL: Checking presence of .so 'librte_eal.so' 00:05:38.325 EAL: Detected static linkage of DPDK 00:05:38.325 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.325 EAL: PCI scan found 10 devices 00:05:38.325 EAL: Specific IOVA mode is not requested, autodetecting 00:05:38.325 EAL: Selecting IOVA mode according to bus requests 00:05:38.325 EAL: Bus pci wants IOVA as 'PA' 00:05:38.325 EAL: Selected IOVA mode 'PA' 00:05:38.325 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:05:38.325 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.325 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x10009e3000) not respected! 00:05:38.325 EAL: This may cause issues with mapping memory into secondary processes 00:05:38.325 EAL: Virtual area found at 0x10009e3000 (size = 0x2e000) 00:05:38.325 EAL: Setting up physically contiguous memory... 00:05:38.325 EAL: Ask a virtual area of 0x1000 bytes 00:05:38.325 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x100122c000) not respected! 00:05:38.325 EAL: This may cause issues with mapping memory into secondary processes 00:05:38.325 EAL: Virtual area found at 0x100122c000 (size = 0x1000) 00:05:38.325 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:05:38.325 EAL: Ask a virtual area of 0xf0000000 bytes 00:05:38.325 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:05:38.325 EAL: This may cause issues with mapping memory into secondary processes 00:05:38.325 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:05:38.325 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:05:38.325 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x270000000, len 268435456 00:05:38.325 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x280000000, len 268435456 00:05:38.584 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x290000000, len 268435456 00:05:38.584 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x2a0000000, len 268435456 00:05:38.584 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x2b0000000, len 268435456 00:05:38.843 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x2c0000000, len 268435456 00:05:38.843 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x2d0000000, len 268435456 00:05:38.843 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x2e0000000, len 268435456 00:05:38.843 EAL: No shared files mode enabled, IPC is disabled 00:05:38.843 EAL: Added 2048M to heap on socket 0 00:05:38.843 EAL: TSC is not safe to use in SMP mode 00:05:38.843 EAL: TSC is not invariant 00:05:38.843 EAL: TSC frequency is ~2200008 KHz 00:05:38.843 EAL: Main lcore 0 is ready (tid=82cd77000;cpuset=[0]) 00:05:38.843 EAL: PCI scan found 10 devices 00:05:38.843 EAL: Registering mem event callbacks not supported 00:05:38.843 00:05:38.843 00:05:38.843 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.843 http://cunit.sourceforge.net/ 00:05:38.843 00:05:38.843 00:05:38.843 Suite: components_suite 00:05:38.843 Test: vtophys_malloc_test ...passed 00:05:39.411 Test: vtophys_spdk_malloc_test ...passed 00:05:39.411 00:05:39.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.411 suites 1 1 n/a 0 0 00:05:39.411 tests 2 2 2 0 0 00:05:39.411 asserts 497 497 497 0 n/a 00:05:39.411 00:05:39.411 Elapsed time = 0.375 seconds 00:05:39.411 00:05:39.411 real 0m1.013s 00:05:39.411 user 0m0.393s 00:05:39.411 sys 0m0.619s 00:05:39.411 ************************************ 00:05:39.411 END TEST env_vtophys 00:05:39.411 ************************************ 00:05:39.411 21:47:39 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.411 21:47:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:39.411 21:47:39 env -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.411 21:47:39 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.411 21:47:39 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.411 21:47:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.411 ************************************ 00:05:39.411 START TEST env_pci 00:05:39.411 ************************************ 00:05:39.411 21:47:39 env.env_pci -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.411 00:05:39.411 00:05:39.411 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.411 http://cunit.sourceforge.net/ 00:05:39.411 00:05:39.411 00:05:39.411 Suite: pci 00:05:39.411 Test: pci_hook ...passed 00:05:39.411 00:05:39.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.411 suites 1 1 n/a 0 0 00:05:39.411 tests 1 1 1 0 0 00:05:39.411 asserts 25 25 25 0 n/a 00:05:39.411 00:05:39.411 Elapsed time = 0.000 secondsEAL: Cannot find device (10000:00:01.0) 00:05:39.411 EAL: Failed to attach device on primary process 00:05:39.411 00:05:39.411 00:05:39.411 
real 0m0.009s 00:05:39.411 user 0m0.000s 00:05:39.411 sys 0m0.008s 00:05:39.411 21:47:39 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.411 ************************************ 00:05:39.412 END TEST env_pci 00:05:39.412 ************************************ 00:05:39.412 21:47:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:39.412 21:47:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.412 21:47:39 env -- env/env.sh@15 -- # uname 00:05:39.412 21:47:39 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:05:39.412 21:47:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:05:39.412 21:47:39 env -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:39.412 21:47:39 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.412 21:47:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.412 ************************************ 00:05:39.412 START TEST env_dpdk_post_init 00:05:39.412 ************************************ 00:05:39.412 21:47:39 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:05:39.412 EAL: Sysctl reports 10 cpus 00:05:39.412 EAL: Detected CPU lcores: 10 00:05:39.412 EAL: Detected NUMA nodes: 1 00:05:39.412 EAL: Detected static linkage of DPDK 00:05:39.412 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.412 EAL: Selected IOVA mode 'PA' 00:05:39.412 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:05:39.412 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x270000000, len 268435456 00:05:39.670 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x280000000, len 268435456 00:05:39.670 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x290000000, len 268435456 00:05:39.670 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x2a0000000, len 268435456 00:05:39.670 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x2b0000000, len 268435456 00:05:39.928 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x2c0000000, len 268435456 00:05:39.928 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x2d0000000, len 268435456 00:05:39.928 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x2e0000000, len 268435456 00:05:39.928 EAL: TSC is not safe to use in SMP mode 00:05:39.928 EAL: TSC is not invariant 00:05:39.928 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.928 [2024-05-14 21:47:40.409134] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:05:39.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:39.928 Starting DPDK initialization... 00:05:39.928 Starting SPDK post initialization... 00:05:39.928 SPDK NVMe probe 00:05:39.928 Attaching to 0000:00:10.0 00:05:39.928 Attached to 0000:00:10.0 00:05:39.928 Cleaning up... 
00:05:39.928 00:05:39.928 real 0m0.596s 00:05:39.928 user 0m0.004s 00:05:39.928 sys 0m0.587s 00:05:39.928 21:47:40 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.928 21:47:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.928 ************************************ 00:05:39.928 END TEST env_dpdk_post_init 00:05:39.928 ************************************ 00:05:39.928 21:47:40 env -- env/env.sh@26 -- # uname 00:05:39.928 21:47:40 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:05:39.928 00:05:39.928 real 0m2.137s 00:05:39.928 user 0m0.647s 00:05:39.928 sys 0m1.548s 00:05:39.928 21:47:40 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.928 21:47:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.928 ************************************ 00:05:39.928 END TEST env 00:05:39.928 ************************************ 00:05:40.187 21:47:40 -- spdk/autotest.sh@165 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.187 21:47:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.187 21:47:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.187 21:47:40 -- common/autotest_common.sh@10 -- # set +x 00:05:40.187 ************************************ 00:05:40.187 START TEST rpc 00:05:40.187 ************************************ 00:05:40.187 21:47:40 rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.187 * Looking for test storage... 00:05:40.187 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.187 21:47:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45859 00:05:40.187 21:47:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.187 21:47:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45859 00:05:40.187 21:47:40 rpc -- common/autotest_common.sh@827 -- # '[' -z 45859 ']' 00:05:40.187 21:47:40 rpc -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:40.187 21:47:40 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.187 21:47:40 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.187 21:47:40 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.187 21:47:40 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.187 21:47:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.187 [2024-05-14 21:47:40.681627] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:40.187 [2024-05-14 21:47:40.681807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:40.754 EAL: TSC is not safe to use in SMP mode 00:05:40.754 EAL: TSC is not invariant 00:05:40.754 [2024-05-14 21:47:41.235055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.754 [2024-05-14 21:47:41.336383] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:40.754 [2024-05-14 21:47:41.339105] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.754 [2024-05-14 21:47:41.339148] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45859' to capture a snapshot of events at runtime. 
00:05:40.754 [2024-05-14 21:47:41.339183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.322 21:47:41 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.322 21:47:41 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:41.322 21:47:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.322 21:47:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.322 21:47:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.322 21:47:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.322 21:47:41 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.322 21:47:41 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.322 21:47:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.322 ************************************ 00:05:41.322 START TEST rpc_integrity 00:05:41.322 ************************************ 00:05:41.322 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:41.322 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.322 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.322 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.322 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.322 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.322 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.322 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.322 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.322 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.322 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.322 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.322 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.323 { 00:05:41.323 "name": "Malloc0", 00:05:41.323 "aliases": [ 00:05:41.323 "97018a08-123b-11ef-8c90-4585f0cfab08" 00:05:41.323 ], 00:05:41.323 "product_name": "Malloc disk", 00:05:41.323 "block_size": 512, 00:05:41.323 "num_blocks": 16384, 00:05:41.323 "uuid": "97018a08-123b-11ef-8c90-4585f0cfab08", 00:05:41.323 "assigned_rate_limits": { 00:05:41.323 "rw_ios_per_sec": 0, 00:05:41.323 "rw_mbytes_per_sec": 0, 00:05:41.323 "r_mbytes_per_sec": 0, 00:05:41.323 "w_mbytes_per_sec": 0 00:05:41.323 }, 00:05:41.323 "claimed": false, 00:05:41.323 "zoned": false, 00:05:41.323 "supported_io_types": { 00:05:41.323 "read": true, 00:05:41.323 "write": true, 00:05:41.323 
"unmap": true, 00:05:41.323 "write_zeroes": true, 00:05:41.323 "flush": true, 00:05:41.323 "reset": true, 00:05:41.323 "compare": false, 00:05:41.323 "compare_and_write": false, 00:05:41.323 "abort": true, 00:05:41.323 "nvme_admin": false, 00:05:41.323 "nvme_io": false 00:05:41.323 }, 00:05:41.323 "memory_domains": [ 00:05:41.323 { 00:05:41.323 "dma_device_id": "system", 00:05:41.323 "dma_device_type": 1 00:05:41.323 }, 00:05:41.323 { 00:05:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.323 "dma_device_type": 2 00:05:41.323 } 00:05:41.323 ], 00:05:41.323 "driver_specific": {} 00:05:41.323 } 00:05:41.323 ]' 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 [2024-05-14 21:47:41.796541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.323 [2024-05-14 21:47:41.796599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.323 [2024-05-14 21:47:41.797246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d569780 00:05:41.323 [2024-05-14 21:47:41.797276] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.323 [2024-05-14 21:47:41.797983] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.323 [2024-05-14 21:47:41.798018] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.323 Passthru0 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.323 { 00:05:41.323 "name": "Malloc0", 00:05:41.323 "aliases": [ 00:05:41.323 "97018a08-123b-11ef-8c90-4585f0cfab08" 00:05:41.323 ], 00:05:41.323 "product_name": "Malloc disk", 00:05:41.323 "block_size": 512, 00:05:41.323 "num_blocks": 16384, 00:05:41.323 "uuid": "97018a08-123b-11ef-8c90-4585f0cfab08", 00:05:41.323 "assigned_rate_limits": { 00:05:41.323 "rw_ios_per_sec": 0, 00:05:41.323 "rw_mbytes_per_sec": 0, 00:05:41.323 "r_mbytes_per_sec": 0, 00:05:41.323 "w_mbytes_per_sec": 0 00:05:41.323 }, 00:05:41.323 "claimed": true, 00:05:41.323 "claim_type": "exclusive_write", 00:05:41.323 "zoned": false, 00:05:41.323 "supported_io_types": { 00:05:41.323 "read": true, 00:05:41.323 "write": true, 00:05:41.323 "unmap": true, 00:05:41.323 "write_zeroes": true, 00:05:41.323 "flush": true, 00:05:41.323 "reset": true, 00:05:41.323 "compare": false, 00:05:41.323 "compare_and_write": false, 00:05:41.323 "abort": true, 00:05:41.323 "nvme_admin": false, 00:05:41.323 "nvme_io": false 00:05:41.323 }, 00:05:41.323 "memory_domains": [ 00:05:41.323 { 00:05:41.323 "dma_device_id": "system", 00:05:41.323 "dma_device_type": 1 00:05:41.323 }, 00:05:41.323 { 00:05:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:05:41.323 "dma_device_type": 2 00:05:41.323 } 00:05:41.323 ], 00:05:41.323 "driver_specific": {} 00:05:41.323 }, 00:05:41.323 { 00:05:41.323 "name": "Passthru0", 00:05:41.323 "aliases": [ 00:05:41.323 "9d9efeb7-1d52-4e59-89f9-6ae59f6928de" 00:05:41.323 ], 00:05:41.323 "product_name": "passthru", 00:05:41.323 "block_size": 512, 00:05:41.323 "num_blocks": 16384, 00:05:41.323 "uuid": "9d9efeb7-1d52-4e59-89f9-6ae59f6928de", 00:05:41.323 "assigned_rate_limits": { 00:05:41.323 "rw_ios_per_sec": 0, 00:05:41.323 "rw_mbytes_per_sec": 0, 00:05:41.323 "r_mbytes_per_sec": 0, 00:05:41.323 "w_mbytes_per_sec": 0 00:05:41.323 }, 00:05:41.323 "claimed": false, 00:05:41.323 "zoned": false, 00:05:41.323 "supported_io_types": { 00:05:41.323 "read": true, 00:05:41.323 "write": true, 00:05:41.323 "unmap": true, 00:05:41.323 "write_zeroes": true, 00:05:41.323 "flush": true, 00:05:41.323 "reset": true, 00:05:41.323 "compare": false, 00:05:41.323 "compare_and_write": false, 00:05:41.323 "abort": true, 00:05:41.323 "nvme_admin": false, 00:05:41.323 "nvme_io": false 00:05:41.323 }, 00:05:41.323 "memory_domains": [ 00:05:41.323 { 00:05:41.323 "dma_device_id": "system", 00:05:41.323 "dma_device_type": 1 00:05:41.323 }, 00:05:41.323 { 00:05:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.323 "dma_device_type": 2 00:05:41.323 } 00:05:41.323 ], 00:05:41.323 "driver_specific": { 00:05:41.323 "passthru": { 00:05:41.323 "name": "Passthru0", 00:05:41.323 "base_bdev_name": "Malloc0" 00:05:41.323 } 00:05:41.323 } 00:05:41.323 } 00:05:41.323 ]' 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.323 21:47:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.323 00:05:41.323 real 0m0.131s 00:05:41.323 user 0m0.043s 00:05:41.323 sys 0m0.029s 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.323 21:47:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 ************************************ 00:05:41.323 END TEST rpc_integrity 00:05:41.323 ************************************ 00:05:41.323 21:47:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.323 21:47:41 rpc -- common/autotest_common.sh@1097 -- 
# '[' 2 -le 1 ']' 00:05:41.323 21:47:41 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.323 21:47:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 ************************************ 00:05:41.323 START TEST rpc_plugins 00:05:41.323 ************************************ 00:05:41.323 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.583 { 00:05:41.583 "name": "Malloc1", 00:05:41.583 "aliases": [ 00:05:41.583 "9719f3c3-123b-11ef-8c90-4585f0cfab08" 00:05:41.583 ], 00:05:41.583 "product_name": "Malloc disk", 00:05:41.583 "block_size": 4096, 00:05:41.583 "num_blocks": 256, 00:05:41.583 "uuid": "9719f3c3-123b-11ef-8c90-4585f0cfab08", 00:05:41.583 "assigned_rate_limits": { 00:05:41.583 "rw_ios_per_sec": 0, 00:05:41.583 "rw_mbytes_per_sec": 0, 00:05:41.583 "r_mbytes_per_sec": 0, 00:05:41.583 "w_mbytes_per_sec": 0 00:05:41.583 }, 00:05:41.583 "claimed": false, 00:05:41.583 "zoned": false, 00:05:41.583 "supported_io_types": { 00:05:41.583 "read": true, 00:05:41.583 "write": true, 00:05:41.583 "unmap": true, 00:05:41.583 "write_zeroes": true, 00:05:41.583 "flush": true, 00:05:41.583 "reset": true, 00:05:41.583 "compare": false, 00:05:41.583 "compare_and_write": false, 00:05:41.583 "abort": true, 00:05:41.583 "nvme_admin": false, 00:05:41.583 "nvme_io": false 00:05:41.583 }, 00:05:41.583 "memory_domains": [ 00:05:41.583 { 00:05:41.583 "dma_device_id": "system", 00:05:41.583 "dma_device_type": 1 00:05:41.583 }, 00:05:41.583 { 00:05:41.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.583 "dma_device_type": 2 00:05:41.583 } 00:05:41.583 ], 00:05:41.583 "driver_specific": {} 00:05:41.583 } 00:05:41.583 ]' 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 
00:05:41.583 21:47:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.583 00:05:41.583 real 0m0.058s 00:05:41.583 user 0m0.021s 00:05:41.583 sys 0m0.009s 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.583 21:47:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.583 ************************************ 00:05:41.583 END TEST rpc_plugins 00:05:41.583 ************************************ 00:05:41.584 21:47:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.584 21:47:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.584 21:47:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.584 21:47:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 ************************************ 00:05:41.584 START TEST rpc_trace_cmd_test 00:05:41.584 ************************************ 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:41.584 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45859", 00:05:41.584 "tpoint_group_mask": "0x8", 00:05:41.584 "iscsi_conn": { 00:05:41.584 "mask": "0x2", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "scsi": { 00:05:41.584 "mask": "0x4", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "bdev": { 00:05:41.584 "mask": "0x8", 00:05:41.584 "tpoint_mask": "0xffffffffffffffff" 00:05:41.584 }, 00:05:41.584 "nvmf_rdma": { 00:05:41.584 "mask": "0x10", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "nvmf_tcp": { 00:05:41.584 "mask": "0x20", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "blobfs": { 00:05:41.584 "mask": "0x80", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "dsa": { 00:05:41.584 "mask": "0x200", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "thread": { 00:05:41.584 "mask": "0x400", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "nvme_pcie": { 00:05:41.584 "mask": "0x800", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "iaa": { 00:05:41.584 "mask": "0x1000", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "nvme_tcp": { 00:05:41.584 "mask": "0x2000", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "bdev_nvme": { 00:05:41.584 "mask": "0x4000", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 }, 00:05:41.584 "sock": { 00:05:41.584 "mask": "0x8000", 00:05:41.584 "tpoint_mask": "0x0" 00:05:41.584 } 00:05:41.584 }' 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.584 21:47:42 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:41.584 00:05:41.584 real 0m0.053s 00:05:41.584 user 0m0.027s 00:05:41.584 sys 0m0.030s 00:05:41.584 ************************************ 00:05:41.584 END TEST rpc_trace_cmd_test 00:05:41.584 ************************************ 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.584 21:47:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 21:47:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:41.584 21:47:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:41.584 21:47:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:41.584 21:47:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.584 21:47:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.584 21:47:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 ************************************ 00:05:41.584 START TEST rpc_daemon_integrity 00:05:41.584 ************************************ 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.584 { 00:05:41.584 "name": "Malloc2", 00:05:41.584 "aliases": [ 00:05:41.584 "973a4d3d-123b-11ef-8c90-4585f0cfab08" 00:05:41.584 ], 00:05:41.584 "product_name": "Malloc disk", 00:05:41.584 "block_size": 512, 00:05:41.584 "num_blocks": 16384, 00:05:41.584 "uuid": "973a4d3d-123b-11ef-8c90-4585f0cfab08", 00:05:41.584 "assigned_rate_limits": { 00:05:41.584 "rw_ios_per_sec": 0, 00:05:41.584 "rw_mbytes_per_sec": 0, 00:05:41.584 "r_mbytes_per_sec": 
0, 00:05:41.584 "w_mbytes_per_sec": 0 00:05:41.584 }, 00:05:41.584 "claimed": false, 00:05:41.584 "zoned": false, 00:05:41.584 "supported_io_types": { 00:05:41.584 "read": true, 00:05:41.584 "write": true, 00:05:41.584 "unmap": true, 00:05:41.584 "write_zeroes": true, 00:05:41.584 "flush": true, 00:05:41.584 "reset": true, 00:05:41.584 "compare": false, 00:05:41.584 "compare_and_write": false, 00:05:41.584 "abort": true, 00:05:41.584 "nvme_admin": false, 00:05:41.584 "nvme_io": false 00:05:41.584 }, 00:05:41.584 "memory_domains": [ 00:05:41.584 { 00:05:41.584 "dma_device_id": "system", 00:05:41.584 "dma_device_type": 1 00:05:41.584 }, 00:05:41.584 { 00:05:41.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.584 "dma_device_type": 2 00:05:41.584 } 00:05:41.584 ], 00:05:41.584 "driver_specific": {} 00:05:41.584 } 00:05:41.584 ]' 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.584 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 [2024-05-14 21:47:42.168553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:41.584 [2024-05-14 21:47:42.168609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.584 [2024-05-14 21:47:42.168638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d569780 00:05:41.584 [2024-05-14 21:47:42.168648] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.584 [2024-05-14 21:47:42.169018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.584 [2024-05-14 21:47:42.169049] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.843 Passthru0 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.843 { 00:05:41.843 "name": "Malloc2", 00:05:41.843 "aliases": [ 00:05:41.843 "973a4d3d-123b-11ef-8c90-4585f0cfab08" 00:05:41.843 ], 00:05:41.843 "product_name": "Malloc disk", 00:05:41.843 "block_size": 512, 00:05:41.843 "num_blocks": 16384, 00:05:41.843 "uuid": "973a4d3d-123b-11ef-8c90-4585f0cfab08", 00:05:41.843 "assigned_rate_limits": { 00:05:41.843 "rw_ios_per_sec": 0, 00:05:41.843 "rw_mbytes_per_sec": 0, 00:05:41.843 "r_mbytes_per_sec": 0, 00:05:41.843 "w_mbytes_per_sec": 0 00:05:41.843 }, 00:05:41.843 "claimed": true, 00:05:41.843 "claim_type": "exclusive_write", 00:05:41.843 "zoned": false, 00:05:41.843 "supported_io_types": { 00:05:41.843 "read": true, 00:05:41.843 "write": true, 00:05:41.843 "unmap": true, 00:05:41.843 "write_zeroes": true, 00:05:41.843 "flush": true, 00:05:41.843 "reset": true, 00:05:41.843 "compare": false, 00:05:41.843 "compare_and_write": false, 00:05:41.843 "abort": true, 
00:05:41.843 "nvme_admin": false, 00:05:41.843 "nvme_io": false 00:05:41.843 }, 00:05:41.843 "memory_domains": [ 00:05:41.843 { 00:05:41.843 "dma_device_id": "system", 00:05:41.843 "dma_device_type": 1 00:05:41.843 }, 00:05:41.843 { 00:05:41.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.843 "dma_device_type": 2 00:05:41.843 } 00:05:41.843 ], 00:05:41.843 "driver_specific": {} 00:05:41.843 }, 00:05:41.843 { 00:05:41.843 "name": "Passthru0", 00:05:41.843 "aliases": [ 00:05:41.843 "407f9b69-b112-f85c-a3b7-7336d421ca4c" 00:05:41.843 ], 00:05:41.843 "product_name": "passthru", 00:05:41.843 "block_size": 512, 00:05:41.843 "num_blocks": 16384, 00:05:41.843 "uuid": "407f9b69-b112-f85c-a3b7-7336d421ca4c", 00:05:41.843 "assigned_rate_limits": { 00:05:41.843 "rw_ios_per_sec": 0, 00:05:41.843 "rw_mbytes_per_sec": 0, 00:05:41.843 "r_mbytes_per_sec": 0, 00:05:41.843 "w_mbytes_per_sec": 0 00:05:41.843 }, 00:05:41.843 "claimed": false, 00:05:41.843 "zoned": false, 00:05:41.843 "supported_io_types": { 00:05:41.843 "read": true, 00:05:41.843 "write": true, 00:05:41.843 "unmap": true, 00:05:41.843 "write_zeroes": true, 00:05:41.843 "flush": true, 00:05:41.843 "reset": true, 00:05:41.843 "compare": false, 00:05:41.843 "compare_and_write": false, 00:05:41.843 "abort": true, 00:05:41.843 "nvme_admin": false, 00:05:41.843 "nvme_io": false 00:05:41.843 }, 00:05:41.843 "memory_domains": [ 00:05:41.843 { 00:05:41.843 "dma_device_id": "system", 00:05:41.843 "dma_device_type": 1 00:05:41.843 }, 00:05:41.843 { 00:05:41.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.843 "dma_device_type": 2 00:05:41.843 } 00:05:41.843 ], 00:05:41.843 "driver_specific": { 00:05:41.843 "passthru": { 00:05:41.843 "name": "Passthru0", 00:05:41.843 "base_bdev_name": "Malloc2" 00:05:41.843 } 00:05:41.843 } 00:05:41.843 } 00:05:41.843 ]' 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.843 00:05:41.843 real 0m0.129s 00:05:41.843 user 0m0.038s 00:05:41.843 sys 0m0.029s 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.843 ************************************ 00:05:41.843 END TEST rpc_daemon_integrity 00:05:41.843 21:47:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 ************************************ 00:05:41.843 21:47:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:41.843 21:47:42 rpc -- rpc/rpc.sh@84 -- # killprocess 45859 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@946 -- # '[' -z 45859 ']' 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@950 -- # kill -0 45859 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@951 -- # uname 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@954 -- # ps -c -o command 45859 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@954 -- # tail -1 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:05:41.843 killing process with pid 45859 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 45859' 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@965 -- # kill 45859 00:05:41.843 21:47:42 rpc -- common/autotest_common.sh@970 -- # wait 45859 00:05:42.103 00:05:42.103 real 0m2.016s 00:05:42.103 user 0m2.001s 00:05:42.103 sys 0m0.938s 00:05:42.104 21:47:42 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.104 21:47:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.104 ************************************ 00:05:42.104 END TEST rpc 00:05:42.104 ************************************ 00:05:42.104 21:47:42 -- spdk/autotest.sh@166 -- # run_test skip_rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.104 21:47:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.104 21:47:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.104 21:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:42.104 ************************************ 00:05:42.104 START TEST skip_rpc 00:05:42.104 ************************************ 00:05:42.104 21:47:42 skip_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.362 * Looking for test storage... 
00:05:42.362 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.362 21:47:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.362 21:47:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:42.362 21:47:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.362 21:47:42 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.362 21:47:42 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.362 21:47:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 ************************************ 00:05:42.362 START TEST skip_rpc 00:05:42.362 ************************************ 00:05:42.362 21:47:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:42.362 21:47:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=46035 00:05:42.362 21:47:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:42.362 21:47:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.362 21:47:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:42.362 [2024-05-14 21:47:42.763148] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:42.362 [2024-05-14 21:47:42.763383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:42.930 EAL: TSC is not safe to use in SMP mode 00:05:42.930 EAL: TSC is not invariant 00:05:42.930 [2024-05-14 21:47:43.347208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.930 [2024-05-14 21:47:43.450826] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:42.930 [2024-05-14 21:47:43.453485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 46035 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 46035 ']' 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 46035 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # tail -1 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 46035 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:05:48.201 killing process with pid 46035 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46035' 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 46035 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 46035 00:05:48.201 00:05:48.201 real 0m5.589s 00:05:48.201 user 0m4.995s 00:05:48.201 sys 0m0.612s 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.201 ************************************ 00:05:48.201 END TEST skip_rpc 00:05:48.201 ************************************ 00:05:48.201 21:47:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.201 21:47:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.201 21:47:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.201 21:47:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.201 21:47:48 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.201 ************************************ 00:05:48.201 START TEST skip_rpc_with_json 00:05:48.201 ************************************ 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=46080 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 46080 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 46080 ']' 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.201 21:47:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.201 [2024-05-14 21:47:48.403040] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:48.201 [2024-05-14 21:47:48.403347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:48.768 EAL: TSC is not safe to use in SMP mode 00:05:48.768 EAL: TSC is not invariant 00:05:48.768 [2024-05-14 21:47:49.122421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.768 [2024-05-14 21:47:49.224117] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:48.768 [2024-05-14 21:47:49.227063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.027 [2024-05-14 21:47:49.524715] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.027 request: 00:05:49.027 { 00:05:49.027 "trtype": "tcp", 00:05:49.027 "method": "nvmf_get_transports", 00:05:49.027 "req_id": 1 00:05:49.027 } 00:05:49.027 Got JSON-RPC error response 00:05:49.027 response: 00:05:49.027 { 00:05:49.027 "code": -19, 00:05:49.027 "message": "Operation not supported by device" 00:05:49.027 } 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.027 [2024-05-14 21:47:49.532732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.027 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.286 { 00:05:49.286 "subsystems": [ 00:05:49.286 { 00:05:49.286 "subsystem": "vmd", 00:05:49.286 "config": [] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "iobuf", 00:05:49.286 "config": [ 00:05:49.286 { 00:05:49.286 "method": "iobuf_set_options", 00:05:49.286 "params": { 00:05:49.286 "small_pool_count": 8192, 00:05:49.286 "large_pool_count": 1024, 00:05:49.286 "small_bufsize": 8192, 00:05:49.286 "large_bufsize": 135168 00:05:49.286 } 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "scheduler", 00:05:49.286 "config": [ 00:05:49.286 { 00:05:49.286 "method": "framework_set_scheduler", 00:05:49.286 "params": { 00:05:49.286 "name": "static" 00:05:49.286 } 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "sock", 00:05:49.286 "config": [ 00:05:49.286 { 00:05:49.286 "method": "sock_impl_set_options", 00:05:49.286 "params": { 00:05:49.286 "impl_name": "posix", 00:05:49.286 "recv_buf_size": 2097152, 00:05:49.286 "send_buf_size": 2097152, 00:05:49.286 "enable_recv_pipe": true, 00:05:49.286 "enable_quickack": false, 00:05:49.286 "enable_placement_id": 0, 00:05:49.286 "enable_zerocopy_send_server": true, 00:05:49.286 "enable_zerocopy_send_client": false, 00:05:49.286 "zerocopy_threshold": 0, 00:05:49.286 "tls_version": 0, 
00:05:49.286 "enable_ktls": false 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "sock_impl_set_options", 00:05:49.286 "params": { 00:05:49.286 "impl_name": "ssl", 00:05:49.286 "recv_buf_size": 4096, 00:05:49.286 "send_buf_size": 4096, 00:05:49.286 "enable_recv_pipe": true, 00:05:49.286 "enable_quickack": false, 00:05:49.286 "enable_placement_id": 0, 00:05:49.286 "enable_zerocopy_send_server": true, 00:05:49.286 "enable_zerocopy_send_client": false, 00:05:49.286 "zerocopy_threshold": 0, 00:05:49.286 "tls_version": 0, 00:05:49.286 "enable_ktls": false 00:05:49.286 } 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "keyring", 00:05:49.286 "config": [] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "accel", 00:05:49.286 "config": [ 00:05:49.286 { 00:05:49.286 "method": "accel_set_options", 00:05:49.286 "params": { 00:05:49.286 "small_cache_size": 128, 00:05:49.286 "large_cache_size": 16, 00:05:49.286 "task_count": 2048, 00:05:49.286 "sequence_count": 2048, 00:05:49.286 "buf_count": 2048 00:05:49.286 } 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "bdev", 00:05:49.286 "config": [ 00:05:49.286 { 00:05:49.286 "method": "bdev_set_options", 00:05:49.286 "params": { 00:05:49.286 "bdev_io_pool_size": 65535, 00:05:49.286 "bdev_io_cache_size": 256, 00:05:49.286 "bdev_auto_examine": true, 00:05:49.286 "iobuf_small_cache_size": 128, 00:05:49.286 "iobuf_large_cache_size": 16 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "bdev_raid_set_options", 00:05:49.286 "params": { 00:05:49.286 "process_window_size_kb": 1024 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "bdev_nvme_set_options", 00:05:49.286 "params": { 00:05:49.286 "action_on_timeout": "none", 00:05:49.286 "timeout_us": 0, 00:05:49.286 "timeout_admin_us": 0, 00:05:49.286 "keep_alive_timeout_ms": 10000, 00:05:49.286 "arbitration_burst": 0, 00:05:49.286 "low_priority_weight": 0, 00:05:49.286 "medium_priority_weight": 0, 00:05:49.286 "high_priority_weight": 0, 00:05:49.286 "nvme_adminq_poll_period_us": 10000, 00:05:49.286 "nvme_ioq_poll_period_us": 0, 00:05:49.286 "io_queue_requests": 0, 00:05:49.286 "delay_cmd_submit": true, 00:05:49.286 "transport_retry_count": 4, 00:05:49.286 "bdev_retry_count": 3, 00:05:49.286 "transport_ack_timeout": 0, 00:05:49.286 "ctrlr_loss_timeout_sec": 0, 00:05:49.286 "reconnect_delay_sec": 0, 00:05:49.286 "fast_io_fail_timeout_sec": 0, 00:05:49.286 "disable_auto_failback": false, 00:05:49.286 "generate_uuids": false, 00:05:49.286 "transport_tos": 0, 00:05:49.286 "nvme_error_stat": false, 00:05:49.286 "rdma_srq_size": 0, 00:05:49.286 "io_path_stat": false, 00:05:49.286 "allow_accel_sequence": false, 00:05:49.286 "rdma_max_cq_size": 0, 00:05:49.286 "rdma_cm_event_timeout_ms": 0, 00:05:49.286 "dhchap_digests": [ 00:05:49.286 "sha256", 00:05:49.286 "sha384", 00:05:49.286 "sha512" 00:05:49.286 ], 00:05:49.286 "dhchap_dhgroups": [ 00:05:49.286 "null", 00:05:49.286 "ffdhe2048", 00:05:49.286 "ffdhe3072", 00:05:49.286 "ffdhe4096", 00:05:49.286 "ffdhe6144", 00:05:49.286 "ffdhe8192" 00:05:49.286 ] 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "bdev_nvme_set_hotplug", 00:05:49.286 "params": { 00:05:49.286 "period_us": 100000, 00:05:49.286 "enable": false 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "bdev_wait_for_examine" 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "scsi", 00:05:49.286 
"config": null 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "nvmf", 00:05:49.286 "config": [ 00:05:49.286 { 00:05:49.286 "method": "nvmf_set_config", 00:05:49.286 "params": { 00:05:49.286 "discovery_filter": "match_any", 00:05:49.286 "admin_cmd_passthru": { 00:05:49.286 "identify_ctrlr": false 00:05:49.286 } 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "nvmf_set_max_subsystems", 00:05:49.286 "params": { 00:05:49.286 "max_subsystems": 1024 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "nvmf_set_crdt", 00:05:49.286 "params": { 00:05:49.286 "crdt1": 0, 00:05:49.286 "crdt2": 0, 00:05:49.286 "crdt3": 0 00:05:49.286 } 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "method": "nvmf_create_transport", 00:05:49.286 "params": { 00:05:49.286 "trtype": "TCP", 00:05:49.286 "max_queue_depth": 128, 00:05:49.286 "max_io_qpairs_per_ctrlr": 127, 00:05:49.286 "in_capsule_data_size": 4096, 00:05:49.286 "max_io_size": 131072, 00:05:49.286 "io_unit_size": 131072, 00:05:49.286 "max_aq_depth": 128, 00:05:49.286 "num_shared_buffers": 511, 00:05:49.286 "buf_cache_size": 4294967295, 00:05:49.286 "dif_insert_or_strip": false, 00:05:49.286 "zcopy": false, 00:05:49.286 "c2h_success": true, 00:05:49.286 "sock_priority": 0, 00:05:49.286 "abort_timeout_sec": 1, 00:05:49.286 "ack_timeout": 0, 00:05:49.286 "data_wr_pool_size": 0 00:05:49.286 } 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 }, 00:05:49.286 { 00:05:49.286 "subsystem": "iscsi", 00:05:49.286 "config": [ 00:05:49.286 { 00:05:49.286 "method": "iscsi_set_options", 00:05:49.286 "params": { 00:05:49.286 "node_base": "iqn.2016-06.io.spdk", 00:05:49.286 "max_sessions": 128, 00:05:49.286 "max_connections_per_session": 2, 00:05:49.286 "max_queue_depth": 64, 00:05:49.286 "default_time2wait": 2, 00:05:49.286 "default_time2retain": 20, 00:05:49.286 "first_burst_length": 8192, 00:05:49.286 "immediate_data": true, 00:05:49.286 "allow_duplicated_isid": false, 00:05:49.286 "error_recovery_level": 0, 00:05:49.286 "nop_timeout": 60, 00:05:49.286 "nop_in_interval": 30, 00:05:49.286 "disable_chap": false, 00:05:49.286 "require_chap": false, 00:05:49.286 "mutual_chap": false, 00:05:49.286 "chap_group": 0, 00:05:49.286 "max_large_datain_per_connection": 64, 00:05:49.286 "max_r2t_per_connection": 4, 00:05:49.286 "pdu_pool_size": 36864, 00:05:49.286 "immediate_data_pool_size": 16384, 00:05:49.286 "data_out_pool_size": 2048 00:05:49.286 } 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 } 00:05:49.286 ] 00:05:49.286 } 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 46080 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46080 ']' 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46080 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps -c -o command 46080 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # tail -1 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:05:49.286 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 
00:05:49.287 killing process with pid 46080 00:05:49.287 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46080' 00:05:49.287 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46080 00:05:49.287 21:47:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46080 00:05:49.545 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=46094 00:05:49.545 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:49.545 21:47:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 46094 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46094 ']' 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46094 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps -c -o command 46094 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # tail -1 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:05:54.812 killing process with pid 46094 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46094' 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46094 00:05:54.812 21:47:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46094 00:05:54.812 21:47:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.812 21:47:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.812 00:05:54.812 real 0m6.866s 00:05:54.812 user 0m6.043s 00:05:54.812 sys 0m1.479s 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 ************************************ 00:05:54.813 END TEST skip_rpc_with_json 00:05:54.813 ************************************ 00:05:54.813 21:47:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.813 21:47:55 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.813 21:47:55 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.813 21:47:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 ************************************ 00:05:54.813 START TEST skip_rpc_with_delay 00:05:54.813 ************************************ 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server 
-m 0x1 --wait-for-rpc 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.813 [2024-05-14 21:47:55.315145] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:54.813 [2024-05-14 21:47:55.315489] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.813 00:05:54.813 real 0m0.012s 00:05:54.813 user 0m0.001s 00:05:54.813 sys 0m0.010s 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.813 ************************************ 00:05:54.813 END TEST skip_rpc_with_delay 00:05:54.813 ************************************ 00:05:54.813 21:47:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 21:47:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:54.813 21:47:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:05:54.813 21:47:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.813 00:05:54.813 real 0m12.764s 00:05:54.813 user 0m11.181s 00:05:54.813 sys 0m2.297s 00:05:54.813 21:47:55 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.813 ************************************ 00:05:54.813 END TEST skip_rpc 00:05:54.813 ************************************ 00:05:54.813 21:47:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 21:47:55 -- spdk/autotest.sh@167 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:54.813 21:47:55 -- common/autotest_common.sh@1097 -- # '[' 2 
-le 1 ']' 00:05:54.813 21:47:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.813 21:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 ************************************ 00:05:54.813 START TEST rpc_client 00:05:54.813 ************************************ 00:05:54.813 21:47:55 rpc_client -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:55.072 * Looking for test storage... 00:05:55.072 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:55.072 21:47:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:55.072 OK 00:05:55.072 21:47:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:55.072 00:05:55.072 real 0m0.167s 00:05:55.072 user 0m0.112s 00:05:55.072 sys 0m0.130s 00:05:55.072 21:47:55 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.072 21:47:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:55.072 ************************************ 00:05:55.072 END TEST rpc_client 00:05:55.072 ************************************ 00:05:55.072 21:47:55 -- spdk/autotest.sh@168 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.072 21:47:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.072 21:47:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.072 21:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.072 ************************************ 00:05:55.072 START TEST json_config 00:05:55.072 ************************************ 00:05:55.072 21:47:55 json_config -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.330 21:47:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.330 21:47:55 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:05:55.330 21:47:55 json_config -- nvmf/common.sh@7 -- # return 0 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:55.330 21:47:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:55.331 21:47:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:55.331 21:47:55 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.331 INFO: JSON configuration test init 00:05:55.331 21:47:55 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:55.331 21:47:55 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:55.331 21:47:55 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.331 21:47:55 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.331 21:47:55 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:55.331 21:47:55 json_config -- json_config/common.sh@9 -- # local app=target 00:05:55.331 21:47:55 json_config -- json_config/common.sh@10 -- # shift 00:05:55.331 21:47:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.331 21:47:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.331 21:47:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.331 21:47:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.331 21:47:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.331 21:47:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46253 00:05:55.331 Waiting for target to run... 00:05:55.331 21:47:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.331 21:47:55 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:55.331 21:47:55 json_config -- json_config/common.sh@25 -- # waitforlisten 46253 /var/tmp/spdk_tgt.sock 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@827 -- # '[' -z 46253 ']' 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.331 21:47:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.331 [2024-05-14 21:47:55.779025] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
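The json_config target above is started with --wait-for-rpc, so subsystem initialization is deferred until an RPC completes it; the trace that follows shows gen_nvme.sh --json-with-subsystems being piped into rpc.py load_config to apply the NVMe bdev configuration. For reference, a hand-driven way to finish initialization after a --wait-for-rpc start is simply (sketch, assuming no pre-init tuning is needed):

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # pre-init RPCs (accel, sock, iobuf options, ...) would go here, then:
    $RPC framework_start_init   # subsystems initialize only after this call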
00:05:55.331 [2024-05-14 21:47:55.779308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:55.590 EAL: TSC is not safe to use in SMP mode 00:05:55.590 EAL: TSC is not invariant 00:05:55.590 [2024-05-14 21:47:56.060307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.590 [2024-05-14 21:47:56.151456] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:55.590 [2024-05-14 21:47:56.153706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.526 21:47:56 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.526 00:05:56.526 21:47:56 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:56.526 21:47:56 json_config -- json_config/common.sh@26 -- # echo '' 00:05:56.526 21:47:56 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:56.526 21:47:56 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:56.526 21:47:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.526 21:47:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.526 21:47:56 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:56.526 21:47:56 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:56.526 21:47:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.526 21:47:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.526 21:47:56 json_config -- json_config/json_config.sh@273 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:56.526 21:47:56 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:56.526 21:47:56 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:56.785 [2024-05-14 21:47:57.223242] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:05:56.785 21:47:57 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:56.785 21:47:57 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:56.785 21:47:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.785 21:47:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.785 21:47:57 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:56.785 21:47:57 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:56.785 21:47:57 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:56.785 21:47:57 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:56.785 21:47:57 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:56.785 21:47:57 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:57.044 21:47:57 json_config -- 
json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:57.044 21:47:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.044 21:47:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:05:57.044 21:47:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:57.044 21:47:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:05:57.044 21:47:57 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:57.044 21:47:57 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:57.611 21:47:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:05:57.611 21:47:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:57.611 21:47:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:57.611 21:47:57 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:05:57.611 21:47:57 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:05:57.611 21:47:57 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:05:57.611 21:47:57 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:05:57.611 Nvme0n1p0 Nvme0n1p1 00:05:57.611 21:47:58 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:05:57.611 21:47:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:05:57.869 [2024-05-14 21:47:58.400054] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:57.869 [2024-05-14 21:47:58.400112] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:57.869 00:05:57.869 21:47:58 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:05:57.869 21:47:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:05:58.128 Malloc3 00:05:58.128 
21:47:58 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:58.128 21:47:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:58.387 [2024-05-14 21:47:58.924083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:58.387 [2024-05-14 21:47:58.924147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.387 [2024-05-14 21:47:58.924176] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b9f00 00:05:58.387 [2024-05-14 21:47:58.924185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.387 [2024-05-14 21:47:58.924847] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.387 [2024-05-14 21:47:58.924876] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:58.387 PTBdevFromMalloc3 00:05:58.387 21:47:58 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:05:58.387 21:47:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:05:58.954 Null0 00:05:58.954 21:47:59 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:05:58.954 21:47:59 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:05:58.954 Malloc0 00:05:58.954 21:47:59 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:05:58.954 21:47:59 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:05:59.212 Malloc1 00:05:59.212 21:47:59 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:05:59.212 21:47:59 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:05:59.777 102400+0 records in 00:05:59.777 102400+0 records out 00:05:59.777 104857600 bytes transferred in 0.336869 secs (311271269 bytes/sec) 00:05:59.777 21:48:00 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:05:59.777 21:48:00 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:00.035 aio_disk 00:06:00.035 21:48:00 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:00.035 21:48:00 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:00.035 21:48:00 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:00.035 a23f4956-123b-11ef-8c90-4585f0cfab08 00:06:00.294 21:48:00 json_config -- 
json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:00.294 21:48:00 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:00.294 21:48:00 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:00.294 21:48:00 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:00.294 21:48:00 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:00.559 21:48:01 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:00.559 21:48:01 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:00.819 21:48:01 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:00.819 21:48:01 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:01.078 21:48:01 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:06:01.078 21:48:01 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a2621450-123b-11ef-8c90-4585f0cfab08 bdev_register:a28b95df-123b-11ef-8c90-4585f0cfab08 bdev_register:a2adc43a-123b-11ef-8c90-4585f0cfab08 bdev_register:a2d745dc-123b-11ef-8c90-4585f0cfab08 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@71 -- # sort 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a2621450-123b-11ef-8c90-4585f0cfab08 bdev_register:a28b95df-123b-11ef-8c90-4585f0cfab08 bdev_register:a2adc43a-123b-11ef-8c90-4585f0cfab08 bdev_register:a2d745dc-123b-11ef-8c90-4585f0cfab08 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@72 -- # sort 
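The four bdev_register UUIDs captured into expected_notifications above come from the logical-volume chain built on top of Nvme0n1p0. Stripped of the test plumbing, the sequence is the five RPC calls already visible in the trace, shown here back to back for readability (sizes are the values passed by the test):

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test   # lvstore on the first Nvme0n1 split
    $RPC bdev_lvol_create -l lvs_test lvol0 32                    # lvol0, size 32
    $RPC bdev_lvol_create -l lvs_test -t lvol1 32                 # lvol1, thin-provisioned
    $RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0              # read-only snapshot of lvol0
    $RPC bdev_lvol_clone lvs_test/snapshot0 clone0                # writable clone of the snapshot

Each call prints the name or UUID of the new bdev, which is why the test can splice the outputs directly into its expected bdev_register event list.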
00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:01.079 21:48:01 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:01.079 21:48:01 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.644 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:a2621450-123b-11ef-8c90-4585f0cfab08 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:a28b95df-123b-11ef-8c90-4585f0cfab08 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:a2adc43a-123b-11ef-8c90-4585f0cfab08 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:a2d745dc-123b-11ef-8c90-4585f0cfab08 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a2621450-123b-11ef-8c90-4585f0cfab08 bdev_register:a28b95df-123b-11ef-8c90-4585f0cfab08 bdev_register:a2adc43a-123b-11ef-8c90-4585f0cfab08 bdev_register:a2d745dc-123b-11ef-8c90-4585f0cfab08 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\2\6\2\1\4\5\0\-\1\2\3\b\-\1\1\e\f\-\8\c\9\0\-\4\5\8\5\f\0\c\f\a\b\0\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\2\8\b\9\5\d\f\-\1\2\3\b\-\1\1\e\f\-\8\c\9\0\-\4\5\8\5\f\0\c\f\a\b\0\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\2\a\d\c\4\3\a\-\1\2\3\b\-\1\1\e\f\-\8\c\9\0\-\4\5\8\5\f\0\c\f\a\b\0\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\2\d\7\4\5\d\c\-\1\2\3\b\-\1\1\e\f\-\8\c\9\0\-\4\5\8\5\f\0\c\f\a\b\0\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@86 -- # cat 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 
bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a2621450-123b-11ef-8c90-4585f0cfab08 bdev_register:a28b95df-123b-11ef-8c90-4585f0cfab08 bdev_register:a2adc43a-123b-11ef-8c90-4585f0cfab08 bdev_register:a2d745dc-123b-11ef-8c90-4585f0cfab08 bdev_register:aio_disk 00:06:01.645 Expected events matched: 00:06:01.645 bdev_register:Malloc0 00:06:01.645 bdev_register:Malloc0p0 00:06:01.645 bdev_register:Malloc0p1 00:06:01.645 bdev_register:Malloc0p2 00:06:01.645 bdev_register:Malloc1 00:06:01.645 bdev_register:Malloc3 00:06:01.645 bdev_register:Null0 00:06:01.645 bdev_register:Nvme0n1 00:06:01.645 bdev_register:Nvme0n1p0 00:06:01.645 bdev_register:Nvme0n1p1 00:06:01.645 bdev_register:PTBdevFromMalloc3 00:06:01.645 bdev_register:a2621450-123b-11ef-8c90-4585f0cfab08 00:06:01.645 bdev_register:a28b95df-123b-11ef-8c90-4585f0cfab08 00:06:01.645 bdev_register:a2adc43a-123b-11ef-8c90-4585f0cfab08 00:06:01.645 bdev_register:a2d745dc-123b-11ef-8c90-4585f0cfab08 00:06:01.645 bdev_register:aio_disk 00:06:01.645 21:48:01 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:06:01.645 21:48:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.645 21:48:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.645 21:48:02 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:01.645 21:48:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:01.645 21:48:02 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:01.645 21:48:02 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:01.645 21:48:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.645 21:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.645 21:48:02 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:01.645 21:48:02 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.645 21:48:02 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.903 MallocBdevForConfigChangeCheck 00:06:01.903 21:48:02 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:01.903 21:48:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.903 21:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.903 21:48:02 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:01.903 21:48:02 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.161 INFO: shutting down applications... 00:06:02.161 21:48:02 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
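MallocBdevForConfigChangeCheck, created just above, exists only so that deleting it later yields a detectable difference between two saved configurations. The shutdown that has just been announced first empties the live configuration and proves it is empty before interrupting the target; condensed from the commands traced below (the exact pipeline arrangement inside json_config.sh is assumed):

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method delete_global_parameters \
        | $SPDK/test/json_config/config_filter.py -method check_empty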
00:06:02.161 21:48:02 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:02.161 21:48:02 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:02.161 21:48:02 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:02.161 21:48:02 json_config -- json_config/json_config.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:02.420 [2024-05-14 21:48:02.896315] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:02.679 Calling clear_iscsi_subsystem 00:06:02.679 Calling clear_nvmf_subsystem 00:06:02.679 Calling clear_bdev_subsystem 00:06:02.679 21:48:03 json_config -- json_config/json_config.sh@337 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:02.679 21:48:03 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:02.679 21:48:03 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:02.679 21:48:03 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.679 21:48:03 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:02.679 21:48:03 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:02.937 21:48:03 json_config -- json_config/json_config.sh@345 -- # break 00:06:02.937 21:48:03 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:02.937 21:48:03 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:02.937 21:48:03 json_config -- json_config/common.sh@31 -- # local app=target 00:06:02.937 21:48:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.937 21:48:03 json_config -- json_config/common.sh@35 -- # [[ -n 46253 ]] 00:06:02.937 21:48:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 46253 00:06:02.937 21:48:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.937 21:48:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.937 21:48:03 json_config -- json_config/common.sh@41 -- # kill -0 46253 00:06:02.937 21:48:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.507 21:48:03 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.507 21:48:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.507 21:48:03 json_config -- json_config/common.sh@41 -- # kill -0 46253 00:06:03.507 21:48:03 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.507 21:48:03 json_config -- json_config/common.sh@43 -- # break 00:06:03.507 21:48:03 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.507 SPDK target shutdown done 00:06:03.507 21:48:03 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.507 INFO: relaunching applications... 00:06:03.507 21:48:03 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
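The relaunched target (pid 46441, traced below) boots from spdk_tgt_config.json, and json_diff.sh then verifies that a fresh save_config matches the very file the target booted from, normalizing both sides with config_filter.py -method sort before diffing. A condensed sketch of that comparison; the temporary file names here are placeholders, the script itself uses mktemp:

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/file.sorted.json
    diff -u /tmp/live.sorted.json /tmp/file.sorted.json && echo 'INFO: JSON config files are the same'

After this first check passes, MallocBdevForConfigChangeCheck is deleted and the same comparison is repeated; the diff is then expected to fail, which is what the later 'configuration change detected' message reports.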
00:06:03.507 21:48:03 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.507 21:48:03 json_config -- json_config/common.sh@9 -- # local app=target 00:06:03.507 21:48:03 json_config -- json_config/common.sh@10 -- # shift 00:06:03.507 21:48:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.507 21:48:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.507 21:48:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.507 21:48:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.507 21:48:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.507 21:48:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46441 00:06:03.507 Waiting for target to run... 00:06:03.507 21:48:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.507 21:48:03 json_config -- json_config/common.sh@25 -- # waitforlisten 46441 /var/tmp/spdk_tgt.sock 00:06:03.508 21:48:03 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.508 21:48:03 json_config -- common/autotest_common.sh@827 -- # '[' -z 46441 ']' 00:06:03.508 21:48:03 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.508 21:48:03 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.508 21:48:03 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.508 21:48:03 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.508 21:48:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.508 [2024-05-14 21:48:03.981508] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:03.508 [2024-05-14 21:48:03.981765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:03.766 EAL: TSC is not safe to use in SMP mode 00:06:03.766 EAL: TSC is not invariant 00:06:03.766 [2024-05-14 21:48:04.255280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.766 [2024-05-14 21:48:04.347716] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:03.766 [2024-05-14 21:48:04.350042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.025 [2024-05-14 21:48:04.485551] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:04.025 [2024-05-14 21:48:04.485606] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:04.025 [2024-05-14 21:48:04.493539] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:04.025 [2024-05-14 21:48:04.493571] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:04.025 [2024-05-14 21:48:04.501556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:04.025 [2024-05-14 21:48:04.501585] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:04.025 [2024-05-14 21:48:04.501593] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:04.025 [2024-05-14 21:48:04.509553] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:06:04.025 [2024-05-14 21:48:04.578806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:04.025 [2024-05-14 21:48:04.578845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.025 [2024-05-14 21:48:04.578872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ce2a500 00:06:04.025 [2024-05-14 21:48:04.578880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.025 [2024-05-14 21:48:04.578947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.025 [2024-05-14 21:48:04.578959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:04.592 21:48:05 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.592 21:48:05 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:04.592 00:06:04.592 21:48:05 json_config -- json_config/common.sh@26 -- # echo '' 00:06:04.592 21:48:05 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:04.592 INFO: Checking if target configuration is the same... 00:06:04.592 21:48:05 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:04.592 21:48:05 json_config -- json_config/json_config.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.Ncxi5z /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:04.592 + '[' 2 -ne 2 ']' 00:06:04.592 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:04.592 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:04.592 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:06:04.592 +++ basename /tmp//sh-np.Ncxi5z 00:06:04.592 ++ mktemp /tmp/sh-np.Ncxi5z.XXX 00:06:04.592 + tmp_file_1=/tmp/sh-np.Ncxi5z.XPG 00:06:04.592 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:04.592 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:04.592 + tmp_file_2=/tmp/spdk_tgt_config.json.m7s 00:06:04.592 + ret=0 00:06:04.592 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:04.592 21:48:05 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:04.592 21:48:05 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.158 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:05.158 + diff -u /tmp/sh-np.Ncxi5z.XPG /tmp/spdk_tgt_config.json.m7s 00:06:05.158 + echo 'INFO: JSON config files are the same' 00:06:05.158 INFO: JSON config files are the same 00:06:05.158 + rm /tmp/sh-np.Ncxi5z.XPG /tmp/spdk_tgt_config.json.m7s 00:06:05.158 + exit 0 00:06:05.158 21:48:05 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:05.158 INFO: changing configuration and checking if this can be detected... 00:06:05.158 21:48:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:05.158 21:48:05 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:05.158 21:48:05 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:05.417 21:48:05 json_config -- json_config/json_config.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.BtnV04 /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:05.417 + '[' 2 -ne 2 ']' 00:06:05.417 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:05.417 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:05.417 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:06:05.417 +++ basename /tmp//sh-np.BtnV04 00:06:05.417 ++ mktemp /tmp/sh-np.BtnV04.XXX 00:06:05.417 + tmp_file_1=/tmp/sh-np.BtnV04.eXR 00:06:05.417 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:05.417 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:05.417 + tmp_file_2=/tmp/spdk_tgt_config.json.6g5 00:06:05.417 + ret=0 00:06:05.417 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:05.417 21:48:05 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:05.417 21:48:05 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.985 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:05.985 + diff -u /tmp/sh-np.BtnV04.eXR /tmp/spdk_tgt_config.json.6g5 00:06:05.985 + ret=1 00:06:05.985 + echo '=== Start of file: /tmp/sh-np.BtnV04.eXR ===' 00:06:05.985 + cat /tmp/sh-np.BtnV04.eXR 00:06:05.985 + echo '=== End of file: /tmp/sh-np.BtnV04.eXR ===' 00:06:05.985 + echo '' 00:06:05.985 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6g5 ===' 00:06:05.985 + cat /tmp/spdk_tgt_config.json.6g5 00:06:05.985 + echo '=== End of file: /tmp/spdk_tgt_config.json.6g5 ===' 00:06:05.985 + echo '' 00:06:05.985 + rm /tmp/sh-np.BtnV04.eXR /tmp/spdk_tgt_config.json.6g5 00:06:05.985 + exit 1 00:06:05.985 INFO: configuration change detected. 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:05.985 21:48:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:05.985 21:48:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@317 -- # [[ -n 46441 ]] 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:05.985 21:48:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:05.985 21:48:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:06:05.985 21:48:06 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:05.985 21:48:06 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:06.244 21:48:06 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:06.244 21:48:06 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:06.502 21:48:06 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:06.502 21:48:06 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_lvol_delete lvs_test/snapshot0 00:06:06.761 21:48:07 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:06.761 21:48:07 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:07.020 21:48:07 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:07.020 21:48:07 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:06:07.020 21:48:07 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:07.020 21:48:07 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.020 21:48:07 json_config -- json_config/json_config.sh@323 -- # killprocess 46441 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@946 -- # '[' -z 46441 ']' 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@950 -- # kill -0 46441 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@951 -- # uname 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@954 -- # ps -c -o command 46441 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@954 -- # tail -1 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:06:07.020 killing process with pid 46441 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46441' 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@965 -- # kill 46441 00:06:07.020 21:48:07 json_config -- common/autotest_common.sh@970 -- # wait 46441 00:06:07.326 21:48:07 json_config -- json_config/json_config.sh@326 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:07.326 21:48:07 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:07.326 21:48:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.326 21:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.326 21:48:07 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:07.326 INFO: Success 00:06:07.326 21:48:07 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:07.326 00:06:07.326 real 0m12.181s 00:06:07.326 user 0m19.299s 00:06:07.326 sys 0m2.043s 00:06:07.326 21:48:07 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.326 21:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.326 ************************************ 00:06:07.326 END TEST json_config 00:06:07.326 ************************************ 00:06:07.326 21:48:07 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.326 21:48:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.326 21:48:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.326 21:48:07 -- common/autotest_common.sh@10 -- # set +x 00:06:07.326 ************************************ 00:06:07.326 START TEST json_config_extra_key 00:06:07.326 
************************************ 00:06:07.326 21:48:07 json_config_extra_key -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.585 21:48:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:07.585 21:48:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:06:07.585 21:48:07 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:07.585 INFO: launching applications... 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:07.585 21:48:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46572 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.585 Waiting for target to run... 
00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46572 /var/tmp/spdk_tgt.sock 00:06:07.585 21:48:07 json_config_extra_key -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.585 21:48:07 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 46572 ']' 00:06:07.585 21:48:07 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.586 21:48:07 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:07.586 21:48:07 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.586 21:48:07 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.586 21:48:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.586 [2024-05-14 21:48:08.000924] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:07.586 [2024-05-14 21:48:08.001199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:07.844 EAL: TSC is not safe to use in SMP mode 00:06:07.844 EAL: TSC is not invariant 00:06:07.844 [2024-05-14 21:48:08.301603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.844 [2024-05-14 21:48:08.403306] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:07.844 [2024-05-14 21:48:08.406376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.780 21:48:09 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.780 00:06:08.780 21:48:09 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.780 INFO: shutting down applications... 00:06:08.780 21:48:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
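The shutdown sequence traced in the entries that follow amounts to a signal plus a bounded poll; a minimal reading of it (pid and loop bound taken from this run's trace, the stderr redirect is illustrative):

    pid=46572
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only probes whether the process is still alive
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done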
00:06:08.780 21:48:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46572 ]] 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46572 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46572 00:06:08.780 21:48:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.038 21:48:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.039 21:48:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.039 21:48:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46572 00:06:09.039 21:48:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:09.039 21:48:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:09.039 21:48:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:09.039 SPDK target shutdown done 00:06:09.039 21:48:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:09.039 Success 00:06:09.039 21:48:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:09.039 00:06:09.039 real 0m1.784s 00:06:09.039 user 0m1.592s 00:06:09.039 sys 0m0.527s 00:06:09.039 21:48:09 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.039 21:48:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:09.039 ************************************ 00:06:09.039 END TEST json_config_extra_key 00:06:09.039 ************************************ 00:06:09.298 21:48:09 -- spdk/autotest.sh@170 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.298 21:48:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.298 21:48:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.298 21:48:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.298 ************************************ 00:06:09.298 START TEST alias_rpc 00:06:09.298 ************************************ 00:06:09.298 21:48:09 alias_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.298 * Looking for test storage... 
00:06:09.298 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:09.298 21:48:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.298 21:48:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46630 00:06:09.298 21:48:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46630 00:06:09.298 21:48:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.298 21:48:09 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 46630 ']' 00:06:09.298 21:48:09 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.298 21:48:09 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.298 21:48:09 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.298 21:48:09 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.298 21:48:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.298 [2024-05-14 21:48:09.845246] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:09.298 [2024-05-14 21:48:09.845451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:09.867 EAL: TSC is not safe to use in SMP mode 00:06:09.867 EAL: TSC is not invariant 00:06:09.867 [2024-05-14 21:48:10.390154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.125 [2024-05-14 21:48:10.492301] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:10.126 [2024-05-14 21:48:10.495095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.693 21:48:10 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.693 21:48:10 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:10.693 21:48:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:10.693 21:48:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46630 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 46630 ']' 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 46630 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 46630 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@954 -- # tail -1 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:06:10.693 killing process with pid 46630 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46630' 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@965 -- # kill 46630 00:06:10.693 21:48:11 alias_rpc -- common/autotest_common.sh@970 -- # wait 46630 00:06:11.262 00:06:11.262 real 0m1.885s 00:06:11.262 user 0m2.015s 00:06:11.262 sys 0m0.801s 00:06:11.262 21:48:11 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.262 21:48:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.262 ************************************ 00:06:11.262 END TEST alias_rpc 00:06:11.262 ************************************ 00:06:11.262 21:48:11 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:11.262 21:48:11 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:11.262 21:48:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.262 21:48:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.262 21:48:11 -- common/autotest_common.sh@10 -- # set +x 00:06:11.262 ************************************ 00:06:11.262 START TEST spdkcli_tcp 00:06:11.262 ************************************ 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:11.262 * Looking for test storage... 
00:06:11.262 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46695 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46695 00:06:11.262 21:48:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 46695 ']' 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.262 21:48:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.262 [2024-05-14 21:48:11.775422] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:11.262 [2024-05-14 21:48:11.775623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:11.827 EAL: TSC is not safe to use in SMP mode 00:06:11.827 EAL: TSC is not invariant 00:06:11.827 [2024-05-14 21:48:12.344724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.085 [2024-05-14 21:48:12.446499] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:12.085 [2024-05-14 21:48:12.446564] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
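The next entries drive the same JSON-RPC interface over TCP: socat bridges a local TCP port to the target's UNIX socket, and rpc.py connects to 127.0.0.1:9998 instead of the socket path. Condensed sketch (the socat command and rpc.py options are verbatim from the trace; backgrounding and cleanup are illustrative):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"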
00:06:12.085 [2024-05-14 21:48:12.449908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.085 [2024-05-14 21:48:12.449898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.343 21:48:12 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.343 21:48:12 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:12.343 21:48:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46703 00:06:12.343 21:48:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:12.343 21:48:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:12.602 [ 00:06:12.602 "spdk_get_version", 00:06:12.602 "rpc_get_methods", 00:06:12.602 "env_dpdk_get_mem_stats", 00:06:12.602 "trace_get_info", 00:06:12.602 "trace_get_tpoint_group_mask", 00:06:12.602 "trace_disable_tpoint_group", 00:06:12.602 "trace_enable_tpoint_group", 00:06:12.602 "trace_clear_tpoint_mask", 00:06:12.602 "trace_set_tpoint_mask", 00:06:12.602 "notify_get_notifications", 00:06:12.602 "notify_get_types", 00:06:12.602 "accel_get_stats", 00:06:12.602 "accel_set_options", 00:06:12.602 "accel_set_driver", 00:06:12.602 "accel_crypto_key_destroy", 00:06:12.602 "accel_crypto_keys_get", 00:06:12.602 "accel_crypto_key_create", 00:06:12.602 "accel_assign_opc", 00:06:12.602 "accel_get_module_info", 00:06:12.602 "accel_get_opc_assignments", 00:06:12.602 "bdev_get_histogram", 00:06:12.602 "bdev_enable_histogram", 00:06:12.602 "bdev_set_qos_limit", 00:06:12.602 "bdev_set_qd_sampling_period", 00:06:12.602 "bdev_get_bdevs", 00:06:12.602 "bdev_reset_iostat", 00:06:12.602 "bdev_get_iostat", 00:06:12.602 "bdev_examine", 00:06:12.602 "bdev_wait_for_examine", 00:06:12.602 "bdev_set_options", 00:06:12.602 "keyring_get_keys", 00:06:12.602 "framework_get_pci_devices", 00:06:12.602 "framework_get_config", 00:06:12.602 "framework_get_subsystems", 00:06:12.602 "sock_get_default_impl", 00:06:12.602 "sock_set_default_impl", 00:06:12.602 "sock_impl_set_options", 00:06:12.602 "sock_impl_get_options", 00:06:12.602 "thread_set_cpumask", 00:06:12.602 "framework_get_scheduler", 00:06:12.602 "framework_set_scheduler", 00:06:12.602 "framework_get_reactors", 00:06:12.602 "thread_get_io_channels", 00:06:12.602 "thread_get_pollers", 00:06:12.602 "thread_get_stats", 00:06:12.602 "framework_monitor_context_switch", 00:06:12.602 "spdk_kill_instance", 00:06:12.602 "log_enable_timestamps", 00:06:12.602 "log_get_flags", 00:06:12.602 "log_clear_flag", 00:06:12.602 "log_set_flag", 00:06:12.602 "log_get_level", 00:06:12.602 "log_set_level", 00:06:12.602 "log_get_print_level", 00:06:12.602 "log_set_print_level", 00:06:12.602 "framework_enable_cpumask_locks", 00:06:12.602 "framework_disable_cpumask_locks", 00:06:12.602 "framework_wait_init", 00:06:12.602 "framework_start_init", 00:06:12.602 "iobuf_get_stats", 00:06:12.602 "iobuf_set_options", 00:06:12.602 "vmd_rescan", 00:06:12.602 "vmd_remove_device", 00:06:12.602 "vmd_enable", 00:06:12.602 "nvmf_subsystem_get_listeners", 00:06:12.602 "nvmf_subsystem_get_qpairs", 00:06:12.602 "nvmf_subsystem_get_controllers", 00:06:12.602 "nvmf_get_stats", 00:06:12.602 "nvmf_get_transports", 00:06:12.602 "nvmf_create_transport", 00:06:12.602 "nvmf_get_targets", 00:06:12.602 "nvmf_delete_target", 00:06:12.602 "nvmf_create_target", 00:06:12.602 "nvmf_subsystem_allow_any_host", 00:06:12.602 "nvmf_subsystem_remove_host", 00:06:12.602 "nvmf_subsystem_add_host", 00:06:12.602 
"nvmf_ns_remove_host", 00:06:12.602 "nvmf_ns_add_host", 00:06:12.602 "nvmf_subsystem_remove_ns", 00:06:12.602 "nvmf_subsystem_add_ns", 00:06:12.602 "nvmf_subsystem_listener_set_ana_state", 00:06:12.602 "nvmf_discovery_get_referrals", 00:06:12.602 "nvmf_discovery_remove_referral", 00:06:12.602 "nvmf_discovery_add_referral", 00:06:12.602 "nvmf_subsystem_remove_listener", 00:06:12.602 "nvmf_subsystem_add_listener", 00:06:12.602 "nvmf_delete_subsystem", 00:06:12.602 "nvmf_create_subsystem", 00:06:12.602 "nvmf_get_subsystems", 00:06:12.602 "nvmf_set_crdt", 00:06:12.602 "nvmf_set_config", 00:06:12.602 "nvmf_set_max_subsystems", 00:06:12.602 "scsi_get_devices", 00:06:12.602 "iscsi_get_histogram", 00:06:12.602 "iscsi_enable_histogram", 00:06:12.602 "iscsi_set_options", 00:06:12.602 "iscsi_get_auth_groups", 00:06:12.602 "iscsi_auth_group_remove_secret", 00:06:12.602 "iscsi_auth_group_add_secret", 00:06:12.602 "iscsi_delete_auth_group", 00:06:12.602 "iscsi_create_auth_group", 00:06:12.602 "iscsi_set_discovery_auth", 00:06:12.602 "iscsi_get_options", 00:06:12.602 "iscsi_target_node_request_logout", 00:06:12.602 "iscsi_target_node_set_redirect", 00:06:12.602 "iscsi_target_node_set_auth", 00:06:12.602 "iscsi_target_node_add_lun", 00:06:12.602 "iscsi_get_stats", 00:06:12.602 "iscsi_get_connections", 00:06:12.602 "iscsi_portal_group_set_auth", 00:06:12.602 "iscsi_start_portal_group", 00:06:12.602 "iscsi_delete_portal_group", 00:06:12.602 "iscsi_create_portal_group", 00:06:12.602 "iscsi_get_portal_groups", 00:06:12.602 "iscsi_delete_target_node", 00:06:12.602 "iscsi_target_node_remove_pg_ig_maps", 00:06:12.602 "iscsi_target_node_add_pg_ig_maps", 00:06:12.602 "iscsi_create_target_node", 00:06:12.602 "iscsi_get_target_nodes", 00:06:12.602 "iscsi_delete_initiator_group", 00:06:12.602 "iscsi_initiator_group_remove_initiators", 00:06:12.602 "iscsi_initiator_group_add_initiators", 00:06:12.602 "iscsi_create_initiator_group", 00:06:12.602 "iscsi_get_initiator_groups", 00:06:12.602 "keyring_file_remove_key", 00:06:12.602 "keyring_file_add_key", 00:06:12.602 "iaa_scan_accel_module", 00:06:12.602 "dsa_scan_accel_module", 00:06:12.602 "ioat_scan_accel_module", 00:06:12.602 "accel_error_inject_error", 00:06:12.602 "bdev_aio_delete", 00:06:12.603 "bdev_aio_rescan", 00:06:12.603 "bdev_aio_create", 00:06:12.603 "blobfs_create", 00:06:12.603 "blobfs_detect", 00:06:12.603 "blobfs_set_cache_size", 00:06:12.603 "bdev_zone_block_delete", 00:06:12.603 "bdev_zone_block_create", 00:06:12.603 "bdev_delay_delete", 00:06:12.603 "bdev_delay_create", 00:06:12.603 "bdev_delay_update_latency", 00:06:12.603 "bdev_split_delete", 00:06:12.603 "bdev_split_create", 00:06:12.603 "bdev_error_inject_error", 00:06:12.603 "bdev_error_delete", 00:06:12.603 "bdev_error_create", 00:06:12.603 "bdev_raid_set_options", 00:06:12.603 "bdev_raid_remove_base_bdev", 00:06:12.603 "bdev_raid_add_base_bdev", 00:06:12.603 "bdev_raid_delete", 00:06:12.603 "bdev_raid_create", 00:06:12.603 "bdev_raid_get_bdevs", 00:06:12.603 "bdev_lvol_check_shallow_copy", 00:06:12.603 "bdev_lvol_start_shallow_copy", 00:06:12.603 "bdev_lvol_grow_lvstore", 00:06:12.603 "bdev_lvol_get_lvols", 00:06:12.603 "bdev_lvol_get_lvstores", 00:06:12.603 "bdev_lvol_delete", 00:06:12.603 "bdev_lvol_set_read_only", 00:06:12.603 "bdev_lvol_resize", 00:06:12.603 "bdev_lvol_decouple_parent", 00:06:12.603 "bdev_lvol_inflate", 00:06:12.603 "bdev_lvol_rename", 00:06:12.603 "bdev_lvol_clone_bdev", 00:06:12.603 "bdev_lvol_clone", 00:06:12.603 "bdev_lvol_snapshot", 00:06:12.603 "bdev_lvol_create", 
00:06:12.603 "bdev_lvol_delete_lvstore", 00:06:12.603 "bdev_lvol_rename_lvstore", 00:06:12.603 "bdev_lvol_create_lvstore", 00:06:12.603 "bdev_passthru_delete", 00:06:12.603 "bdev_passthru_create", 00:06:12.603 "bdev_nvme_send_cmd", 00:06:12.603 "bdev_nvme_get_path_iostat", 00:06:12.603 "bdev_nvme_get_mdns_discovery_info", 00:06:12.603 "bdev_nvme_stop_mdns_discovery", 00:06:12.603 "bdev_nvme_start_mdns_discovery", 00:06:12.603 "bdev_nvme_set_multipath_policy", 00:06:12.603 "bdev_nvme_set_preferred_path", 00:06:12.603 "bdev_nvme_get_io_paths", 00:06:12.603 "bdev_nvme_remove_error_injection", 00:06:12.603 "bdev_nvme_add_error_injection", 00:06:12.603 "bdev_nvme_get_discovery_info", 00:06:12.603 "bdev_nvme_stop_discovery", 00:06:12.603 "bdev_nvme_start_discovery", 00:06:12.603 "bdev_nvme_get_controller_health_info", 00:06:12.603 "bdev_nvme_disable_controller", 00:06:12.603 "bdev_nvme_enable_controller", 00:06:12.603 "bdev_nvme_reset_controller", 00:06:12.603 "bdev_nvme_get_transport_statistics", 00:06:12.603 "bdev_nvme_apply_firmware", 00:06:12.603 "bdev_nvme_detach_controller", 00:06:12.603 "bdev_nvme_get_controllers", 00:06:12.603 "bdev_nvme_attach_controller", 00:06:12.603 "bdev_nvme_set_hotplug", 00:06:12.603 "bdev_nvme_set_options", 00:06:12.603 "bdev_null_resize", 00:06:12.603 "bdev_null_delete", 00:06:12.603 "bdev_null_create", 00:06:12.603 "bdev_malloc_delete", 00:06:12.603 "bdev_malloc_create" 00:06:12.603 ] 00:06:12.603 21:48:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:12.603 21:48:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.603 21:48:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.861 21:48:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:12.861 21:48:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46695 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 46695 ']' 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 46695 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps -c -o command 46695 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # tail -1 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:06:12.861 killing process with pid 46695 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46695' 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 46695 00:06:12.861 21:48:13 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 46695 00:06:13.120 00:06:13.120 real 0m1.874s 00:06:13.120 user 0m2.926s 00:06:13.120 sys 0m0.843s 00:06:13.120 ************************************ 00:06:13.120 END TEST spdkcli_tcp 00:06:13.120 ************************************ 00:06:13.120 21:48:13 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.120 21:48:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.120 21:48:13 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.120 21:48:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.120 21:48:13 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.120 21:48:13 -- common/autotest_common.sh@10 -- # set +x 00:06:13.120 ************************************ 00:06:13.120 START TEST dpdk_mem_utility 00:06:13.120 ************************************ 00:06:13.120 21:48:13 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.120 * Looking for test storage... 00:06:13.120 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:13.120 21:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:13.120 21:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46774 00:06:13.120 21:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46774 00:06:13.120 21:48:13 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 46774 ']' 00:06:13.120 21:48:13 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.120 21:48:13 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.120 21:48:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.120 21:48:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.120 21:48:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.120 21:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.120 [2024-05-14 21:48:13.680338] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:13.120 [2024-05-14 21:48:13.680554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:13.688 EAL: TSC is not safe to use in SMP mode 00:06:13.688 EAL: TSC is not invariant 00:06:13.946 [2024-05-14 21:48:14.278471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.946 [2024-05-14 21:48:14.377947] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
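The entries that follow exercise the memory-stats path: an RPC asks the target to write a DPDK memory dump, then dpdk_mem_info.py summarizes it. A condensed sketch using the paths from this run (the target's default RPC socket is assumed):

    rpcdir=/usr/home/vagrant/spdk_repo/spdk/scripts
    $rpcdir/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}, as traced below
    $rpcdir/dpdk_mem_info.py                 # heap/mempool/memzone totals
    $rpcdir/dpdk_mem_info.py -m 0            # detailed element and memzone listing, as dumped below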
00:06:13.946 [2024-05-14 21:48:14.380771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.523 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.523 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:14.523 21:48:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.523 21:48:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.523 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.523 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.523 { 00:06:14.523 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.523 } 00:06:14.523 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.523 21:48:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:14.523 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:06:14.523 1 heaps totaling size 2048.000000 MiB 00:06:14.523 size: 2048.000000 MiB heap id: 0 00:06:14.523 end heaps---------- 00:06:14.523 8 mempools totaling size 592.563660 MiB 00:06:14.523 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:06:14.523 size: 153.489014 MiB name: PDU_data_out_Pool 00:06:14.523 size: 84.500549 MiB name: bdev_io_46774 00:06:14.523 size: 51.008362 MiB name: evtpool_46774 00:06:14.523 size: 50.000549 MiB name: msgpool_46774 00:06:14.523 size: 21.758911 MiB name: PDU_Pool 00:06:14.523 size: 19.508911 MiB name: SCSI_TASK_Pool 00:06:14.523 size: 0.026123 MiB name: Session_Pool 00:06:14.523 end mempools------- 00:06:14.523 6 memzones totaling size 4.142822 MiB 00:06:14.523 size: 1.000366 MiB name: RG_ring_0_46774 00:06:14.523 size: 1.000366 MiB name: RG_ring_1_46774 00:06:14.523 size: 1.000366 MiB name: RG_ring_4_46774 00:06:14.523 size: 1.000366 MiB name: RG_ring_5_46774 00:06:14.523 size: 0.125366 MiB name: RG_ring_2_46774 00:06:14.523 size: 0.015991 MiB name: RG_ring_3_46774 00:06:14.523 end memzones------- 00:06:14.523 21:48:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.523 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:06:14.523 list of free elements. size: 1254.071899 MiB 00:06:14.523 element at address: 0x1060000000 with size: 1254.001099 MiB 00:06:14.523 element at address: 0x10c8000000 with size: 0.070129 MiB 00:06:14.523 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:06:14.523 list of standard malloc elements. 
size: 197.217957 MiB 00:06:14.523 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:06:14.523 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:06:14.523 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:06:14.523 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:06:14.523 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:06:14.523 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:06:14.523 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:06:14.523 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:06:14.523 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:06:14.523 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:06:14.523 list of memzone associated elements. 
size: 596.710144 MiB 00:06:14.523 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:06:14.523 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.523 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:06:14.523 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:06:14.523 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:06:14.523 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46774_0 00:06:14.523 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:06:14.523 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46774_0 00:06:14.523 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:06:14.523 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46774_0 00:06:14.523 element at address: 0x10c683d780 with size: 20.250671 MiB 00:06:14.523 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:06:14.523 element at address: 0x10ae700680 with size: 18.000671 MiB 00:06:14.523 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.523 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:06:14.523 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46774 00:06:14.523 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:06:14.523 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46774 00:06:14.524 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:06:14.524 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46774 00:06:14.524 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:06:14.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.524 element at address: 0x10c673b640 with size: 1.008118 MiB 00:06:14.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.524 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:06:14.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.524 element at address: 0x10af980b40 with size: 1.008118 MiB 00:06:14.524 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.524 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:06:14.524 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46774 00:06:14.524 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:06:14.524 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46774 00:06:14.524 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:06:14.524 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46774 00:06:14.524 element at address: 0x10ae600480 with size: 1.000488 MiB 00:06:14.524 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46774 00:06:14.524 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:06:14.524 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46774 00:06:14.524 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:06:14.524 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.524 element at address: 0x10af900940 with size: 0.500488 MiB 00:06:14.524 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.524 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:06:14.524 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.524 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:06:14.524 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46774 00:06:14.524 
element at address: 0x10c8018a80 with size: 0.031738 MiB 00:06:14.524 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.524 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:06:14.524 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.524 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:06:14.524 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46774 00:06:14.524 element at address: 0x10c8018080 with size: 0.002441 MiB 00:06:14.524 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.524 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:06:14.524 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46774 00:06:14.524 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:06:14.524 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46774 00:06:14.524 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:06:14.524 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.524 21:48:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.524 21:48:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46774 00:06:14.524 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 46774 ']' 00:06:14.524 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 46774 00:06:14.524 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:14.524 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:06:14.524 21:48:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps -c -o command 46774 00:06:14.524 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # tail -1 00:06:14.524 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:06:14.524 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:06:14.524 killing process with pid 46774 00:06:14.524 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46774' 00:06:14.524 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 46774 00:06:14.524 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 46774 00:06:14.790 00:06:14.790 real 0m1.753s 00:06:14.790 user 0m1.771s 00:06:14.790 sys 0m0.863s 00:06:14.790 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.790 21:48:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.790 ************************************ 00:06:14.790 END TEST dpdk_mem_utility 00:06:14.790 ************************************ 00:06:14.790 21:48:15 -- spdk/autotest.sh@177 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:14.790 21:48:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.790 21:48:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.790 21:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:14.790 ************************************ 00:06:14.790 START TEST event 00:06:14.790 ************************************ 00:06:14.790 21:48:15 event -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:15.050 * Looking for test storage... 
00:06:15.050 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:06:15.050 21:48:15 event -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:15.050 21:48:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:15.050 21:48:15 event -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:15.050 21:48:15 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:15.050 21:48:15 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.050 21:48:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.050 ************************************ 00:06:15.050 START TEST event_perf 00:06:15.050 ************************************ 00:06:15.050 21:48:15 event.event_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:15.050 Running I/O for 1 seconds...[2024-05-14 21:48:15.499771] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:15.050 [2024-05-14 21:48:15.500023] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:15.617 EAL: TSC is not safe to use in SMP mode 00:06:15.617 EAL: TSC is not invariant 00:06:15.617 [2024-05-14 21:48:16.069820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.617 [2024-05-14 21:48:16.173219] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:15.617 [2024-05-14 21:48:16.173297] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:15.617 [2024-05-14 21:48:16.173309] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:06:15.617 [2024-05-14 21:48:16.173318] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:06:15.617 [2024-05-14 21:48:16.178383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.617 [2024-05-14 21:48:16.178618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.617 [2024-05-14 21:48:16.178493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.617 Running I/O for 1 seconds...[2024-05-14 21:48:16.178610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.993 00:06:16.993 lcore 0: 2286769 00:06:16.993 lcore 1: 2286770 00:06:16.993 lcore 2: 2286771 00:06:16.993 lcore 3: 2286769 00:06:16.993 done. 
00:06:16.993 00:06:16.993 real 0m1.815s 00:06:16.993 user 0m4.213s 00:06:16.993 sys 0m0.597s 00:06:16.993 ************************************ 00:06:16.993 END TEST event_perf 00:06:16.993 ************************************ 00:06:16.993 21:48:17 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.993 21:48:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.993 21:48:17 event -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:16.993 21:48:17 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:16.993 21:48:17 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.993 21:48:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.993 ************************************ 00:06:16.993 START TEST event_reactor 00:06:16.993 ************************************ 00:06:16.993 21:48:17 event.event_reactor -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:16.993 [2024-05-14 21:48:17.355935] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:16.993 [2024-05-14 21:48:17.356144] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:17.560 EAL: TSC is not safe to use in SMP mode 00:06:17.560 EAL: TSC is not invariant 00:06:17.560 [2024-05-14 21:48:17.930236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.560 [2024-05-14 21:48:18.022811] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:17.560 [2024-05-14 21:48:18.025103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.934 test_start 00:06:18.934 oneshot 00:06:18.934 tick 100 00:06:18.934 tick 100 00:06:18.934 tick 250 00:06:18.934 tick 100 00:06:18.934 tick 100 00:06:18.934 tick 100 00:06:18.934 tick 250 00:06:18.934 tick 500 00:06:18.934 tick 100 00:06:18.934 tick 100 00:06:18.934 tick 250 00:06:18.934 tick 100 00:06:18.934 tick 100 00:06:18.934 test_end 00:06:18.934 00:06:18.934 real 0m1.802s 00:06:18.934 user 0m1.194s 00:06:18.934 sys 0m0.606s 00:06:18.934 21:48:19 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.934 21:48:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:18.934 ************************************ 00:06:18.934 END TEST event_reactor 00:06:18.934 ************************************ 00:06:18.935 21:48:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.935 21:48:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:18.935 21:48:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.935 21:48:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.935 ************************************ 00:06:18.935 START TEST event_reactor_perf 00:06:18.935 ************************************ 00:06:18.935 21:48:19 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.935 [2024-05-14 21:48:19.202158] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:06:18.935 [2024-05-14 21:48:19.202433] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:19.193 EAL: TSC is not safe to use in SMP mode 00:06:19.193 EAL: TSC is not invariant 00:06:19.193 [2024-05-14 21:48:19.755947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.451 [2024-05-14 21:48:19.841852] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:19.452 [2024-05-14 21:48:19.844143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.388 test_start 00:06:20.389 test_end 00:06:20.389 Performance: 3256161 events per second 00:06:20.389 00:06:20.389 real 0m1.763s 00:06:20.389 user 0m1.177s 00:06:20.389 sys 0m0.582s 00:06:20.389 21:48:20 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.389 21:48:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.389 ************************************ 00:06:20.389 END TEST event_reactor_perf 00:06:20.389 ************************************ 00:06:20.647 21:48:20 event -- event/event.sh@49 -- # uname -s 00:06:20.647 21:48:20 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:06:20.647 00:06:20.647 real 0m5.677s 00:06:20.647 user 0m6.729s 00:06:20.647 sys 0m1.986s 00:06:20.647 21:48:20 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.647 21:48:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.647 ************************************ 00:06:20.647 END TEST event 00:06:20.647 ************************************ 00:06:20.647 21:48:21 -- spdk/autotest.sh@178 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.647 21:48:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.647 21:48:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.647 21:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:20.647 ************************************ 00:06:20.647 START TEST thread 00:06:20.647 ************************************ 00:06:20.647 21:48:21 thread -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.647 * Looking for test storage... 00:06:20.647 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:06:20.647 21:48:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.647 21:48:21 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:20.647 21:48:21 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.647 21:48:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.648 ************************************ 00:06:20.648 START TEST thread_poller_perf 00:06:20.648 ************************************ 00:06:20.648 21:48:21 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.648 [2024-05-14 21:48:21.210846] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:06:20.648 [2024-05-14 21:48:21.211090] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:21.276 EAL: TSC is not safe to use in SMP mode 00:06:21.276 EAL: TSC is not invariant 00:06:21.276 [2024-05-14 21:48:21.761477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.277 [2024-05-14 21:48:21.854197] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:21.277 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:21.277 [2024-05-14 21:48:21.856517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.653 ====================================== 00:06:22.653 busy:2201676629 (cyc) 00:06:22.653 total_run_count: 5174000 00:06:22.653 tsc_hz: 2200008650 (cyc) 00:06:22.653 ====================================== 00:06:22.653 poller_cost: 425 (cyc), 193 (nsec) 00:06:22.653 00:06:22.653 real 0m1.774s 00:06:22.653 user 0m1.190s 00:06:22.653 sys 0m0.583s 00:06:22.653 21:48:22 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.653 ************************************ 00:06:22.653 END TEST thread_poller_perf 00:06:22.653 ************************************ 00:06:22.653 21:48:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.653 21:48:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.653 21:48:23 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:22.653 21:48:23 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.653 21:48:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.653 ************************************ 00:06:22.653 START TEST thread_poller_perf 00:06:22.653 ************************************ 00:06:22.653 21:48:23 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.653 [2024-05-14 21:48:23.025654] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:22.653 [2024-05-14 21:48:23.025892] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:23.220 EAL: TSC is not safe to use in SMP mode 00:06:23.220 EAL: TSC is not invariant 00:06:23.220 [2024-05-14 21:48:23.550990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.220 [2024-05-14 21:48:23.638840] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:23.220 [2024-05-14 21:48:23.641110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.220 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:24.597 ====================================== 00:06:24.597 busy:2201000312 (cyc) 00:06:24.597 total_run_count: 71045000 00:06:24.597 tsc_hz: 2200008650 (cyc) 00:06:24.597 ====================================== 00:06:24.597 poller_cost: 30 (cyc), 13 (nsec) 00:06:24.597 00:06:24.597 real 0m1.742s 00:06:24.597 user 0m1.166s 00:06:24.597 sys 0m0.572s 00:06:24.597 21:48:24 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.597 21:48:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.597 ************************************ 00:06:24.597 END TEST thread_poller_perf 00:06:24.597 ************************************ 00:06:24.597 21:48:24 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:24.597 21:48:24 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:06:24.597 21:48:24 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.597 21:48:24 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.597 21:48:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.597 ************************************ 00:06:24.597 START TEST thread_spdk_lock 00:06:24.597 ************************************ 00:06:24.598 21:48:24 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:06:24.598 [2024-05-14 21:48:24.807116] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:24.598 [2024-05-14 21:48:24.807491] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:24.856 EAL: TSC is not safe to use in SMP mode 00:06:24.856 EAL: TSC is not invariant 00:06:24.856 [2024-05-14 21:48:25.345925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.856 [2024-05-14 21:48:25.440980] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:24.856 [2024-05-14 21:48:25.441048] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
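The two poller_perf summaries above are internally consistent: poller_cost looks like busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A quick check with shell integer arithmetic (truncation matches the logged values):

    echo $(( 2201676629 / 5174000 ))           # 425 cycles per poller call (1 us period run)
    echo $(( 425 * 1000000000 / 2200008650 ))  # 193 ns per call
    echo $(( 2201000312 / 71045000 ))          # 30 cycles per poller call (0 us period run)
    echo $(( 30 * 1000000000 / 2200008650 ))   # 13 ns per call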
00:06:24.856 [2024-05-14 21:48:25.444097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.856 [2024-05-14 21:48:25.444088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.422 [2024-05-14 21:48:25.887334] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:25.422 [2024-05-14 21:48:25.887395] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:25.422 [2024-05-14 21:48:25.887404] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x311ce0 00:06:25.422 [2024-05-14 21:48:25.887873] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:25.422 [2024-05-14 21:48:25.887973] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:25.422 [2024-05-14 21:48:25.887983] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:25.422 Starting test contend 00:06:25.422 Worker Delay Wait us Hold us Total us 00:06:25.422 0 3 264354 164707 429061 00:06:25.422 1 5 163401 266789 430191 00:06:25.422 PASS test contend 00:06:25.422 Starting test hold_by_poller 00:06:25.422 PASS test hold_by_poller 00:06:25.422 Starting test hold_by_message 00:06:25.422 PASS test hold_by_message 00:06:25.422 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:06:25.422 100014 assertions passed 00:06:25.422 0 assertions failed 00:06:25.422 00:06:25.422 real 0m1.208s 00:06:25.422 user 0m1.081s 00:06:25.422 sys 0m0.568s 00:06:25.422 21:48:26 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.422 21:48:26 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 ************************************ 00:06:25.422 END TEST thread_spdk_lock 00:06:25.422 ************************************ 00:06:25.681 00:06:25.681 real 0m4.998s 00:06:25.681 user 0m3.551s 00:06:25.681 sys 0m1.934s 00:06:25.681 ************************************ 00:06:25.681 END TEST thread 00:06:25.681 ************************************ 00:06:25.681 21:48:26 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.681 21:48:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.681 21:48:26 -- spdk/autotest.sh@179 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:25.681 21:48:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.681 21:48:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.681 21:48:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.681 ************************************ 00:06:25.681 START TEST accel 00:06:25.681 ************************************ 00:06:25.681 21:48:26 accel -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:25.681 * Looking for test storage... 
00:06:25.681 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:06:25.682 21:48:26 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:25.682 21:48:26 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:25.682 21:48:26 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.682 21:48:26 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=47078 00:06:25.682 21:48:26 accel -- accel/accel.sh@63 -- # waitforlisten 47078 00:06:25.682 21:48:26 accel -- common/autotest_common.sh@827 -- # '[' -z 47078 ']' 00:06:25.682 21:48:26 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.682 21:48:26 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.682 21:48:26 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.682 21:48:26 accel -- accel/accel.sh@61 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.TAUjqS 00:06:25.682 21:48:26 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.682 21:48:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.682 [2024-05-14 21:48:26.256313] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:25.682 [2024-05-14 21:48:26.256626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:26.269 EAL: TSC is not safe to use in SMP mode 00:06:26.269 EAL: TSC is not invariant 00:06:26.269 [2024-05-14 21:48:26.803781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.527 [2024-05-14 21:48:26.892952] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:26.527 21:48:26 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:26.527 21:48:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.527 21:48:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.527 21:48:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.527 21:48:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.527 21:48:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.527 21:48:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:26.527 21:48:26 accel -- accel/accel.sh@41 -- # jq -r . 00:06:26.527 [2024-05-14 21:48:26.902929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.785 21:48:27 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.785 21:48:27 accel -- common/autotest_common.sh@860 -- # return 0 00:06:26.785 21:48:27 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:26.785 21:48:27 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:26.785 21:48:27 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:26.785 21:48:27 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:26.785 21:48:27 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:26.785 21:48:27 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:26.785 21:48:27 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:26.785 21:48:27 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.785 21:48:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.785 21:48:27 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.785 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.785 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.785 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.786 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.786 
21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.786 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.786 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.786 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.786 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.786 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.786 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.786 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.786 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.786 21:48:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.786 21:48:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.786 21:48:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.786 21:48:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.786 21:48:27 accel -- accel/accel.sh@75 -- # killprocess 47078 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@946 -- # '[' -z 47078 ']' 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@950 -- # kill -0 47078 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@951 -- # uname 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@954 -- # tail -1 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@954 -- # ps -c -o command 47078 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:06:26.786 killing process with pid 47078 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47078' 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@965 -- # kill 47078 00:06:26.786 21:48:27 accel -- common/autotest_common.sh@970 -- # wait 47078 00:06:27.044 21:48:27 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:27.044 21:48:27 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:27.044 21:48:27 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:27.044 21:48:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.044 21:48:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.044 21:48:27 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:27.044 21:48:27 accel.accel_help -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.KxMSBt -h 00:06:27.044 21:48:27 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.044 21:48:27 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:27.302 21:48:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:27.302 21:48:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:27.302 21:48:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.302 21:48:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.303 ************************************ 00:06:27.303 START TEST accel_missing_filename 00:06:27.303 ************************************ 00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 
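The long expected_opcs[...]=software run above is the harness dumping the opcode-to-module table of the spdk_tgt it just started; with an empty accel JSON config and no hardware modules loaded, every opcode is assigned to the software module. The dump is simply the accel_get_opc_assignments RPC piped through jq, roughly as below (rpc.py path illustrative):

  # list opcode -> module assignments from a running SPDK target
  scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'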
00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.303 21:48:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:27.303 21:48:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.5RHSF5 -t 1 -w compress 00:06:27.303 [2024-05-14 21:48:27.650750] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:27.303 [2024-05-14 21:48:27.650980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:27.868 EAL: TSC is not safe to use in SMP mode 00:06:27.868 EAL: TSC is not invariant 00:06:27.868 [2024-05-14 21:48:28.206304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.868 [2024-05-14 21:48:28.296053] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:27.868 21:48:28 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:27.868 [2024-05-14 21:48:28.306719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.868 [2024-05-14 21:48:28.309123] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.868 [2024-05-14 21:48:28.344931] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:28.127 A filename is required. 
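accel_missing_filename deliberately omits -l, so accel_perf stops during startup with "A filename is required."; as the option help printed further down notes, compress/decompress workloads need the name of an uncompressed input file. A working invocation would look roughly like the one the next test uses (binary path relative to the SPDK tree):

  # compress workload reading the repo's test input file
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib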
00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.127 00:06:28.127 real 0m0.826s 00:06:28.127 user 0m0.237s 00:06:28.127 sys 0m0.590s 00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.127 21:48:28 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:28.127 ************************************ 00:06:28.127 END TEST accel_missing_filename 00:06:28.127 ************************************ 00:06:28.127 21:48:28 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.127 21:48:28 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:28.127 21:48:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.127 21:48:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.127 ************************************ 00:06:28.127 START TEST accel_compress_verify 00:06:28.127 ************************************ 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.127 21:48:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.127 21:48:28 accel.accel_compress_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.mS33G6 -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.127 [2024-05-14 21:48:28.528334] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:28.127 [2024-05-14 21:48:28.528585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:28.696 EAL: TSC is not safe to use in SMP mode 00:06:28.697 EAL: TSC is not invariant 00:06:28.697 [2024-05-14 21:48:29.054707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.697 [2024-05-14 21:48:29.142242] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:28.697 21:48:29 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:28.697 [2024-05-14 21:48:29.149930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.697 [2024-05-14 21:48:29.152321] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.697 [2024-05-14 21:48:29.187410] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:28.956 00:06:28.956 Compression does not support the verify option, aborting. 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.956 00:06:28.956 real 0m0.794s 00:06:28.956 user 0m0.217s 00:06:28.956 sys 0m0.576s 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.956 21:48:29 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:28.956 ************************************ 00:06:28.956 END TEST accel_compress_verify 00:06:28.956 ************************************ 00:06:28.956 21:48:29 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.956 ************************************ 00:06:28.956 START TEST accel_wrong_workload 00:06:28.956 ************************************ 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:28.956 
21:48:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.8oeZcU -t 1 -w foobar 00:06:28.956 Unsupported workload type: foobar 00:06:28.956 [2024-05-14 21:48:29.363127] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:28.956 accel_perf options: 00:06:28.956 [-h help message] 00:06:28.956 [-q queue depth per core] 00:06:28.956 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:28.956 [-T number of threads per core 00:06:28.956 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:28.956 [-t time in seconds] 00:06:28.956 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:28.956 [ dif_verify, , dif_generate, dif_generate_copy 00:06:28.956 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:28.956 [-l for compress/decompress workloads, name of uncompressed input file 00:06:28.956 [-S for crc32c workload, use this seed value (default 0) 00:06:28.956 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:28.956 [-f for fill workload, use this BYTE value (default 255) 00:06:28.956 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:28.956 [-y verify result if this switch is on] 00:06:28.956 [-a tasks to allocate per core (default: same value as -q)] 00:06:28.956 Can be used to spread operations across a wider range of memory. 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.956 00:06:28.956 real 0m0.009s 00:06:28.956 user 0m0.002s 00:06:28.956 sys 0m0.008s 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.956 21:48:29 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:28.956 ************************************ 00:06:28.956 END TEST accel_wrong_workload 00:06:28.956 ************************************ 00:06:28.956 21:48:29 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.956 ************************************ 00:06:28.956 START TEST accel_negative_buffers 00:06:28.956 ************************************ 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:28.956 21:48:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.U6cWAS -t 1 -w xor -y -x -1 00:06:28.956 -x option must be non-negative. 00:06:28.956 [2024-05-14 21:48:29.415176] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:28.956 accel_perf options: 00:06:28.956 [-h help message] 00:06:28.956 [-q queue depth per core] 00:06:28.956 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:28.956 [-T number of threads per core 00:06:28.956 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:28.956 [-t time in seconds] 00:06:28.956 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:28.956 [ dif_verify, , dif_generate, dif_generate_copy 00:06:28.956 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:28.956 [-l for compress/decompress workloads, name of uncompressed input file 00:06:28.956 [-S for crc32c workload, use this seed value (default 0) 00:06:28.956 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:28.956 [-f for fill workload, use this BYTE value (default 255) 00:06:28.956 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:28.956 [-y verify result if this switch is on] 00:06:28.956 [-a tasks to allocate per core (default: same value as -q)] 00:06:28.956 Can be used to spread operations across a wider range of memory. 
00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.956 00:06:28.956 real 0m0.010s 00:06:28.956 user 0m0.013s 00:06:28.956 sys 0m0.000s 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.956 ************************************ 00:06:28.956 21:48:29 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:28.956 END TEST accel_negative_buffers 00:06:28.956 ************************************ 00:06:28.956 21:48:29 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.956 21:48:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.956 ************************************ 00:06:28.956 START TEST accel_crc32c 00:06:28.956 ************************************ 00:06:28.956 21:48:29 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:28.956 21:48:29 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:28.956 21:48:29 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:28.956 21:48:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.956 21:48:29 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:28.956 21:48:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.956 21:48:29 accel.accel_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.n9QMYd -t 1 -w crc32c -S 32 -y 00:06:28.956 [2024-05-14 21:48:29.464854] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:28.956 [2024-05-14 21:48:29.465016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:29.525 EAL: TSC is not safe to use in SMP mode 00:06:29.525 EAL: TSC is not invariant 00:06:29.525 [2024-05-14 21:48:30.003425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.525 [2024-05-14 21:48:30.092042] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:06:29.525 [2024-05-14 21:48:30.102549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.525 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.526 21:48:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.526 21:48:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.526 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.526 21:48:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.907 21:48:31 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:30.907 21:48:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.907 00:06:30.907 real 0m1.805s 00:06:30.907 user 0m1.238s 00:06:30.907 sys 0m0.574s 00:06:30.907 ************************************ 00:06:30.907 END TEST accel_crc32c 00:06:30.907 ************************************ 00:06:30.907 21:48:31 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.907 21:48:31 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:30.907 21:48:31 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:30.907 21:48:31 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:30.907 21:48:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.907 21:48:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.907 ************************************ 00:06:30.907 START TEST accel_crc32c_C2 00:06:30.907 ************************************ 00:06:30.907 21:48:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:30.907 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.907 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:30.907 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.907 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.907 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:30.907 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.kNG7Lx -t 1 -w crc32c -y -C 2 00:06:30.908 [2024-05-14 21:48:31.312430] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:30.908 [2024-05-14 21:48:31.312694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:31.475 EAL: TSC is not safe to use in SMP mode 00:06:31.475 EAL: TSC is not invariant 00:06:31.475 [2024-05-14 21:48:31.854590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.475 [2024-05-14 21:48:31.952277] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
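Stripped of the harness noise, the accel_crc32c run that just passed (real 0m1.805s) boils down to a single accel_perf invocation: -w crc32c with -S 32 as the CRC seed and -y to verify each result, over the default 4096-byte buffers for 1 second; the long val= trace above is the wrapper replaying that configuration and recording that the software module served the crc32c opcode. The equivalent direct command, and the -C 2 variant that accel_crc32c_C2 launches next, are roughly:

  # crc32c with seed 32, verification on
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # same workload over a 2-element io vector (what accel_crc32c_C2 exercises)
  ./build/examples/accel_perf -t 1 -w crc32c -y -C 2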
00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:31.475 [2024-05-14 21:48:31.960877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.475 
21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.475 21:48:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- 
# read -r var val 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.849 00:06:32.849 real 0m1.822s 00:06:32.849 user 0m1.247s 00:06:32.849 sys 0m0.582s 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.849 21:48:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:32.849 ************************************ 00:06:32.849 END TEST accel_crc32c_C2 00:06:32.849 ************************************ 00:06:32.849 21:48:33 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:32.849 21:48:33 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:32.849 21:48:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.849 21:48:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.849 ************************************ 00:06:32.849 START TEST accel_copy 00:06:32.849 ************************************ 00:06:32.849 21:48:33 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:32.849 21:48:33 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:32.849 21:48:33 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:32.849 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 21:48:33 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:32.849 21:48:33 accel.accel_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wViPRg -t 1 -w copy -y 00:06:32.849 [2024-05-14 21:48:33.182806] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
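accel_copy, starting here, is the simplest case in the set: -w copy -y copies the default 4096-byte buffer through the accel framework and checks the output, again landing on the software module. Direct form (illustrative):

  # plain copy workload with verification
  ./build/examples/accel_perf -t 1 -w copy -y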
00:06:32.849 [2024-05-14 21:48:33.183100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:33.413 EAL: TSC is not safe to use in SMP mode 00:06:33.413 EAL: TSC is not invariant 00:06:33.413 [2024-05-14 21:48:33.737846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.413 [2024-05-14 21:48:33.822942] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.413 [2024-05-14 21:48:33.835807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.413 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.414 21:48:33 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.414 21:48:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 
-- # IFS=: 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:34.843 21:48:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.843 00:06:34.843 real 0m1.823s 00:06:34.843 user 0m1.240s 00:06:34.843 sys 0m0.593s 00:06:34.843 21:48:34 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.843 21:48:34 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.843 ************************************ 00:06:34.843 END TEST accel_copy 00:06:34.843 ************************************ 00:06:34.843 21:48:35 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.843 21:48:35 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:34.843 21:48:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.843 21:48:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.843 ************************************ 00:06:34.843 START TEST accel_fill 00:06:34.843 ************************************ 00:06:34.843 21:48:35 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.843 21:48:35 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:34.843 21:48:35 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:34.843 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.843 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.843 21:48:35 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.843 21:48:35 accel.accel_fill -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xDoL1r -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.843 [2024-05-14 21:48:35.051119] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:34.843 [2024-05-14 21:48:35.051305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:35.101 EAL: TSC is not safe to use in SMP mode 00:06:35.101 EAL: TSC is not invariant 00:06:35.101 [2024-05-14 21:48:35.571425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.101 [2024-05-14 21:48:35.656615] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
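The accel_fill case that begins above is launched through the accel_test wrapper, which essentially forwards its arguments to the accel_perf example binary; the full command line is visible in the xtrace. A minimal manual reproduction against the same build would be roughly the following sketch (the harness's -c /tmp/sh-np.* argument only names a throwaway config file and is left out here):

    # Re-run the fill workload with the same arguments the trace shows being used.
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y

The EAL notices about the TSC and /proc/stat just above are printed at the start of every case in this FreeBSD run and do not fail the tests.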
00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:35.101 [2024-05-14 21:48:35.668142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.101 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.102 21:48:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.475 21:48:36 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:36.475 21:48:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.475 00:06:36.475 real 0m1.788s 00:06:36.475 user 0m1.223s 00:06:36.475 sys 0m0.572s 00:06:36.475 21:48:36 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.475 21:48:36 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:36.475 ************************************ 00:06:36.475 END TEST accel_fill 00:06:36.475 ************************************ 00:06:36.475 21:48:36 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:36.475 21:48:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:36.475 21:48:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.475 21:48:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.475 ************************************ 00:06:36.475 START TEST accel_copy_crc32c 00:06:36.475 ************************************ 00:06:36.475 21:48:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:36.475 21:48:36 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:36.475 21:48:36 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:36.475 21:48:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.475 21:48:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.475 21:48:36 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:36.475 21:48:36 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.kRJ3dC -t 1 -w copy_crc32c -y 00:06:36.475 [2024-05-14 21:48:36.879459] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:36.475 [2024-05-14 21:48:36.879691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:37.040 EAL: TSC is not safe to use in SMP mode 00:06:37.040 EAL: TSC is not invariant 00:06:37.040 [2024-05-14 21:48:37.419151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.040 [2024-05-14 21:48:37.509326] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
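The long runs of case "$var" in / IFS=: / read -r var val entries that follow are the harness in accel/accel.sh parsing accel_perf's configuration dump one "key: value" line at a time and remembering the fields (accel_opc, accel_module) that are asserted on at the end of each case. A stripped-down approximation of that loop, illustrative only and not the verbatim accel.sh source, with assumed key spellings and a placeholder input file:

    # Sketch: scan "Key: value" pairs from captured accel_perf output and keep the two
    # fields the test checks afterwards. The key spellings here are assumptions.
    while IFS=: read -r var val; do
        case "$var" in
            *workload*) accel_opc=$(echo $val) ;;     # e.g. "copy_crc32c"
            *module*)   accel_module=$(echo $val) ;;  # e.g. "software"
        esac
    done < perf_output.txt                            # placeholder for the captured output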
00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:37.040 [2024-05-14 21:48:37.520171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.040 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.041 21:48:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.413 21:48:38 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.413 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.414 00:06:38.414 real 0m1.814s 00:06:38.414 user 0m1.248s 00:06:38.414 sys 0m0.579s 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.414 21:48:38 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:38.414 ************************************ 00:06:38.414 END TEST accel_copy_crc32c 00:06:38.414 ************************************ 00:06:38.414 21:48:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:38.414 21:48:38 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:38.414 21:48:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.414 21:48:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.414 ************************************ 00:06:38.414 START TEST accel_copy_crc32c_C2 00:06:38.414 ************************************ 00:06:38.414 21:48:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:38.414 21:48:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.414 21:48:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:38.414 21:48:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.414 21:48:38 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:06:38.414 21:48:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:38.414 21:48:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.goishC -t 1 -w copy_crc32c -y -C 2 00:06:38.414 [2024-05-14 21:48:38.732615] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:38.414 [2024-05-14 21:48:38.732862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:38.979 EAL: TSC is not safe to use in SMP mode 00:06:38.979 EAL: TSC is not invariant 00:06:38.979 [2024-05-14 21:48:39.281769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.979 [2024-05-14 21:48:39.384031] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:38.979 [2024-05-14 21:48:39.396452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val=copy_crc32c 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 21:48:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.352 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.353 00:06:40.353 real 0m1.840s 00:06:40.353 user 0m1.239s 00:06:40.353 sys 0m0.608s 00:06:40.353 
21:48:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.353 21:48:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:40.353 ************************************ 00:06:40.353 END TEST accel_copy_crc32c_C2 00:06:40.353 ************************************ 00:06:40.353 21:48:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:40.353 21:48:40 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:40.353 21:48:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.353 21:48:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.353 ************************************ 00:06:40.353 START TEST accel_dualcast 00:06:40.353 ************************************ 00:06:40.353 21:48:40 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:40.353 21:48:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:40.353 21:48:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:40.353 21:48:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.353 21:48:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.353 21:48:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:40.353 21:48:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.P3Kbnh -t 1 -w dualcast -y 00:06:40.353 [2024-05-14 21:48:40.624036] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:40.353 [2024-05-14 21:48:40.624341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:40.921 EAL: TSC is not safe to use in SMP mode 00:06:40.921 EAL: TSC is not invariant 00:06:40.921 [2024-05-14 21:48:41.220074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.921 [2024-05-14 21:48:41.311527] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 
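The starred START TEST / END TEST banners and the real/user/sys triplets that frame every case come from the shared run_test helper in common/autotest_common.sh (the trace references it around lines 1097-1122 of that file). A rough, simplified sketch of the pattern it implements, not a verbatim copy of the helper:

    # Simplified: opening banner, timed execution of the wrapped command, closing banner.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # source of the real/user/sys lines seen in this log
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

In this section it is invoked as run_test accel_dualcast accel_test -t 1 -w dualcast -y, exactly as the trace line above shows.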
00:06:40.921 [2024-05-14 21:48:41.320497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.921 21:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 
00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:42.296 21:48:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.296 00:06:42.296 real 0m1.884s 00:06:42.296 user 0m1.242s 00:06:42.296 sys 0m0.653s 00:06:42.296 21:48:42 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.296 21:48:42 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:42.296 ************************************ 00:06:42.296 END TEST accel_dualcast 00:06:42.296 ************************************ 00:06:42.296 21:48:42 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:42.296 21:48:42 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:42.296 21:48:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.296 21:48:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.296 ************************************ 00:06:42.296 START TEST accel_compare 00:06:42.296 ************************************ 00:06:42.296 21:48:42 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:42.296 21:48:42 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:42.296 21:48:42 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:42.296 21:48:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.296 21:48:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.296 21:48:42 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:42.296 21:48:42 accel.accel_compare -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZNzEIp -t 1 -w compare -y 00:06:42.296 [2024-05-14 21:48:42.551700] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:42.296 [2024-05-14 21:48:42.551951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:42.555 EAL: TSC is not safe to use in SMP mode 00:06:42.555 EAL: TSC is not invariant 00:06:42.555 [2024-05-14 21:48:43.121393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.814 [2024-05-14 21:48:43.209696] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 
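Every case above closes with the same three checks before its END banner: a module name was parsed, an opcode was parsed, and the module is the software engine. The \s\o\f\t\w\a\r\e form in the trace is simply how xtrace prints the right-hand side of == inside [[ ]], where it is treated as a pattern and each character is escaped. Written out plainly, the assertion block amounts to:

    [[ -n "$accel_module" ]]              # some module name was captured
    [[ -n "$accel_opc" ]]                 # some opcode (dualcast, compare, ...) was captured
    [[ "$accel_module" == "software" ]]   # the software engine handled the workload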
00:06:42.814 [2024-05-14 21:48:43.218046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.814 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:42.815 21:48:43 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.815 21:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.191 21:48:44 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:44.191 21:48:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.191 00:06:44.191 real 0m1.849s 00:06:44.191 user 0m1.242s 00:06:44.191 sys 0m0.613s 00:06:44.191 21:48:44 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.191 21:48:44 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:44.191 ************************************ 00:06:44.191 END TEST accel_compare 00:06:44.191 ************************************ 00:06:44.191 21:48:44 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:44.191 21:48:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:44.191 21:48:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.191 21:48:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.191 ************************************ 00:06:44.191 START TEST accel_xor 00:06:44.191 ************************************ 00:06:44.191 21:48:44 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:44.191 21:48:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:44.191 21:48:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:44.191 21:48:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.191 21:48:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.191 21:48:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:44.191 21:48:44 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.VqNUSC -t 1 -w xor -y 00:06:44.191 [2024-05-14 21:48:44.446944] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:44.191 [2024-05-14 21:48:44.447166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:44.450 EAL: TSC is not safe to use in SMP mode 00:06:44.450 EAL: TSC is not invariant 00:06:44.450 [2024-05-14 21:48:44.999521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.708 [2024-05-14 21:48:45.091397] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
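The accel_xor case starting here follows the same shape as the earlier ones; the only value in its configuration dump not seen before is val=2, which appears to be the number of xor source buffers (an inference from the workload type; the trace itself does not label it). To re-run only this workload against the same build, using the command line visible in the trace (again minus the throwaway -c config file):

    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y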
00:06:44.708 [2024-05-14 21:48:45.099627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.708 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.709 21:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.082 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.082 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.082 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.082 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.083 00:06:46.083 real 0m1.826s 00:06:46.083 user 0m1.231s 00:06:46.083 sys 0m0.603s 00:06:46.083 21:48:46 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.083 21:48:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:46.083 ************************************ 00:06:46.083 END TEST accel_xor 00:06:46.083 ************************************ 00:06:46.083 21:48:46 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:46.083 21:48:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.083 21:48:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.083 21:48:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.083 ************************************ 00:06:46.083 START TEST accel_xor 00:06:46.083 ************************************ 00:06:46.083 21:48:46 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:46.083 21:48:46 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.lMURKh -t 1 -w xor -y -x 3 00:06:46.083 [2024-05-14 21:48:46.319835] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:46.083 [2024-05-14 21:48:46.320068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:46.341 EAL: TSC is not safe to use in SMP mode 00:06:46.341 EAL: TSC is not invariant 00:06:46.341 [2024-05-14 21:48:46.880038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.599 [2024-05-14 21:48:46.979030] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
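This second accel_xor run adds -x 3, and the per-operation source-buffer count recorded in the trace changes accordingly (val=2 in the previous run, val=3 below). A one-line sketch of the variant invocation, under the same assumptions as the sketch above:
  # Hedged sketch: same xor workload, but with three source buffers (-x 3).
  ./build/examples/accel_perf -t 1 -w xor -y -x 3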
00:06:46.599 [2024-05-14 21:48:46.990482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.599 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.600 21:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.600 21:48:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.600 21:48:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.600 21:48:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.578 21:48:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:47.579 21:48:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.579 00:06:47.579 real 0m1.837s 00:06:47.579 user 0m1.252s 00:06:47.579 sys 0m0.597s 00:06:47.579 21:48:48 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.579 21:48:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:47.579 ************************************ 00:06:47.579 END TEST accel_xor 00:06:47.579 ************************************ 00:06:47.837 21:48:48 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:47.837 21:48:48 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:47.837 21:48:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.837 21:48:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.837 ************************************ 00:06:47.837 START TEST accel_dif_verify 00:06:47.837 ************************************ 00:06:47.837 21:48:48 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:47.837 21:48:48 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:47.837 21:48:48 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:47.837 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.837 21:48:48 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:47.837 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.837 21:48:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.4j89Tb -t 1 -w dif_verify 00:06:47.837 [2024-05-14 21:48:48.206117] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:47.837 [2024-05-14 21:48:48.206294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:48.405 EAL: TSC is not safe to use in SMP mode 00:06:48.405 EAL: TSC is not invariant 00:06:48.405 [2024-05-14 21:48:48.748417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.405 [2024-05-14 21:48:48.838858] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 
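The dif_verify case drives the same wrapper; the sizes recorded in the trace below (two 4096-byte buffers, 512 bytes, and 8 bytes) are the values the harness hands to the DIF-verify workload. A hedged, hand-run sketch of that invocation (binary path from the log, everything else assumed):
  # Hedged sketch: run the DIF-verify workload for one second; the example
  # program then reports which module (software here) executed the opcode.
  ./build/examples/accel_perf -t 1 -w dif_verify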
00:06:48.405 [2024-05-14 21:48:48.847466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:48.405 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.406 21:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.781 21:48:50 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:49.781 21:48:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.781 00:06:49.781 real 0m1.815s 00:06:49.781 user 0m1.225s 00:06:49.781 sys 0m0.604s 00:06:49.781 21:48:50 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.781 ************************************ 00:06:49.781 END TEST accel_dif_verify 00:06:49.781 ************************************ 00:06:49.781 21:48:50 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.781 21:48:50 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:49.781 21:48:50 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:49.781 21:48:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.781 21:48:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.781 ************************************ 00:06:49.781 START TEST accel_dif_generate 00:06:49.781 ************************************ 00:06:49.781 21:48:50 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:49.781 21:48:50 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:49.781 21:48:50 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:49.781 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.781 21:48:50 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:49.781 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.781 21:48:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.LVPgvv -t 1 -w dif_generate 00:06:49.781 [2024-05-14 21:48:50.072116] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
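Each of these blocks follows the same shape: run_test prints the START banner, times the wrapped accel_test call, and emits the real/user/sys summary before the END banner. A rough sketch of that wrapper pattern (the function name, banner text, and internals here are assumptions; the real helper lives in common/autotest_common.sh and accel_test is defined by accel.sh):
  # Hedged sketch of the wrapper pattern, not the actual SPDK helper.
  run_test_sketch() {
      local name=$1; shift
      echo "*** START TEST $name ***"
      time "$@"            # produces the real/user/sys lines seen in this log
      echo "*** END TEST $name ***"
  }
  run_test_sketch accel_dif_generate accel_test -t 1 -w dif_generate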
00:06:49.781 [2024-05-14 21:48:50.072447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:50.349 EAL: TSC is not safe to use in SMP mode 00:06:50.349 EAL: TSC is not invariant 00:06:50.349 [2024-05-14 21:48:50.652798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.349 [2024-05-14 21:48:50.744656] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:50.349 [2024-05-14 21:48:50.756143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.349 21:48:50 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case 
"$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.349 21:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.350 21:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.350 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.350 21:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:51.731 21:48:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.731 00:06:51.731 real 0m1.868s 00:06:51.731 user 0m1.244s 00:06:51.731 sys 0m0.631s 00:06:51.731 21:48:51 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.731 21:48:51 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:51.731 ************************************ 00:06:51.731 END TEST accel_dif_generate 00:06:51.731 
************************************ 00:06:51.731 21:48:51 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:51.731 21:48:51 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:51.731 21:48:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.731 21:48:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.731 ************************************ 00:06:51.731 START TEST accel_dif_generate_copy 00:06:51.731 ************************************ 00:06:51.731 21:48:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:51.731 21:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:51.732 21:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:51.732 21:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.732 21:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.732 21:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:51.732 21:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Yhvtol -t 1 -w dif_generate_copy 00:06:51.732 [2024-05-14 21:48:51.988198] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:51.732 [2024-05-14 21:48:51.988478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:51.992 EAL: TSC is not safe to use in SMP mode 00:06:51.992 EAL: TSC is not invariant 00:06:51.992 [2024-05-14 21:48:52.543694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.251 [2024-05-14 21:48:52.640488] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 
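dif_generate_copy extends the previous case: besides generating DIF metadata it also copies the data into a destination buffer (that reading of the workload name is an assumption; the log itself only records the opcode). The invocation differs from dif_generate only in the -w value:
  # Hedged sketch: same harness flags, dif_generate_copy opcode.
  ./build/examples/accel_perf -t 1 -w dif_generate_copy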
00:06:52.251 [2024-05-14 21:48:52.648563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:52.251 21:48:52 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.251 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.252 21:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.626 00:06:53.626 real 0m1.839s 00:06:53.626 user 0m1.232s 00:06:53.626 sys 0m0.608s 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.626 21:48:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.626 ************************************ 00:06:53.626 END TEST accel_dif_generate_copy 00:06:53.626 ************************************ 00:06:53.626 21:48:53 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:53.626 21:48:53 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.626 21:48:53 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:53.626 21:48:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.626 21:48:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.626 ************************************ 00:06:53.626 START TEST accel_comp 00:06:53.626 ************************************ 00:06:53.626 21:48:53 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.626 21:48:53 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:53.626 21:48:53 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:53.626 21:48:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.626 21:48:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.626 21:48:53 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.626 21:48:53 accel.accel_comp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.97gwjB -t 1 -w compress -l 
/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.626 [2024-05-14 21:48:53.872484] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:53.626 [2024-05-14 21:48:53.872799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:53.884 EAL: TSC is not safe to use in SMP mode 00:06:53.884 EAL: TSC is not invariant 00:06:53.884 [2024-05-14 21:48:54.429256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.142 [2024-05-14 21:48:54.539103] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:54.142 [2024-05-14 21:48:54.551111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 
21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.142 21:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:55.519 21:48:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.519 00:06:55.519 real 0m1.854s 00:06:55.519 user 0m1.256s 00:06:55.519 sys 0m0.608s 00:06:55.519 21:48:55 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.519 21:48:55 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:55.519 ************************************ 00:06:55.519 END TEST accel_comp 00:06:55.519 ************************************ 00:06:55.519 21:48:55 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.519 21:48:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:55.519 21:48:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.519 21:48:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.519 ************************************ 00:06:55.519 START TEST accel_decomp 00:06:55.519 ************************************ 00:06:55.519 21:48:55 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.519 21:48:55 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:55.519 21:48:55 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:55.519 21:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.519 21:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.519 21:48:55 accel.accel_decomp 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.519 21:48:55 accel.accel_decomp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.bNJI6W -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.519 [2024-05-14 21:48:55.768393] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:55.519 [2024-05-14 21:48:55.768656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:55.778 EAL: TSC is not safe to use in SMP mode 00:06:55.778 EAL: TSC is not invariant 00:06:55.778 [2024-05-14 21:48:56.320916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.087 [2024-05-14 21:48:56.419653] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:56.087 [2024-05-14 21:48:56.432252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 
-- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
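
The xtrace entries around this point are accel.sh's option loop echoing each accel_perf flag into its config (workload, buffer size, module, run time). A minimal sketch for re-running this single-core decompress case by hand follows; the binary and bib paths are copied from this log, the harness-generated -c JSON config is omitted, and the reading of -y as result verification is an assumption rather than something the trace states.

  # Hypothetical manual re-run of the accel_decomp case traced here (paths copied from this log).
  # -t 1           run for one second ('1 seconds' in the trace)
  # -w decompress  operation under test (accel_opc=decompress)
  # -l .../bib     compressed input file used by the suite
  # -y             assumed to enable verification of the decompressed output
  SPDK=/usr/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y
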
00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.087 21:48:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.024 21:48:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.024 00:06:57.024 real 0m1.838s 00:06:57.024 user 0m1.254s 00:06:57.024 sys 0m0.593s 00:06:57.024 21:48:57 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.024 21:48:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:57.024 ************************************ 00:06:57.024 END TEST accel_decomp 00:06:57.024 ************************************ 00:06:57.283 21:48:57 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:57.283 21:48:57 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:57.283 21:48:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.283 21:48:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.283 ************************************ 00:06:57.283 START TEST accel_decmop_full 00:06:57.283 
************************************ 00:06:57.283 21:48:57 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:57.283 21:48:57 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:57.283 21:48:57 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:57.283 21:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.283 21:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.283 21:48:57 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:57.283 21:48:57 accel.accel_decmop_full -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.RIWt3r -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:57.283 [2024-05-14 21:48:57.642755] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:57.283 [2024-05-14 21:48:57.642987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:57.850 EAL: TSC is not safe to use in SMP mode 00:06:57.850 EAL: TSC is not invariant 00:06:57.850 [2024-05-14 21:48:58.240907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.850 [2024-05-14 21:48:58.328737] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 
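
accel_decmop_full repeats the decompress case with -o 0 appended. In the trace that follows, the recorded payload is '111250 bytes' instead of the '4096 bytes' seen in the earlier runs, which suggests (an inference from this log, not a documented flag description) that -o 0 makes accel_perf work on the whole bib input rather than the default 4 KiB chunk. A hedged sketch of the equivalent manual invocation:

  # Full-buffer decompress variant as traced below; the interpretation of -o 0 is assumed.
  SPDK=/usr/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0
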
00:06:57.850 [2024-05-14 21:48:58.338170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 
21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.850 21:48:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:59.226 21:48:59 
accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.226 21:48:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.226 00:06:59.226 real 0m1.878s 00:06:59.226 user 0m1.240s 00:06:59.226 sys 0m0.646s 00:06:59.226 21:48:59 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.226 21:48:59 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:59.226 ************************************ 00:06:59.226 END TEST accel_decmop_full 00:06:59.226 ************************************ 00:06:59.226 21:48:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.226 21:48:59 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:59.226 21:48:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.226 21:48:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.226 ************************************ 00:06:59.226 START TEST accel_decomp_mcore 00:06:59.226 ************************************ 00:06:59.226 21:48:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.226 21:48:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:59.226 21:48:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:59.226 21:48:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.226 21:48:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.226 21:48:59 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.226 21:48:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.1VeqCi -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.226 [2024-05-14 21:48:59.562319] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:06:59.226 [2024-05-14 21:48:59.562544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:59.793 EAL: TSC is not safe to use in SMP mode 00:06:59.793 EAL: TSC is not invariant 00:06:59.793 [2024-05-14 21:49:00.123976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.793 [2024-05-14 21:49:00.214180] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:59.793 [2024-05-14 21:49:00.214273] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:59.793 [2024-05-14 21:49:00.214288] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:06:59.793 [2024-05-14 21:49:00.214300] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:59.793 [2024-05-14 21:49:00.226360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.793 [2024-05-14 21:49:00.226410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.793 [2024-05-14 21:49:00.226690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.793 [2024-05-14 21:49:00.226684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.793 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
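
For accel_decomp_mcore the harness adds -m 0xf, and the notices above show four reactors starting on cores 0 through 3 instead of one. A sketch of the multicore variant, with paths again taken from this log and the core-mask reading based on those reactor messages rather than on any flag documentation:

  # Multicore decompress variant; 0xf selects four cores (cores 0-3 per the reactor notices above).
  SPDK=/usr/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf
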
00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.794 21:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.211 00:07:01.211 real 0m1.840s 00:07:01.211 user 0m4.352s 00:07:01.211 sys 0m0.617s 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.211 21:49:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:01.211 ************************************ 00:07:01.211 END TEST accel_decomp_mcore 00:07:01.211 ************************************ 00:07:01.211 21:49:01 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.211 21:49:01 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:01.211 21:49:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.211 21:49:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.211 ************************************ 00:07:01.211 START TEST accel_decomp_full_mcore 00:07:01.211 ************************************ 00:07:01.211 21:49:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.211 21:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:01.212 21:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:01.212 21:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.212 21:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.212 21:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.212 21:49:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.3aXnWB -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.212 [2024-05-14 21:49:01.444804] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:01.212 [2024-05-14 21:49:01.445039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:01.473 EAL: TSC is not safe to use in SMP mode 00:07:01.473 EAL: TSC is not invariant 00:07:01.473 [2024-05-14 21:49:02.030302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.732 [2024-05-14 21:49:02.143870] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:01.732 [2024-05-14 21:49:02.143942] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:01.732 [2024-05-14 21:49:02.143954] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:07:01.732 [2024-05-14 21:49:02.143964] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:01.732 [2024-05-14 21:49:02.157653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.732 [2024-05-14 21:49:02.157800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.732 [2024-05-14 21:49:02.157718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.732 [2024-05-14 21:49:02.157791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.732 21:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read 
-r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.109 00:07:03.109 real 0m1.904s 00:07:03.109 user 0m4.436s 00:07:03.109 sys 0m0.636s 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.109 21:49:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:03.109 ************************************ 00:07:03.109 END TEST accel_decomp_full_mcore 00:07:03.109 ************************************ 00:07:03.109 21:49:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.109 21:49:03 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:03.109 21:49:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.109 21:49:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.109 ************************************ 00:07:03.109 START TEST accel_decomp_mthread 00:07:03.109 ************************************ 00:07:03.109 21:49:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.109 21:49:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:03.109 21:49:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:03.109 21:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.109 21:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.109 21:49:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.109 21:49:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.83rSs4 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.109 [2024-05-14 21:49:03.389114] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
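
accel_decomp_mthread stays on a single core but passes -T 2; the value 2 is read into the config loop below, and -T presumably sets the number of worker threads per core (an assumption, the trace only records the number). A sketch of the equivalent command:

  # Multithread decompress variant; the meaning of -T (threads per core) is assumed, the value 2 is from the trace.
  SPDK=/usr/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
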
00:07:03.109 [2024-05-14 21:49:03.389328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:03.367 EAL: TSC is not safe to use in SMP mode 00:07:03.367 EAL: TSC is not invariant 00:07:03.367 [2024-05-14 21:49:03.934375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.627 [2024-05-14 21:49:04.026398] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:03.627 [2024-05-14 21:49:04.037221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 
21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.627 21:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.005 00:07:05.005 real 0m1.829s 00:07:05.005 user 0m1.250s 00:07:05.005 sys 0m0.592s 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:07:05.005 21:49:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:05.005 ************************************ 00:07:05.005 END TEST accel_decomp_mthread 00:07:05.005 ************************************ 00:07:05.005 21:49:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.005 21:49:05 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:05.005 21:49:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.005 21:49:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.005 ************************************ 00:07:05.005 START TEST accel_decomp_full_mthread 00:07:05.005 ************************************ 00:07:05.005 21:49:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.005 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.005 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.005 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.005 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.005 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.005 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.d9yc5t -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.005 [2024-05-14 21:49:05.264389] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:05.005 [2024-05-14 21:49:05.264570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:05.307 EAL: TSC is not safe to use in SMP mode 00:07:05.307 EAL: TSC is not invariant 00:07:05.307 [2024-05-14 21:49:05.850350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.565 [2024-05-14 21:49:05.958562] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
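The bulk of these traces is the harness replaying each accel setting (opcode, module, buffer size, thread count) through the same IFS=: / read -r var val / case "$var" in loop. The snippet below is not the accel.sh source, only a minimal sketch of that parsing idiom, assuming the settings arrive as colon-separated "name: value" lines:

  # Mirror the pattern visible in the xtrace output: split "name: value" lines
  # on ':' and route each name through a case statement into shell variables.
  parse_accel_settings() {
      while IFS=: read -r var val; do
          val=${val# }                      # drop the leading space after the colon
          case "$var" in
              opc)     accel_opc=$val ;;    # e.g. decompress
              module)  accel_module=$val ;; # e.g. software
              threads) thread_count=$val ;;
              *)       : ;;                 # ignore anything unrecognized
          esac
      done
      echo "opcode=$accel_opc module=$accel_module threads=$thread_count"
  }
  printf 'opc: decompress\nmodule: software\nthreads: 2\n' | parse_accel_settings
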
00:07:05.565 [2024-05-14 21:49:05.967721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.565 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.566 21:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.943 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.944 00:07:06.944 real 0m1.918s 00:07:06.944 user 0m1.304s 00:07:06.944 sys 0m0.630s 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.944 ************************************ 00:07:06.944 END TEST accel_decomp_full_mthread 00:07:06.944 21:49:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:06.944 ************************************ 00:07:06.944 21:49:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:06.944 21:49:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.j0sltV 00:07:06.944 21:49:07 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:06.944 21:49:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.944 21:49:07 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.944 ************************************ 00:07:06.944 START TEST accel_dif_functional_tests 00:07:06.944 ************************************ 00:07:06.944 21:49:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.j0sltV 00:07:06.944 [2024-05-14 21:49:07.233286] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:06.944 [2024-05-14 21:49:07.233588] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:07.203 EAL: TSC is not safe to use in SMP mode 00:07:07.203 EAL: TSC is not invariant 00:07:07.203 [2024-05-14 21:49:07.792501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.462 [2024-05-14 21:49:07.883600] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:07.463 [2024-05-14 21:49:07.883673] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:07.463 [2024-05-14 21:49:07.883684] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:07:07.463 21:49:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:07.463 21:49:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.463 21:49:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.463 21:49:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.463 21:49:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.463 21:49:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.463 21:49:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:07.463 21:49:07 accel -- accel/accel.sh@41 -- # jq -r . 00:07:07.463 [2024-05-14 21:49:07.893148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.463 [2024-05-14 21:49:07.893083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.463 [2024-05-14 21:49:07.893142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.463 00:07:07.463 00:07:07.463 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.463 http://cunit.sourceforge.net/ 00:07:07.463 00:07:07.463 00:07:07.463 Suite: accel_dif 00:07:07.463 Test: verify: DIF generated, GUARD check ...passed 00:07:07.463 Test: verify: DIF generated, APPTAG check ...passed 00:07:07.463 Test: verify: DIF generated, REFTAG check ...passed 00:07:07.463 Test: verify: DIF not generated, GUARD check ...passed 00:07:07.463 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 21:49:07.909632] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.463 [2024-05-14 21:49:07.909702] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.463 [2024-05-14 21:49:07.909754] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.463 passed 00:07:07.463 Test: verify: DIF not generated, REFTAG check ...passed 00:07:07.463 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:07.463 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:07.463 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:07.463 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-05-14 21:49:07.909785] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.463 [2024-05-14 21:49:07.909817] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to 
compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.463 [2024-05-14 21:49:07.909850] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.463 [2024-05-14 21:49:07.909895] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:07.463 passed 00:07:07.463 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:07.463 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:07.463 Test: generate copy: DIF generated, GUARD check ...[2024-05-14 21:49:07.909987] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:07.463 passed 00:07:07.463 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:07.463 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:07.463 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:07.463 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:07.463 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:07.463 Test: generate copy: iovecs-len validate ...passed 00:07:07.463 Test: generate copy: buffer alignment validate ...passed 00:07:07.463 00:07:07.463 [2024-05-14 21:49:07.910166] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:07.463 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.463 suites 1 1 n/a 0 0 00:07:07.463 tests 20 20 20 0 0 00:07:07.463 asserts 204 204 204 0 n/a 00:07:07.463 00:07:07.463 Elapsed time = 0.000 seconds 00:07:07.722 00:07:07.722 real 0m0.877s 00:07:07.722 user 0m0.423s 00:07:07.722 sys 0m0.602s 00:07:07.722 21:49:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.722 21:49:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 ************************************ 00:07:07.722 END TEST accel_dif_functional_tests 00:07:07.722 ************************************ 00:07:07.722 21:49:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:07.722 00:07:07.722 real 0m42.054s 00:07:07.722 user 0m33.766s 00:07:07.722 sys 0m15.201s 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:07.722 21:49:08 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.722 21:49:08 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:07.722 21:49:08 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.722 21:49:08 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.722 21:49:08 
accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.722 21:49:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:07.722 ************************************ 00:07:07.722 END TEST accel 00:07:07.722 ************************************ 00:07:07.722 21:49:08 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:07.722 21:49:08 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:07.722 21:49:08 -- spdk/autotest.sh@180 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:07.722 21:49:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.722 21:49:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.722 21:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 ************************************ 00:07:07.722 START TEST accel_rpc 00:07:07.722 ************************************ 00:07:07.722 21:49:08 accel_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:07.981 * Looking for test storage... 00:07:07.981 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:07:07.981 21:49:08 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:07.981 21:49:08 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47836 00:07:07.981 21:49:08 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47836 00:07:07.981 21:49:08 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 47836 ']' 00:07:07.981 21:49:08 accel_rpc -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:07.981 21:49:08 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.981 21:49:08 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.981 21:49:08 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.981 21:49:08 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.981 21:49:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.981 [2024-05-14 21:49:08.328509] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:07.981 [2024-05-14 21:49:08.328744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:08.546 EAL: TSC is not safe to use in SMP mode 00:07:08.546 EAL: TSC is not invariant 00:07:08.546 [2024-05-14 21:49:08.878567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.546 [2024-05-14 21:49:08.968670] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
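The spdk_tgt above was started with --wait-for-rpc, so the accel_rpc test can reassign the copy opcode before the framework finishes initializing; the assignment, the framework_start_init call, and the readback follow in the next trace entries. A minimal sketch of the same flow driven through scripts/rpc.py, using only the RPC method names the trace shows (the sleep is a crude stand-in for the waitforlisten helper):

  # Drive a paused (--wait-for-rpc) spdk_tgt through the same RPC sequence the
  # accel_rpc test uses: reassign the copy opcode, then finish initialization.
  SPDK_ROOT=/usr/home/vagrant/spdk_repo/spdk          # path taken from the trace
  RPC="$SPDK_ROOT/scripts/rpc.py"

  "$SPDK_ROOT/build/bin/spdk_tgt" --wait-for-rpc &
  spdk_tgt_pid=$!
  trap 'kill "$spdk_tgt_pid" 2>/dev/null' EXIT        # crude stand-in for killprocess
  sleep 1                                             # crude stand-in for waitforlisten

  "$RPC" accel_assign_opc -o copy -m software         # pin the copy opcode to the software module
  "$RPC" framework_start_init                         # let the framework finish starting up
  "$RPC" accel_get_opc_assignments | jq -r .copy      # expected to print: software
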
00:07:08.546 [2024-05-14 21:49:08.970933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.804 21:49:09 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.804 21:49:09 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:08.804 21:49:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:08.804 21:49:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:08.804 21:49:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:08.804 21:49:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:08.804 21:49:09 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:08.804 21:49:09 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.804 21:49:09 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.804 21:49:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.804 ************************************ 00:07:08.804 START TEST accel_assign_opcode 00:07:08.804 ************************************ 00:07:08.804 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:08.804 21:49:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:08.804 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.804 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.804 [2024-05-14 21:49:09.391258] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.062 [2024-05-14 21:49:09.399246] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.062 software 00:07:09.062 00:07:09.062 real 0m0.073s 00:07:09.062 user 0m0.013s 00:07:09.062 sys 0m0.003s 00:07:09.062 21:49:09 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.063 21:49:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.063 ************************************ 00:07:09.063 END TEST accel_assign_opcode 00:07:09.063 ************************************ 00:07:09.063 21:49:09 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47836 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 47836 ']' 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 47836 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 47836 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@954 -- # tail -1 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:07:09.063 killing process with pid 47836 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47836' 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@965 -- # kill 47836 00:07:09.063 21:49:09 accel_rpc -- common/autotest_common.sh@970 -- # wait 47836 00:07:09.320 00:07:09.320 real 0m1.616s 00:07:09.320 user 0m1.485s 00:07:09.320 sys 0m0.784s 00:07:09.320 ************************************ 00:07:09.320 END TEST accel_rpc 00:07:09.320 ************************************ 00:07:09.320 21:49:09 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.320 21:49:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.320 21:49:09 -- spdk/autotest.sh@181 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:09.320 21:49:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.320 21:49:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.320 21:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:09.320 ************************************ 00:07:09.320 START TEST app_cmdline 00:07:09.320 ************************************ 00:07:09.320 21:49:09 app_cmdline -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:09.578 * Looking for test storage... 00:07:09.578 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:07:09.578 21:49:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:09.578 21:49:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47918 00:07:09.578 21:49:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47918 00:07:09.578 21:49:09 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 47918 ']' 00:07:09.578 21:49:09 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.578 21:49:09 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.578 21:49:09 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
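accel_rpc.sh above and cmdline.sh here follow the same target lifecycle: record the spdk_tgt PID, register a trap so the target is killed even when an assertion aborts the script, and block in waitforlisten until the RPC socket answers. A hedged sketch of that pattern, with plain shell standing in for the killprocess/waitforlisten helpers from autotest_common.sh:

  # Start a target, wait for its RPC socket, and guarantee cleanup on any exit path.
  SPDK_ROOT=/usr/home/vagrant/spdk_repo/spdk           # path taken from the trace
  RPC="$SPDK_ROOT/scripts/rpc.py"

  "$SPDK_ROOT/build/bin/spdk_tgt" &
  spdk_tgt_pid=$!
  trap 'kill "$spdk_tgt_pid" 2>/dev/null; wait "$spdk_tgt_pid" 2>/dev/null' EXIT

  # Poll the default socket (/var/tmp/spdk.sock) much like waitforlisten does.
  for ((i = 0; i < 100; i++)); do
      "$RPC" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done

  "$RPC" spdk_get_version                              # the real test body goes here
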
00:07:09.578 21:49:09 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.578 21:49:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.578 21:49:09 app_cmdline -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:09.578 [2024-05-14 21:49:09.973928] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:09.578 [2024-05-14 21:49:09.974175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:10.144 EAL: TSC is not safe to use in SMP mode 00:07:10.144 EAL: TSC is not invariant 00:07:10.144 [2024-05-14 21:49:10.511949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.144 [2024-05-14 21:49:10.597452] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:10.144 [2024-05-14 21:49:10.599713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:10.710 { 00:07:10.710 "version": "SPDK v24.05-pre git sha1 52939f252", 00:07:10.710 "fields": { 00:07:10.710 "major": 24, 00:07:10.710 "minor": 5, 00:07:10.710 "patch": 0, 00:07:10.710 "suffix": "-pre", 00:07:10.710 "commit": "52939f252" 00:07:10.710 } 00:07:10.710 } 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:10.710 21:49:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:10.710 21:49:11 app_cmdline -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.968 request: 00:07:10.968 { 00:07:10.968 "method": "env_dpdk_get_mem_stats", 00:07:10.968 "req_id": 1 00:07:10.968 } 00:07:10.968 Got JSON-RPC error response 00:07:10.968 response: 00:07:10.968 { 00:07:10.968 "code": -32601, 00:07:10.968 "message": "Method not found" 00:07:10.968 } 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.968 21:49:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47918 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 47918 ']' 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 47918 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@954 -- # ps -c -o command 47918 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@954 -- # tail -1 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:07:10.968 killing process with pid 47918 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47918' 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@965 -- # kill 47918 00:07:10.968 21:49:11 app_cmdline -- common/autotest_common.sh@970 -- # wait 47918 00:07:11.225 00:07:11.225 real 0m1.953s 00:07:11.225 user 0m2.289s 00:07:11.225 sys 0m0.745s 00:07:11.225 21:49:11 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.225 ************************************ 00:07:11.225 END TEST app_cmdline 00:07:11.225 ************************************ 00:07:11.225 21:49:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.483 21:49:11 -- spdk/autotest.sh@182 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:11.483 21:49:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.483 21:49:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.483 21:49:11 -- common/autotest_common.sh@10 -- # set +x 00:07:11.483 ************************************ 00:07:11.483 START TEST version 00:07:11.483 ************************************ 00:07:11.483 21:49:11 version -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:11.483 * Looking for test storage... 
00:07:11.483 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:07:11.483 21:49:11 version -- app/version.sh@17 -- # get_header_version major 00:07:11.483 21:49:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.483 21:49:11 version -- app/version.sh@14 -- # cut -f2 00:07:11.483 21:49:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.483 21:49:11 version -- app/version.sh@17 -- # major=24 00:07:11.483 21:49:11 version -- app/version.sh@18 -- # get_header_version minor 00:07:11.483 21:49:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.483 21:49:11 version -- app/version.sh@14 -- # cut -f2 00:07:11.483 21:49:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.483 21:49:11 version -- app/version.sh@18 -- # minor=5 00:07:11.483 21:49:11 version -- app/version.sh@19 -- # get_header_version patch 00:07:11.483 21:49:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.484 21:49:11 version -- app/version.sh@14 -- # cut -f2 00:07:11.484 21:49:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.484 21:49:11 version -- app/version.sh@19 -- # patch=0 00:07:11.484 21:49:11 version -- app/version.sh@20 -- # get_header_version suffix 00:07:11.484 21:49:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.484 21:49:11 version -- app/version.sh@14 -- # cut -f2 00:07:11.484 21:49:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.484 21:49:11 version -- app/version.sh@20 -- # suffix=-pre 00:07:11.484 21:49:11 version -- app/version.sh@22 -- # version=24.5 00:07:11.484 21:49:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:11.484 21:49:12 version -- app/version.sh@28 -- # version=24.5rc0 00:07:11.484 21:49:12 version -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:07:11.484 21:49:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:11.484 21:49:12 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:11.484 21:49:12 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:11.484 00:07:11.484 real 0m0.206s 00:07:11.484 user 0m0.146s 00:07:11.484 sys 0m0.151s 00:07:11.484 21:49:12 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.484 21:49:12 version -- common/autotest_common.sh@10 -- # set +x 00:07:11.484 ************************************ 00:07:11.484 END TEST version 00:07:11.484 ************************************ 00:07:11.484 21:49:12 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:07:11.484 21:49:12 -- spdk/autotest.sh@185 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:07:11.484 21:49:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.484 21:49:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.484 21:49:12 -- common/autotest_common.sh@10 -- # set +x 00:07:11.742 ************************************ 00:07:11.742 START TEST blockdev_general 00:07:11.742 ************************************ 
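Stepping back to the version test that finished just above: each component comes from a one-line grep/cut/tr pipeline over include/spdk/version.h, and the assembled string is then checked against the Python package. A condensed sketch of that derivation (version.sh additionally normalizes the -pre suffix to rc0 before the final comparison shown in the trace):

  # Re-derive the SPDK version string the way version.sh does.
  SPDK_ROOT=/usr/home/vagrant/spdk_repo/spdk             # path taken from the trace
  hdr="$SPDK_ROOT/include/spdk/version.h"

  get_header_version() {                                 # e.g. MAJOR -> 24
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }

  major=$(get_header_version MAJOR)                      # 24 in this run
  minor=$(get_header_version MINOR)                      # 5
  patch=$(get_header_version PATCH)                      # 0
  suffix=$(get_header_version SUFFIX)                    # -pre

  version="$major.$minor"
  [ "$patch" != 0 ] && version="$version.$patch"         # patch is skipped when it is 0
  echo "header says: $version$suffix"

  # Cross-check against the Python bindings, as the test does.
  PYTHONPATH="$SPDK_ROOT/python" python3 -c 'import spdk; print(spdk.__version__)'
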
00:07:11.742 21:49:12 blockdev_general -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:07:11.742 * Looking for test storage... 00:07:11.742 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:11.742 21:49:12 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=48053 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 48053 00:07:11.742 21:49:12 blockdev_general -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:11.742 21:49:12 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 48053 ']' 00:07:11.742 21:49:12 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.742 21:49:12 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.742 21:49:12 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:11.742 21:49:12 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.742 21:49:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:11.742 [2024-05-14 21:49:12.295011] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:11.742 [2024-05-14 21:49:12.295263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:12.309 EAL: TSC is not safe to use in SMP mode 00:07:12.309 EAL: TSC is not invariant 00:07:12.309 [2024-05-14 21:49:12.872961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.567 [2024-05-14 21:49:12.960141] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:12.567 [2024-05-14 21:49:12.962415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.135 [2024-05-14 21:49:13.527830] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:13.135 [2024-05-14 21:49:13.527882] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:13.135 00:07:13.135 [2024-05-14 21:49:13.535818] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:13.135 [2024-05-14 21:49:13.535847] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:13.135 00:07:13.135 Malloc0 00:07:13.135 Malloc1 00:07:13.135 Malloc2 00:07:13.135 Malloc3 00:07:13.135 Malloc4 00:07:13.135 Malloc5 00:07:13.135 Malloc6 00:07:13.135 Malloc7 00:07:13.135 Malloc8 00:07:13.135 Malloc9 00:07:13.135 [2024-05-14 21:49:13.623825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:13.135 [2024-05-14 21:49:13.623863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.135 [2024-05-14 21:49:13.623880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829a70700 00:07:13.135 [2024-05-14 21:49:13.623889] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.135 [2024-05-14 21:49:13.624250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.135 [2024-05-14 21:49:13.624279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:13.135 TestPT 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:13.135 5000+0 records in 00:07:13.135 5000+0 records out 00:07:13.135 10240000 bytes transferred in 0.030263 secs (338364098 bytes/sec) 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:07:13.135 
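The dd/bdev_aio_create pair just above hands the blockdev suite an AIO bdev backed by an ordinary file: 5000 blocks of 2048 bytes, matching the 10240000 bytes dd reports. A minimal sketch of the same setup against a running target, with a follow-up query to confirm the bdev registered (file and script paths are the ones in the trace):

  # Create a 10,240,000-byte backing file and expose it as AIO bdev "AIO0".
  SPDK_ROOT=/usr/home/vagrant/spdk_repo/spdk                 # path taken from the trace
  AIOFILE="$SPDK_ROOT/test/bdev/aiofile"

  dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000           # 5000 * 2048 = 10240000 bytes
  "$SPDK_ROOT/scripts/rpc.py" bdev_aio_create "$AIOFILE" AIO0 2048   # filename, bdev name, block size
  "$SPDK_ROOT/scripts/rpc.py" bdev_get_bdevs -b AIO0                 # confirm the bdev shows up
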
21:49:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.135 AIO0 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:07:13.135 21:49:13 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.135 21:49:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.395 21:49:13 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.395 21:49:13 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.395 21:49:13 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:13.395 21:49:13 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:13.395 21:49:13 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.395 21:49:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.395 21:49:13 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:13.395 21:49:13 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:13.396 21:49:13 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "cdb68e22-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cdb68e22-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' 
"48da91eb-b1da-c15b-95b3-2a81b0846075"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48da91eb-b1da-c15b-95b3-2a81b0846075",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3f9bbd76-e241-f657-8686-34e625ea09ba"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3f9bbd76-e241-f657-8686-34e625ea09ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "070eda0b-1661-5753-88ef-7995afe01c53"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "070eda0b-1661-5753-88ef-7995afe01c53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "567a8cda-7395-7459-9bc7-40ca067b23d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "567a8cda-7395-7459-9bc7-40ca067b23d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "444069eb-d319-7a5a-a493-deee636f13d4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "444069eb-d319-7a5a-a493-deee636f13d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "93a8a563-e85d-5f5d-bfc8-94c67b52c8a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93a8a563-e85d-5f5d-bfc8-94c67b52c8a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "87296283-f2f5-fd54-8a70-588d7e0add4b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "87296283-f2f5-fd54-8a70-588d7e0add4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "cef63e78-3ba4-9f55-a7da-890f60c4d41b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cef63e78-3ba4-9f55-a7da-890f60c4d41b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2fd57684-9cc4-5e54-a00c-d7617e94c9c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2fd57684-9cc4-5e54-a00c-d7617e94c9c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0d472ade-98c0-1956-805a-d44b980c1a55"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d472ade-98c0-1956-805a-d44b980c1a55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "01fd9c76-1caa-4a55-90d3-a62a74500935"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "01fd9c76-1caa-4a55-90d3-a62a74500935",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "cdc407de-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cdc407de-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc407de-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cdbb6faf-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "cdbca829-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "cdc53496-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cdc53496-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc53496-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "cdbde0b3-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "cdbf1932-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "cdc66d0c-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cdc66d0c-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc66d0c-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "cdc051b4-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "cdc18a31-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "cdcf958d-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "cdcf958d-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:13.396 
21:49:13 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:13.396 21:49:13 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:07:13.396 21:49:13 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:13.396 21:49:13 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 48053 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@946 -- # '[' -z 48053 ']' 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 48053 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@954 -- # ps -c -o command 48053 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@954 -- # tail -1 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:07:13.396 killing process with pid 48053 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48053' 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@965 -- # kill 48053 00:07:13.396 21:49:13 blockdev_general -- common/autotest_common.sh@970 -- # wait 48053 00:07:13.963 21:49:14 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:13.963 21:49:14 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:13.963 21:49:14 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:13.963 21:49:14 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.963 21:49:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:13.963 ************************************ 00:07:13.963 START TEST bdev_hello_world 00:07:13.963 ************************************ 00:07:13.963 21:49:14 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:13.963 [2024-05-14 21:49:14.282577] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:13.963 [2024-05-14 21:49:14.282833] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:14.529 EAL: TSC is not safe to use in SMP mode 00:07:14.529 EAL: TSC is not invariant 00:07:14.529 [2024-05-14 21:49:14.831346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.529 [2024-05-14 21:49:14.928998] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
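bdev_hello_world, started just above, points the hello_bdev example app at the same bdev.json and at Malloc0 (hello_world_bdev picked at blockdev.sh@752): the app opens the bdev and an I/O channel, writes a "Hello World!" buffer, reads it back, and stops, which is exactly the NOTICE sequence that follows. Run by hand it would be roughly (repo-relative paths assumed):

  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Malloc0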
00:07:14.529 [2024-05-14 21:49:14.931765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.529 [2024-05-14 21:49:14.996757] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:14.529 [2024-05-14 21:49:14.996839] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:14.529 [2024-05-14 21:49:15.004732] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:14.529 [2024-05-14 21:49:15.004808] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:14.529 [2024-05-14 21:49:15.012753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:14.529 [2024-05-14 21:49:15.012826] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:14.529 [2024-05-14 21:49:15.012853] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:14.529 [2024-05-14 21:49:15.060741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:14.529 [2024-05-14 21:49:15.060792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.529 [2024-05-14 21:49:15.060810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd1a800 00:07:14.529 [2024-05-14 21:49:15.060820] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.529 [2024-05-14 21:49:15.061179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.529 [2024-05-14 21:49:15.061210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:14.786 [2024-05-14 21:49:15.160854] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:14.786 [2024-05-14 21:49:15.160905] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:14.786 [2024-05-14 21:49:15.160920] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:14.786 [2024-05-14 21:49:15.160947] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:14.786 [2024-05-14 21:49:15.160961] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:14.786 [2024-05-14 21:49:15.160970] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:14.786 [2024-05-14 21:49:15.160986] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
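Worth noting in passing: killprocess, used above to stop the spdk_tgt with pid 48053 and again later for bdevio, takes a FreeBSD-specific branch that resolves the process name with ps -c instead of the Linux code path. A simplified reconstruction of what the xtrace shows (the real helper lives in autotest_common.sh and also special-cases processes started via sudo):

  kill -0 "$pid"                                      # still running?
  process_name=$(ps -c -o command "$pid" | tail -1)   # FreeBSD: bare executable name
  if [ "$process_name" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  fi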
00:07:14.786 00:07:14.786 [2024-05-14 21:49:15.160996] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:15.044 00:07:15.044 real 0m1.123s 00:07:15.044 user 0m0.549s 00:07:15.044 sys 0m0.572s 00:07:15.044 21:49:15 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.044 21:49:15 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:15.044 ************************************ 00:07:15.044 END TEST bdev_hello_world 00:07:15.044 ************************************ 00:07:15.044 21:49:15 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:15.044 21:49:15 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:15.044 21:49:15 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.044 21:49:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:15.044 ************************************ 00:07:15.044 START TEST bdev_bounds 00:07:15.044 ************************************ 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=48105 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.044 Process bdevio pid: 48105 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 48105' 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 48105 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 48105 ']' 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.044 21:49:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:15.044 [2024-05-14 21:49:15.449504] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:15.044 [2024-05-14 21:49:15.449760] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:15.609 EAL: TSC is not safe to use in SMP mode 00:07:15.610 EAL: TSC is not invariant 00:07:15.610 [2024-05-14 21:49:15.994253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.610 [2024-05-14 21:49:16.077885] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:15.610 [2024-05-14 21:49:16.077953] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
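The bdev_bounds stage that has just started follows the same pattern as the main target: bdevio is launched in wait mode (-w) with 2048 MB of reserved memory (-s 2048, the PRE_RESERVED_MEM chosen earlier for FreeBSD) and the saved bdev.json, and once waitforlisten sees it answering on the RPC socket, tests.py drives the I/O boundary suites (the 16 suites / 368 tests summarized below). Reduced to its essentials (paths assumed, cleanup omitted):

  test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # once the RPC socket answers, run every registered boundary suite
  test/bdev/bdevio/tests.py perform_tests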
00:07:15.610 [2024-05-14 21:49:16.077964] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:07:15.610 [2024-05-14 21:49:16.081524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.610 [2024-05-14 21:49:16.081635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.610 [2024-05-14 21:49:16.081628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.610 [2024-05-14 21:49:16.140365] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:15.610 [2024-05-14 21:49:16.140418] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:15.610 [2024-05-14 21:49:16.148347] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:15.610 [2024-05-14 21:49:16.148382] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:15.610 [2024-05-14 21:49:16.156361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:15.610 [2024-05-14 21:49:16.156396] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:15.610 [2024-05-14 21:49:16.156406] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:15.867 [2024-05-14 21:49:16.204377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:15.867 [2024-05-14 21:49:16.204430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.867 [2024-05-14 21:49:16.204449] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dafd800 00:07:15.867 [2024-05-14 21:49:16.204458] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.867 [2024-05-14 21:49:16.204854] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.867 [2024-05-14 21:49:16.204887] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:16.167 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.167 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:07:16.167 21:49:16 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:16.167 I/O targets: 00:07:16.167 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:16.167 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:16.167 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:16.167 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:16.167 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:16.167 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:16.167 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:16.167 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:16.167 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:16.167 00:07:16.167 00:07:16.167 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.167 http://cunit.sourceforge.net/ 00:07:16.167 00:07:16.167 00:07:16.167 Suite: bdevio tests on: 
AIO0 00:07:16.167 Test: blockdev write read block ...passed 00:07:16.167 Test: blockdev write zeroes read block ...passed 00:07:16.167 Test: blockdev write zeroes read no split ...passed 00:07:16.167 Test: blockdev write zeroes read split ...passed 00:07:16.167 Test: blockdev write zeroes read split partial ...passed 00:07:16.167 Test: blockdev reset ...passed 00:07:16.167 Test: blockdev write read 8 blocks ...passed 00:07:16.167 Test: blockdev write read size > 128k ...passed 00:07:16.167 Test: blockdev write read invalid size ...passed 00:07:16.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.167 Test: blockdev write read max offset ...passed 00:07:16.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.167 Test: blockdev writev readv 8 blocks ...passed 00:07:16.167 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.167 Test: blockdev writev readv block ...passed 00:07:16.167 Test: blockdev writev readv size > 128k ...passed 00:07:16.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.167 Test: blockdev comparev and writev ...passed 00:07:16.167 Test: blockdev nvme passthru rw ...passed 00:07:16.167 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.167 Test: blockdev nvme admin passthru ...passed 00:07:16.167 Test: blockdev copy ...passed 00:07:16.167 Suite: bdevio tests on: raid1 00:07:16.167 Test: blockdev write read block ...passed 00:07:16.167 Test: blockdev write zeroes read block ...passed 00:07:16.167 Test: blockdev write zeroes read no split ...passed 00:07:16.167 Test: blockdev write zeroes read split ...passed 00:07:16.167 Test: blockdev write zeroes read split partial ...passed 00:07:16.167 Test: blockdev reset ...passed 00:07:16.167 Test: blockdev write read 8 blocks ...passed 00:07:16.167 Test: blockdev write read size > 128k ...passed 00:07:16.167 Test: blockdev write read invalid size ...passed 00:07:16.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.167 Test: blockdev write read max offset ...passed 00:07:16.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.167 Test: blockdev writev readv 8 blocks ...passed 00:07:16.167 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.167 Test: blockdev writev readv block ...passed 00:07:16.167 Test: blockdev writev readv size > 128k ...passed 00:07:16.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.167 Test: blockdev comparev and writev ...passed 00:07:16.167 Test: blockdev nvme passthru rw ...passed 00:07:16.167 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.167 Test: blockdev nvme admin passthru ...passed 00:07:16.167 Test: blockdev copy ...passed 00:07:16.167 Suite: bdevio tests on: concat0 00:07:16.167 Test: blockdev write read block ...passed 00:07:16.167 Test: blockdev write zeroes read block ...passed 00:07:16.167 Test: blockdev write zeroes read no split ...passed 00:07:16.167 Test: blockdev write zeroes read split ...passed 00:07:16.167 Test: blockdev write zeroes read split partial ...passed 00:07:16.167 Test: blockdev reset ...passed 00:07:16.167 Test: blockdev write read 8 blocks ...passed 00:07:16.167 Test: blockdev write read size > 128k ...passed 00:07:16.167 Test: blockdev write read invalid size ...passed 00:07:16.167 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.167 Test: blockdev write read max offset ...passed 00:07:16.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.167 Test: blockdev writev readv 8 blocks ...passed 00:07:16.167 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.167 Test: blockdev writev readv block ...passed 00:07:16.167 Test: blockdev writev readv size > 128k ...passed 00:07:16.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.167 Test: blockdev comparev and writev ...passed 00:07:16.167 Test: blockdev nvme passthru rw ...passed 00:07:16.167 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.167 Test: blockdev nvme admin passthru ...passed 00:07:16.167 Test: blockdev copy ...passed 00:07:16.167 Suite: bdevio tests on: raid0 00:07:16.167 Test: blockdev write read block ...passed 00:07:16.167 Test: blockdev write zeroes read block ...passed 00:07:16.167 Test: blockdev write zeroes read no split ...passed 00:07:16.167 Test: blockdev write zeroes read split ...passed 00:07:16.167 Test: blockdev write zeroes read split partial ...passed 00:07:16.167 Test: blockdev reset ...passed 00:07:16.167 Test: blockdev write read 8 blocks ...passed 00:07:16.167 Test: blockdev write read size > 128k ...passed 00:07:16.167 Test: blockdev write read invalid size ...passed 00:07:16.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.167 Test: blockdev write read max offset ...passed 00:07:16.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.167 Test: blockdev writev readv 8 blocks ...passed 00:07:16.167 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.167 Test: blockdev writev readv block ...passed 00:07:16.167 Test: blockdev writev readv size > 128k ...passed 00:07:16.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.167 Test: blockdev comparev and writev ...passed 00:07:16.167 Test: blockdev nvme passthru rw ...passed 00:07:16.167 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.167 Test: blockdev nvme admin passthru ...passed 00:07:16.167 Test: blockdev copy ...passed 00:07:16.167 Suite: bdevio tests on: TestPT 00:07:16.167 Test: blockdev write read block ...passed 00:07:16.167 Test: blockdev write zeroes read block ...passed 00:07:16.167 Test: blockdev write zeroes read no split ...passed 00:07:16.167 Test: blockdev write zeroes read split ...passed 00:07:16.427 Test: blockdev write zeroes read split partial ...passed 00:07:16.427 Test: blockdev reset ...passed 00:07:16.427 Test: blockdev write read 8 blocks ...passed 00:07:16.427 Test: blockdev write read size > 128k ...passed 00:07:16.427 Test: blockdev write read invalid size ...passed 00:07:16.427 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.427 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.427 Test: blockdev write read max offset ...passed 00:07:16.427 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.427 Test: blockdev writev readv 8 blocks ...passed 00:07:16.427 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.427 Test: blockdev writev readv block ...passed 00:07:16.427 Test: blockdev writev readv size > 128k ...passed 
00:07:16.427 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.427 Test: blockdev comparev and writev ...passed 00:07:16.427 Test: blockdev nvme passthru rw ...passed 00:07:16.427 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.427 Test: blockdev nvme admin passthru ...passed 00:07:16.427 Test: blockdev copy ...passed 00:07:16.427 Suite: bdevio tests on: Malloc2p7 00:07:16.428 Test: blockdev write read block ...passed 00:07:16.428 Test: blockdev write zeroes read block ...passed 00:07:16.428 Test: blockdev write zeroes read no split ...passed 00:07:16.428 Test: blockdev write zeroes read split ...passed 00:07:16.428 Test: blockdev write zeroes read split partial ...passed 00:07:16.428 Test: blockdev reset ...passed 00:07:16.428 Test: blockdev write read 8 blocks ...passed 00:07:16.428 Test: blockdev write read size > 128k ...passed 00:07:16.428 Test: blockdev write read invalid size ...passed 00:07:16.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.428 Test: blockdev write read max offset ...passed 00:07:16.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.428 Test: blockdev writev readv 8 blocks ...passed 00:07:16.428 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.428 Test: blockdev writev readv block ...passed 00:07:16.428 Test: blockdev writev readv size > 128k ...passed 00:07:16.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.428 Test: blockdev comparev and writev ...passed 00:07:16.428 Test: blockdev nvme passthru rw ...passed 00:07:16.428 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.428 Test: blockdev nvme admin passthru ...passed 00:07:16.428 Test: blockdev copy ...passed 00:07:16.428 Suite: bdevio tests on: Malloc2p6 00:07:16.428 Test: blockdev write read block ...passed 00:07:16.428 Test: blockdev write zeroes read block ...passed 00:07:16.428 Test: blockdev write zeroes read no split ...passed 00:07:16.428 Test: blockdev write zeroes read split ...passed 00:07:16.428 Test: blockdev write zeroes read split partial ...passed 00:07:16.428 Test: blockdev reset ...passed 00:07:16.428 Test: blockdev write read 8 blocks ...passed 00:07:16.428 Test: blockdev write read size > 128k ...passed 00:07:16.428 Test: blockdev write read invalid size ...passed 00:07:16.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.428 Test: blockdev write read max offset ...passed 00:07:16.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.428 Test: blockdev writev readv 8 blocks ...passed 00:07:16.428 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.428 Test: blockdev writev readv block ...passed 00:07:16.428 Test: blockdev writev readv size > 128k ...passed 00:07:16.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.428 Test: blockdev comparev and writev ...passed 00:07:16.428 Test: blockdev nvme passthru rw ...passed 00:07:16.428 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.428 Test: blockdev nvme admin passthru ...passed 00:07:16.428 Test: blockdev copy ...passed 00:07:16.428 Suite: bdevio tests on: Malloc2p5 00:07:16.428 Test: blockdev write read block ...passed 00:07:16.428 Test: blockdev write zeroes read block ...passed 00:07:16.428 Test: blockdev 
write zeroes read no split ...passed 00:07:16.428 Test: blockdev write zeroes read split ...passed 00:07:16.428 Test: blockdev write zeroes read split partial ...passed 00:07:16.428 Test: blockdev reset ...passed 00:07:16.428 Test: blockdev write read 8 blocks ...passed 00:07:16.428 Test: blockdev write read size > 128k ...passed 00:07:16.428 Test: blockdev write read invalid size ...passed 00:07:16.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.428 Test: blockdev write read max offset ...passed 00:07:16.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.428 Test: blockdev writev readv 8 blocks ...passed 00:07:16.428 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.428 Test: blockdev writev readv block ...passed 00:07:16.428 Test: blockdev writev readv size > 128k ...passed 00:07:16.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.428 Test: blockdev comparev and writev ...passed 00:07:16.428 Test: blockdev nvme passthru rw ...passed 00:07:16.428 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.428 Test: blockdev nvme admin passthru ...passed 00:07:16.428 Test: blockdev copy ...passed 00:07:16.428 Suite: bdevio tests on: Malloc2p4 00:07:16.428 Test: blockdev write read block ...passed 00:07:16.428 Test: blockdev write zeroes read block ...passed 00:07:16.428 Test: blockdev write zeroes read no split ...passed 00:07:16.428 Test: blockdev write zeroes read split ...passed 00:07:16.428 Test: blockdev write zeroes read split partial ...passed 00:07:16.428 Test: blockdev reset ...passed 00:07:16.428 Test: blockdev write read 8 blocks ...passed 00:07:16.428 Test: blockdev write read size > 128k ...passed 00:07:16.428 Test: blockdev write read invalid size ...passed 00:07:16.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.428 Test: blockdev write read max offset ...passed 00:07:16.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.428 Test: blockdev writev readv 8 blocks ...passed 00:07:16.428 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.428 Test: blockdev writev readv block ...passed 00:07:16.428 Test: blockdev writev readv size > 128k ...passed 00:07:16.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.428 Test: blockdev comparev and writev ...passed 00:07:16.428 Test: blockdev nvme passthru rw ...passed 00:07:16.428 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.428 Test: blockdev nvme admin passthru ...passed 00:07:16.428 Test: blockdev copy ...passed 00:07:16.428 Suite: bdevio tests on: Malloc2p3 00:07:16.428 Test: blockdev write read block ...passed 00:07:16.428 Test: blockdev write zeroes read block ...passed 00:07:16.428 Test: blockdev write zeroes read no split ...passed 00:07:16.428 Test: blockdev write zeroes read split ...passed 00:07:16.428 Test: blockdev write zeroes read split partial ...passed 00:07:16.428 Test: blockdev reset ...passed 00:07:16.428 Test: blockdev write read 8 blocks ...passed 00:07:16.428 Test: blockdev write read size > 128k ...passed 00:07:16.428 Test: blockdev write read invalid size ...passed 00:07:16.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.428 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:07:16.428 Test: blockdev write read max offset ...passed 00:07:16.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.428 Test: blockdev writev readv 8 blocks ...passed 00:07:16.428 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.428 Test: blockdev writev readv block ...passed 00:07:16.428 Test: blockdev writev readv size > 128k ...passed 00:07:16.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.428 Test: blockdev comparev and writev ...passed 00:07:16.428 Test: blockdev nvme passthru rw ...passed 00:07:16.428 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.428 Test: blockdev nvme admin passthru ...passed 00:07:16.428 Test: blockdev copy ...passed 00:07:16.428 Suite: bdevio tests on: Malloc2p2 00:07:16.428 Test: blockdev write read block ...passed 00:07:16.428 Test: blockdev write zeroes read block ...passed 00:07:16.428 Test: blockdev write zeroes read no split ...passed 00:07:16.428 Test: blockdev write zeroes read split ...passed 00:07:16.428 Test: blockdev write zeroes read split partial ...passed 00:07:16.428 Test: blockdev reset ...passed 00:07:16.428 Test: blockdev write read 8 blocks ...passed 00:07:16.428 Test: blockdev write read size > 128k ...passed 00:07:16.428 Test: blockdev write read invalid size ...passed 00:07:16.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.428 Test: blockdev write read max offset ...passed 00:07:16.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.428 Test: blockdev writev readv 8 blocks ...passed 00:07:16.428 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.428 Test: blockdev writev readv block ...passed 00:07:16.428 Test: blockdev writev readv size > 128k ...passed 00:07:16.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.428 Test: blockdev comparev and writev ...passed 00:07:16.428 Test: blockdev nvme passthru rw ...passed 00:07:16.428 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.428 Test: blockdev nvme admin passthru ...passed 00:07:16.428 Test: blockdev copy ...passed 00:07:16.428 Suite: bdevio tests on: Malloc2p1 00:07:16.428 Test: blockdev write read block ...passed 00:07:16.428 Test: blockdev write zeroes read block ...passed 00:07:16.428 Test: blockdev write zeroes read no split ...passed 00:07:16.428 Test: blockdev write zeroes read split ...passed 00:07:16.428 Test: blockdev write zeroes read split partial ...passed 00:07:16.428 Test: blockdev reset ...passed 00:07:16.428 Test: blockdev write read 8 blocks ...passed 00:07:16.428 Test: blockdev write read size > 128k ...passed 00:07:16.429 Test: blockdev write read invalid size ...passed 00:07:16.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.429 Test: blockdev write read max offset ...passed 00:07:16.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.429 Test: blockdev writev readv 8 blocks ...passed 00:07:16.429 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.429 Test: blockdev writev readv block ...passed 00:07:16.429 Test: blockdev writev readv size > 128k ...passed 00:07:16.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.429 Test: blockdev comparev and writev ...passed 
00:07:16.429 Test: blockdev nvme passthru rw ...passed 00:07:16.429 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.429 Test: blockdev nvme admin passthru ...passed 00:07:16.429 Test: blockdev copy ...passed 00:07:16.429 Suite: bdevio tests on: Malloc2p0 00:07:16.429 Test: blockdev write read block ...passed 00:07:16.429 Test: blockdev write zeroes read block ...passed 00:07:16.429 Test: blockdev write zeroes read no split ...passed 00:07:16.429 Test: blockdev write zeroes read split ...passed 00:07:16.429 Test: blockdev write zeroes read split partial ...passed 00:07:16.429 Test: blockdev reset ...passed 00:07:16.429 Test: blockdev write read 8 blocks ...passed 00:07:16.429 Test: blockdev write read size > 128k ...passed 00:07:16.429 Test: blockdev write read invalid size ...passed 00:07:16.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.429 Test: blockdev write read max offset ...passed 00:07:16.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.429 Test: blockdev writev readv 8 blocks ...passed 00:07:16.429 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.429 Test: blockdev writev readv block ...passed 00:07:16.429 Test: blockdev writev readv size > 128k ...passed 00:07:16.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.429 Test: blockdev comparev and writev ...passed 00:07:16.429 Test: blockdev nvme passthru rw ...passed 00:07:16.429 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.429 Test: blockdev nvme admin passthru ...passed 00:07:16.429 Test: blockdev copy ...passed 00:07:16.429 Suite: bdevio tests on: Malloc1p1 00:07:16.429 Test: blockdev write read block ...passed 00:07:16.429 Test: blockdev write zeroes read block ...passed 00:07:16.429 Test: blockdev write zeroes read no split ...passed 00:07:16.429 Test: blockdev write zeroes read split ...passed 00:07:16.429 Test: blockdev write zeroes read split partial ...passed 00:07:16.429 Test: blockdev reset ...passed 00:07:16.429 Test: blockdev write read 8 blocks ...passed 00:07:16.429 Test: blockdev write read size > 128k ...passed 00:07:16.429 Test: blockdev write read invalid size ...passed 00:07:16.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.429 Test: blockdev write read max offset ...passed 00:07:16.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.429 Test: blockdev writev readv 8 blocks ...passed 00:07:16.429 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.429 Test: blockdev writev readv block ...passed 00:07:16.429 Test: blockdev writev readv size > 128k ...passed 00:07:16.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.429 Test: blockdev comparev and writev ...passed 00:07:16.429 Test: blockdev nvme passthru rw ...passed 00:07:16.429 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.429 Test: blockdev nvme admin passthru ...passed 00:07:16.429 Test: blockdev copy ...passed 00:07:16.429 Suite: bdevio tests on: Malloc1p0 00:07:16.429 Test: blockdev write read block ...passed 00:07:16.429 Test: blockdev write zeroes read block ...passed 00:07:16.429 Test: blockdev write zeroes read no split ...passed 00:07:16.429 Test: blockdev write zeroes read split ...passed 00:07:16.429 Test: blockdev write 
zeroes read split partial ...passed 00:07:16.429 Test: blockdev reset ...passed 00:07:16.429 Test: blockdev write read 8 blocks ...passed 00:07:16.429 Test: blockdev write read size > 128k ...passed 00:07:16.429 Test: blockdev write read invalid size ...passed 00:07:16.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.429 Test: blockdev write read max offset ...passed 00:07:16.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.429 Test: blockdev writev readv 8 blocks ...passed 00:07:16.429 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.429 Test: blockdev writev readv block ...passed 00:07:16.429 Test: blockdev writev readv size > 128k ...passed 00:07:16.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.429 Test: blockdev comparev and writev ...passed 00:07:16.429 Test: blockdev nvme passthru rw ...passed 00:07:16.429 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.429 Test: blockdev nvme admin passthru ...passed 00:07:16.429 Test: blockdev copy ...passed 00:07:16.429 Suite: bdevio tests on: Malloc0 00:07:16.429 Test: blockdev write read block ...passed 00:07:16.429 Test: blockdev write zeroes read block ...passed 00:07:16.429 Test: blockdev write zeroes read no split ...passed 00:07:16.429 Test: blockdev write zeroes read split ...passed 00:07:16.429 Test: blockdev write zeroes read split partial ...passed 00:07:16.429 Test: blockdev reset ...passed 00:07:16.429 Test: blockdev write read 8 blocks ...passed 00:07:16.429 Test: blockdev write read size > 128k ...passed 00:07:16.429 Test: blockdev write read invalid size ...passed 00:07:16.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.429 Test: blockdev write read max offset ...passed 00:07:16.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.429 Test: blockdev writev readv 8 blocks ...passed 00:07:16.429 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.429 Test: blockdev writev readv block ...passed 00:07:16.429 Test: blockdev writev readv size > 128k ...passed 00:07:16.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.429 Test: blockdev comparev and writev ...passed 00:07:16.429 Test: blockdev nvme passthru rw ...passed 00:07:16.429 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.429 Test: blockdev nvme admin passthru ...passed 00:07:16.429 Test: blockdev copy ...passed 00:07:16.429 00:07:16.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.429 suites 16 16 n/a 0 0 00:07:16.429 tests 368 368 368 0 0 00:07:16.429 asserts 2224 2224 2224 0 n/a 00:07:16.429 00:07:16.429 Elapsed time = 0.531 seconds 00:07:16.429 0 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 48105 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 48105 ']' 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 48105 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps -c -o 
command 48105 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # tail -1 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=bdevio 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' bdevio = sudo ']' 00:07:16.429 killing process with pid 48105 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48105' 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 48105 00:07:16.429 21:49:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 48105 00:07:16.688 21:49:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:16.688 00:07:16.688 real 0m1.703s 00:07:16.688 user 0m3.392s 00:07:16.688 sys 0m0.717s 00:07:16.689 21:49:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.689 21:49:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:16.689 ************************************ 00:07:16.689 END TEST bdev_bounds 00:07:16.689 ************************************ 00:07:16.689 21:49:17 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:16.689 21:49:17 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:16.689 21:49:17 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.689 21:49:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:16.689 ************************************ 00:07:16.689 START TEST bdev_nbd 00:07:16.689 ************************************ 00:07:16.689 21:49:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:16.689 21:49:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:16.689 21:49:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:07:16.689 21:49:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:07:16.689 00:07:16.689 real 0m0.004s 00:07:16.689 user 0m0.000s 00:07:16.689 sys 0m0.007s 00:07:16.689 21:49:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.689 ************************************ 00:07:16.689 END TEST bdev_nbd 00:07:16.689 ************************************ 00:07:16.689 21:49:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:16.689 21:49:17 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:07:16.689 21:49:17 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:07:16.689 21:49:17 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:07:16.689 21:49:17 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:07:16.689 21:49:17 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:16.689 21:49:17 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.689 21:49:17 blockdev_general -- common/autotest_common.sh@10 -- 
# set +x 00:07:16.689 ************************************ 00:07:16.689 START TEST bdev_fio 00:07:16.689 ************************************ 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:07:16.689 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:07:16.689 21:49:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc1p0 00:07:17.624 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # 
echo '[job_concat0]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.625 21:49:18 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:07:17.625 ************************************ 00:07:17.625 START TEST bdev_fio_rw_verify 00:07:17.625 ************************************ 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:07:17.625 21:49:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:17.625 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_TestPT: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:17.625 fio-3.35 00:07:17.884 Starting 16 threads 00:07:18.450 EAL: TSC is not safe to use in SMP mode 00:07:18.450 EAL: TSC is not invariant 00:07:30.652 00:07:30.652 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102675: Tue May 14 21:49:29 2024 00:07:30.652 read: IOPS=235k, BW=918MiB/s (963MB/s)(9185MiB/10003msec) 00:07:30.652 slat (nsec): min=282, max=754504k, avg=3688.58, stdev=649594.09 00:07:30.652 clat (nsec): min=808, max=756315k, avg=46041.45, stdev=1881630.96 00:07:30.652 lat (usec): min=2, max=756320, avg=49.73, stdev=2010.52 00:07:30.652 clat percentiles (usec): 00:07:30.652 | 50.000th=[ 9], 99.000th=[ 725], 99.900th=[ 824], 00:07:30.652 | 99.990th=[ 92799], 99.999th=[141558] 00:07:30.652 write: IOPS=386k, BW=1509MiB/s (1583MB/s)(14.7GiB/10002msec); 0 zone resets 00:07:30.652 slat (nsec): min=548, max=1772.4M, avg=21802.15, stdev=1343569.69 00:07:30.652 clat (nsec): min=777, max=1780.2M, avg=105651.13, stdev=4004212.22 00:07:30.652 lat (usec): min=12, max=1780.2k, avg=127.45, stdev=4224.57 00:07:30.652 clat percentiles (usec): 00:07:30.652 | 50.000th=[ 50], 99.000th=[ 709], 99.900th=[ 2147], 00:07:30.652 | 99.990th=[ 94897], 99.999th=[240124] 00:07:30.652 bw ( MiB/s): min= 574, max= 2599, per=100.00%, avg=1519.61, stdev=39.95, samples=291 00:07:30.652 iops : min=147100, max=665388, avg=389016.50, stdev=10226.47, samples=291 00:07:30.652 lat (nsec) : 1000=0.01% 00:07:30.652 lat (usec) : 2=0.05%, 4=10.82%, 10=18.51%, 20=21.18%, 50=17.25% 00:07:30.652 lat (usec) : 100=28.20%, 250=2.30%, 500=0.12%, 750=0.74%, 1000=0.72% 00:07:30.652 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.01% 00:07:30.652 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:07:30.652 lat (msec) : 2000=0.01% 00:07:30.652 cpu : usr=56.24%, sys=2.99%, ctx=1146975, majf=0, minf=625 00:07:30.652 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:30.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:30.653 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:30.653 issued rwts: total=2351395,3864487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:30.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:07:30.653 00:07:30.653 Run status group 0 (all jobs): 00:07:30.653 READ: bw=918MiB/s (963MB/s), 918MiB/s-918MiB/s (963MB/s-963MB/s), io=9185MiB (9631MB), run=10003-10003msec 00:07:30.653 WRITE: bw=1509MiB/s (1583MB/s), 1509MiB/s-1509MiB/s (1583MB/s-1583MB/s), io=14.7GiB (15.8GB), run=10002-10002msec 00:07:30.653 00:07:30.653 real 0m12.716s 00:07:30.653 user 1m34.616s 00:07:30.653 sys 0m8.602s 00:07:30.653 21:49:30 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.653 21:49:30 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:07:30.653 ************************************ 00:07:30.653 
END TEST bdev_fio_rw_verify 00:07:30.653 ************************************ 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:07:30.653 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:07:30.654 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "cdb68e22-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cdb68e22-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "48da91eb-b1da-c15b-95b3-2a81b0846075"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48da91eb-b1da-c15b-95b3-2a81b0846075",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' 
"flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3f9bbd76-e241-f657-8686-34e625ea09ba"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3f9bbd76-e241-f657-8686-34e625ea09ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "070eda0b-1661-5753-88ef-7995afe01c53"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "070eda0b-1661-5753-88ef-7995afe01c53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "567a8cda-7395-7459-9bc7-40ca067b23d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "567a8cda-7395-7459-9bc7-40ca067b23d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "444069eb-d319-7a5a-a493-deee636f13d4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "444069eb-d319-7a5a-a493-deee636f13d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "93a8a563-e85d-5f5d-bfc8-94c67b52c8a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93a8a563-e85d-5f5d-bfc8-94c67b52c8a0",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "87296283-f2f5-fd54-8a70-588d7e0add4b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "87296283-f2f5-fd54-8a70-588d7e0add4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "cef63e78-3ba4-9f55-a7da-890f60c4d41b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cef63e78-3ba4-9f55-a7da-890f60c4d41b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2fd57684-9cc4-5e54-a00c-d7617e94c9c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2fd57684-9cc4-5e54-a00c-d7617e94c9c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0d472ade-98c0-1956-805a-d44b980c1a55"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d472ade-98c0-1956-805a-d44b980c1a55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 
57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "01fd9c76-1caa-4a55-90d3-a62a74500935"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "01fd9c76-1caa-4a55-90d3-a62a74500935",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "cdc407de-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cdc407de-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc407de-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cdbb6faf-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "cdbca829-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "cdc53496-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cdc53496-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' 
"driver_specific": {' ' "raid": {' ' "uuid": "cdc53496-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "cdbde0b3-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "cdbf1932-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "cdc66d0c-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cdc66d0c-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc66d0c-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "cdc051b4-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "cdc18a31-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "cdcf958d-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "cdcf958d-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:30.654 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:07:30.654 Malloc1p0 00:07:30.654 Malloc1p1 00:07:30.654 Malloc2p0 00:07:30.654 Malloc2p1 00:07:30.654 Malloc2p2 00:07:30.654 Malloc2p3 00:07:30.654 Malloc2p4 00:07:30.654 Malloc2p5 00:07:30.654 Malloc2p6 00:07:30.654 Malloc2p7 00:07:30.654 TestPT 00:07:30.654 raid0 00:07:30.654 concat0 ]] 00:07:30.654 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 
'select(.supported_io_types.unmap == true) | .name' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "cdb68e22-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cdb68e22-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "48da91eb-b1da-c15b-95b3-2a81b0846075"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48da91eb-b1da-c15b-95b3-2a81b0846075",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3f9bbd76-e241-f657-8686-34e625ea09ba"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3f9bbd76-e241-f657-8686-34e625ea09ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "070eda0b-1661-5753-88ef-7995afe01c53"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "070eda0b-1661-5753-88ef-7995afe01c53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "567a8cda-7395-7459-9bc7-40ca067b23d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "567a8cda-7395-7459-9bc7-40ca067b23d0",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "444069eb-d319-7a5a-a493-deee636f13d4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "444069eb-d319-7a5a-a493-deee636f13d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "93a8a563-e85d-5f5d-bfc8-94c67b52c8a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93a8a563-e85d-5f5d-bfc8-94c67b52c8a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "87296283-f2f5-fd54-8a70-588d7e0add4b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "87296283-f2f5-fd54-8a70-588d7e0add4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "cef63e78-3ba4-9f55-a7da-890f60c4d41b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cef63e78-3ba4-9f55-a7da-890f60c4d41b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' 
}' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2fd57684-9cc4-5e54-a00c-d7617e94c9c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2fd57684-9cc4-5e54-a00c-d7617e94c9c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0d472ade-98c0-1956-805a-d44b980c1a55"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d472ade-98c0-1956-805a-d44b980c1a55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "01fd9c76-1caa-4a55-90d3-a62a74500935"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "01fd9c76-1caa-4a55-90d3-a62a74500935",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "cdc407de-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cdc407de-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc407de-123b-11ef-8c90-4585f0cfab08",' ' 
"strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cdbb6faf-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "cdbca829-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "cdc53496-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cdc53496-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc53496-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "cdbde0b3-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "cdbf1932-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "cdc66d0c-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cdc66d0c-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cdc66d0c-123b-11ef-8c90-4585f0cfab08",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "cdc051b4-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "cdc18a31-123b-11ef-8c90-4585f0cfab08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "cdcf958d-123b-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "cdcf958d-123b-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.655 21:49:30 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:07:30.655 ************************************ 00:07:30.655 START TEST bdev_fio_trim 
00:07:30.655 ************************************ 00:07:30.655 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:30.655 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:30.655 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:07:30.655 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:07:30.655 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:07:30.655 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:30.655 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:07:30.656 21:49:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:07:30.656 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:07:30.656 fio-3.35 00:07:30.656 Starting 14 threads 00:07:31.223 EAL: TSC is not safe to use in SMP mode 00:07:31.223 EAL: TSC is not invariant 00:07:43.425 00:07:43.425 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102694: Tue May 14 21:49:42 2024 00:07:43.425 write: IOPS=2342k, BW=9148MiB/s (9593MB/s)(89.3GiB/10001msec); 0 zone resets 00:07:43.425 slat (nsec): min=280, max=2015.8M, avg=1629.54, stdev=567312.20 00:07:43.425 clat (nsec): min=1400, max=1496.1M, avg=16053.23, stdev=983052.39 00:07:43.425 lat (usec): min=2, max=2015.8k, avg=17.68, stdev=1135.00 00:07:43.425 clat percentiles (usec): 00:07:43.425 | 50.000th=[ 7], 99.000th=[ 19], 99.900th=[ 955], 99.990th=[ 7898], 00:07:43.425 | 99.999th=[94897] 00:07:43.425 bw ( MiB/s): min= 3293, max=14800, per=100.00%, avg=9520.28, stdev=280.77, samples=257 00:07:43.425 iops : min=843038, max=3788853, avg=2437187.40, stdev=71876.65, samples=257 00:07:43.425 trim: IOPS=2342k, BW=9148MiB/s (9593MB/s)(89.3GiB/10001msec); 0 zone resets 00:07:43.425 slat (nsec): min=571, max=1188.0M, avg=1453.51, stdev=309990.68 00:07:43.425 clat (nsec): min=491, max=2015.8M, avg=11595.45, stdev=1022498.84 00:07:43.425 lat (nsec): min=1754, max=2015.8M, avg=13048.96, stdev=1068461.25 00:07:43.425 clat percentiles (usec): 00:07:43.425 | 50.000th=[ 8], 99.000th=[ 21], 99.900th=[ 28], 99.990th=[ 50], 00:07:43.425 | 99.999th=[94897] 00:07:43.425 bw ( MiB/s): min= 3293, max=14800, per=100.00%, avg=9520.29, stdev=280.77, samples=257 
00:07:43.425 iops : min=843036, max=3788837, avg=2437189.40, stdev=71876.67, samples=257 00:07:43.425 lat (nsec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:07:43.425 lat (usec) : 2=0.09%, 4=22.29%, 10=57.39%, 20=19.26%, 50=0.73% 00:07:43.425 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.20% 00:07:43.426 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:07:43.426 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:07:43.426 lat (msec) : 2000=0.01%, >=2000=0.01% 00:07:43.426 cpu : usr=63.37%, sys=4.59%, ctx=1232339, majf=0, minf=0 00:07:43.426 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:43.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:43.426 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:43.426 issued rwts: total=0,23422418,23422425,0 short=0,0,0,0 dropped=0,0,0,0 00:07:43.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:07:43.426 00:07:43.426 Run status group 0 (all jobs): 00:07:43.426 WRITE: bw=9148MiB/s (9593MB/s), 9148MiB/s-9148MiB/s (9593MB/s-9593MB/s), io=89.3GiB (95.9GB), run=10001-10001msec 00:07:43.426 TRIM: bw=9148MiB/s (9593MB/s), 9148MiB/s-9148MiB/s (9593MB/s-9593MB/s), io=89.3GiB (95.9GB), run=10001-10001msec 00:07:43.426 00:07:43.426 real 0m12.519s 00:07:43.426 user 1m34.641s 00:07:43.426 sys 0m9.473s 00:07:43.426 21:49:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.426 21:49:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:07:43.426 ************************************ 00:07:43.426 END TEST bdev_fio_trim 00:07:43.426 ************************************ 00:07:43.426 21:49:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:07:43.426 21:49:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:07:43.426 /usr/home/vagrant/spdk_repo/spdk 00:07:43.426 21:49:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:07:43.426 21:49:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:07:43.426 00:07:43.426 real 0m26.271s 00:07:43.426 user 3m9.552s 00:07:43.426 sys 0m18.766s 00:07:43.426 21:49:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.426 ************************************ 00:07:43.426 END TEST bdev_fio 00:07:43.426 ************************************ 00:07:43.426 21:49:43 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:07:43.426 21:49:43 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:43.426 21:49:43 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:43.426 21:49:43 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:07:43.426 21:49:43 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.426 21:49:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:43.426 ************************************ 00:07:43.426 START TEST bdev_verify 00:07:43.426 ************************************ 00:07:43.426 21:49:43 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:43.426 [2024-05-14 21:49:43.549395] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:43.426 [2024-05-14 21:49:43.549562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:43.684 EAL: TSC is not safe to use in SMP mode 00:07:43.684 EAL: TSC is not invariant 00:07:43.684 [2024-05-14 21:49:44.108975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.684 [2024-05-14 21:49:44.214613] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:43.684 [2024-05-14 21:49:44.214698] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:43.684 [2024-05-14 21:49:44.218099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.684 [2024-05-14 21:49:44.218094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.982 [2024-05-14 21:49:44.279697] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:43.982 [2024-05-14 21:49:44.279777] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:43.982 [2024-05-14 21:49:44.287677] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:43.982 [2024-05-14 21:49:44.287718] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:43.982 [2024-05-14 21:49:44.295693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:43.982 [2024-05-14 21:49:44.295740] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:43.982 [2024-05-14 21:49:44.295754] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:43.982 [2024-05-14 21:49:44.343716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:43.982 [2024-05-14 21:49:44.343794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.982 [2024-05-14 21:49:44.343818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c1bd800 00:07:43.982 [2024-05-14 21:49:44.343830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.982 [2024-05-14 21:49:44.344281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.982 [2024-05-14 21:49:44.344320] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:43.982 Running I/O for 5 seconds... 
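For reference, the bdev_fio_trim stage above reduces to two steps. First, a [job_<bdev>] section is emitted only for bdevs whose supported_io_types report unmap == true; second, fio is launched against the SPDK bdev ioengine by preloading the spdk_bdev plugin. A minimal stand-alone sketch of the same flow follows; the plugin path, jq filter and option values are copied from this run, and the bdev.fio/bdev.json paths are shortened from the full repo paths shown in the log:

    # emit one trim job section per unmap-capable bdev (cf. blockdev.sh@356-358 above)
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"
        echo "filename=$b"
    done >> bdev.fio

    # run fio through the SPDK bdev plugin by preloading build/fio/spdk_bdev
    LD_PRELOAD=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        bdev.fio --spdk_json_conf=bdev.json --verify_state_save=0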
00:07:49.244 00:07:49.244 Latency(us) 00:07:49.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.244 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x1000 00:07:49.244 Malloc0 : 5.03 6900.21 26.95 0.00 0.00 18542.86 67.49 49330.54 00:07:49.244 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x1000 length 0x1000 00:07:49.244 Malloc0 : 5.03 161.59 0.63 0.00 0.00 791901.20 132.19 1082888.78 00:07:49.244 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x800 00:07:49.244 Malloc1p0 : 5.02 5608.78 21.91 0.00 0.00 22807.48 279.27 25618.52 00:07:49.244 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x800 length 0x800 00:07:49.244 Malloc1p0 : 5.01 6128.83 23.94 0.00 0.00 20871.62 381.67 23592.87 00:07:49.244 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x800 00:07:49.244 Malloc1p1 : 5.02 5608.43 21.91 0.00 0.00 22803.49 281.13 25022.74 00:07:49.244 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x800 length 0x800 00:07:49.244 Malloc1p1 : 5.01 6128.47 23.94 0.00 0.00 20867.27 383.53 22997.09 00:07:49.244 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x200 00:07:49.244 Malloc2p0 : 5.02 5608.13 21.91 0.00 0.00 22800.57 286.72 23473.71 00:07:49.244 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x200 length 0x200 00:07:49.244 Malloc2p0 : 5.01 6126.86 23.93 0.00 0.00 20865.64 379.81 20852.28 00:07:49.244 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x200 00:07:49.244 Malloc2p1 : 5.02 5607.83 21.91 0.00 0.00 22796.91 297.89 22758.78 00:07:49.244 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x200 length 0x200 00:07:49.244 Malloc2p1 : 5.01 6126.40 23.93 0.00 0.00 20864.08 381.67 20137.35 00:07:49.244 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x200 00:07:49.244 Malloc2p2 : 5.02 5607.54 21.90 0.00 0.00 22793.16 286.72 22163.00 00:07:49.244 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x200 length 0x200 00:07:49.244 Malloc2p2 : 5.01 6126.07 23.93 0.00 0.00 20859.82 404.01 19422.41 00:07:49.244 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x200 00:07:49.244 Malloc2p3 : 5.02 5607.18 21.90 0.00 0.00 22789.71 283.00 21805.53 00:07:49.244 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x200 length 0x200 00:07:49.244 Malloc2p3 : 5.01 6125.74 23.93 0.00 0.00 20855.75 284.86 17396.76 00:07:49.244 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x200 00:07:49.244 Malloc2p4 : 5.02 5606.88 21.90 0.00 0.00 22786.65 294.17 17873.38 
00:07:49.244 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x200 length 0x200 00:07:49.244 Malloc2p4 : 5.02 6125.40 23.93 0.00 0.00 20853.04 281.13 16800.98 00:07:49.244 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x200 00:07:49.244 Malloc2p5 : 5.02 5606.59 21.90 0.00 0.00 22783.03 286.72 18588.32 00:07:49.244 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x200 length 0x200 00:07:49.244 Malloc2p5 : 5.03 6134.69 23.96 0.00 0.00 20817.09 284.86 17396.76 00:07:49.244 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x0 length 0x200 00:07:49.244 Malloc2p6 : 5.02 5606.30 21.90 0.00 0.00 22779.62 290.44 20494.81 00:07:49.244 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.244 Verification LBA range: start 0x200 length 0x200 00:07:49.244 Malloc2p6 : 5.03 6134.42 23.96 0.00 0.00 20813.77 353.74 18230.85 00:07:49.244 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x0 length 0x200 00:07:49.245 Malloc2p7 : 5.02 5606.00 21.90 0.00 0.00 22775.73 286.72 22520.46 00:07:49.245 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x200 length 0x200 00:07:49.245 Malloc2p7 : 5.03 6134.11 23.96 0.00 0.00 20810.21 288.58 19064.94 00:07:49.245 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x0 length 0x1000 00:07:49.245 TestPT : 5.02 5584.19 21.81 0.00 0.00 22855.59 781.96 22877.93 00:07:49.245 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x1000 length 0x1000 00:07:49.245 TestPT : 5.03 5319.40 20.78 0.00 0.00 23993.29 2398.01 73876.65 00:07:49.245 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x0 length 0x2000 00:07:49.245 raid0 : 5.02 5605.37 21.90 0.00 0.00 22769.56 292.30 24069.49 00:07:49.245 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x2000 length 0x2000 00:07:49.245 raid0 : 5.03 6133.58 23.96 0.00 0.00 20804.31 290.44 20256.50 00:07:49.245 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x0 length 0x2000 00:07:49.245 concat0 : 5.02 5605.08 21.89 0.00 0.00 22766.28 283.00 24426.96 00:07:49.245 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x2000 length 0x2000 00:07:49.245 concat0 : 5.03 6133.32 23.96 0.00 0.00 20801.24 312.78 21209.75 00:07:49.245 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x0 length 0x1000 00:07:49.245 raid1 : 5.02 5604.78 21.89 0.00 0.00 22762.24 355.61 24903.58 00:07:49.245 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x1000 length 0x1000 00:07:49.245 raid1 : 5.03 6133.05 23.96 0.00 0.00 20797.41 554.82 22639.62 00:07:49.245 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x0 length 0x4e2 00:07:49.245 
AIO0 : 5.14 892.49 3.49 0.00 0.00 142109.85 1385.19 186836.44 00:07:49.245 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:49.245 Verification LBA range: start 0x4e2 length 0x4e2 00:07:49.245 AIO0 : 5.14 898.65 3.51 0.00 0.00 141326.78 19422.41 186836.44 00:07:49.245 =================================================================================================================== 00:07:49.245 Total : 172336.33 673.19 0.00 0.00 23742.30 67.49 1082888.78 00:07:49.503 00:07:49.503 real 0m6.341s 00:07:49.503 user 0m9.987s 00:07:49.503 sys 0m0.778s 00:07:49.503 21:49:49 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.503 ************************************ 00:07:49.503 END TEST bdev_verify 00:07:49.503 ************************************ 00:07:49.503 21:49:49 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 21:49:49 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:49.503 21:49:49 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:07:49.504 21:49:49 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.504 21:49:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.504 ************************************ 00:07:49.504 START TEST bdev_verify_big_io 00:07:49.504 ************************************ 00:07:49.504 21:49:49 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:49.504 [2024-05-14 21:49:49.926131] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:49.504 [2024-05-14 21:49:49.926384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:50.070 EAL: TSC is not safe to use in SMP mode 00:07:50.070 EAL: TSC is not invariant 00:07:50.070 [2024-05-14 21:49:50.479454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.070 [2024-05-14 21:49:50.573241] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:50.070 [2024-05-14 21:49:50.573300] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
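The verify stages are driven by the bdevperf example app rather than fio. The 4 KiB verify pass whose results end above and the 64 KiB big-IO pass starting below use the same invocation and differ only in the -o (IO size) argument: both load the generated bdev.json, queue 128 requests per job, run the verify workload for 5 seconds, and run on two cores (-m 0x3) with -C. A condensed equivalent of the two invocations, with the paths taken from this run:

    BDEVPERF=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # 4 KiB verify pass (the run reported above)
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3
    # 64 KiB verify pass (the big-IO run starting below)
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3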
00:07:50.070 [2024-05-14 21:49:50.576201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.070 [2024-05-14 21:49:50.576201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.070 [2024-05-14 21:49:50.636391] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.070 [2024-05-14 21:49:50.636456] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.070 [2024-05-14 21:49:50.644371] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:50.070 [2024-05-14 21:49:50.644414] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:50.070 [2024-05-14 21:49:50.652395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.070 [2024-05-14 21:49:50.652452] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:50.070 [2024-05-14 21:49:50.652471] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:50.333 [2024-05-14 21:49:50.700392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.333 [2024-05-14 21:49:50.700461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.333 [2024-05-14 21:49:50.700477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a5b3800 00:07:50.333 [2024-05-14 21:49:50.700486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.333 [2024-05-14 21:49:50.700849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.333 [2024-05-14 21:49:50.700874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:50.333 [2024-05-14 21:49:50.802406] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.802678] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.802887] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.803109] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.803321] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.803535] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.803761] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.803981] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.804193] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.804403] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.804633] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.804843] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.805040] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.805244] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.805453] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.805656] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:07:50.333 [2024-05-14 21:49:50.807911] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:07:50.333 [2024-05-14 21:49:50.808212] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:07:50.333 Running I/O for 5 seconds... 
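The block of bdevperf warnings above is expected rather than a failure: with a verify workload the per-job queue depth cannot exceed the number of requests the target bdev can have outstanding, so the requested -q 128 is clamped to 32 for each Malloc2pX job and to 78 for AIO0. If the clamping messages are unwanted, one option (an optional tweak, not something this script does) is to request a depth at or below the smallest advertised limit, reusing the variables from the sketch above:

    # assumes 32 is the smallest per-bdev limit reported in the warnings above
    "$BDEVPERF" --json "$CONF" -q 32 -o 65536 -w verify -t 5 -C -m 0x3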
00:07:55.616 00:07:55.616 Latency(us) 00:07:55.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.616 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x100 00:07:55.616 Malloc0 : 5.07 3992.59 249.54 0.00 0.00 31963.18 94.02 102474.07 00:07:55.616 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x100 length 0x100 00:07:55.616 Malloc0 : 5.07 3607.85 225.49 0.00 0.00 35379.03 93.09 110576.67 00:07:55.616 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x80 00:07:55.616 Malloc1p0 : 5.11 516.89 32.31 0.00 0.00 246162.69 495.24 312665.07 00:07:55.616 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x80 length 0x80 00:07:55.616 Malloc1p0 : 5.08 1701.53 106.35 0.00 0.00 74635.19 1057.51 136314.34 00:07:55.616 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x80 00:07:55.616 Malloc1p1 : 5.11 516.86 32.30 0.00 0.00 245678.12 521.31 305039.09 00:07:55.616 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x80 length 0x80 00:07:55.616 Malloc1p1 : 5.10 474.01 29.63 0.00 0.00 268287.34 389.12 301226.10 00:07:55.616 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p0 : 5.08 500.83 31.30 0.00 0.00 63379.32 301.61 118679.27 00:07:55.616 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p0 : 5.08 456.34 28.52 0.00 0.00 69618.27 255.07 99614.33 00:07:55.616 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p1 : 5.08 500.80 31.30 0.00 0.00 63345.02 297.89 117249.40 00:07:55.616 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p1 : 5.08 456.31 28.52 0.00 0.00 69572.36 251.34 98661.08 00:07:55.616 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p2 : 5.08 500.77 31.30 0.00 0.00 63302.96 296.03 115819.53 00:07:55.616 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p2 : 5.08 456.28 28.52 0.00 0.00 69553.10 262.52 97707.83 00:07:55.616 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p3 : 5.08 500.73 31.30 0.00 0.00 63284.28 323.96 114389.66 00:07:55.616 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p3 : 5.09 456.24 28.52 0.00 0.00 69516.48 279.27 96754.59 00:07:55.616 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p4 : 5.08 500.70 31.29 0.00 0.00 63256.03 310.92 113436.41 00:07:55.616 
Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p4 : 5.09 456.21 28.51 0.00 0.00 69490.40 251.34 96277.96 00:07:55.616 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p5 : 5.08 500.66 31.29 0.00 0.00 63222.89 309.06 112006.54 00:07:55.616 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p5 : 5.09 456.19 28.51 0.00 0.00 69469.79 266.24 95324.72 00:07:55.616 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p6 : 5.08 500.63 31.29 0.00 0.00 63200.51 290.44 110576.67 00:07:55.616 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p6 : 5.09 456.15 28.51 0.00 0.00 69432.47 256.93 94371.47 00:07:55.616 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x20 00:07:55.616 Malloc2p7 : 5.08 500.59 31.29 0.00 0.00 63176.82 296.03 109146.80 00:07:55.616 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x20 length 0x20 00:07:55.616 Malloc2p7 : 5.09 456.13 28.51 0.00 0.00 69417.92 264.38 93418.22 00:07:55.616 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x100 00:07:55.616 TestPT : 5.14 514.05 32.13 0.00 0.00 244496.03 3187.42 238311.79 00:07:55.616 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x100 length 0x100 00:07:55.616 TestPT : 5.20 290.96 18.18 0.00 0.00 431510.01 6374.84 472810.59 00:07:55.616 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x200 00:07:55.616 raid0 : 5.10 520.40 32.53 0.00 0.00 242157.40 551.10 276441.68 00:07:55.616 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x200 length 0x200 00:07:55.616 raid0 : 5.10 473.44 29.59 0.00 0.00 266854.61 418.91 282161.16 00:07:55.616 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x200 00:07:55.616 concat0 : 5.11 519.87 32.49 0.00 0.00 242039.78 495.24 268815.70 00:07:55.616 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x200 length 0x200 00:07:55.616 concat0 : 5.10 473.42 29.59 0.00 0.00 266425.94 402.15 274535.18 00:07:55.616 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x100 00:07:55.616 raid1 : 5.11 522.93 32.68 0.00 0.00 240186.87 644.19 255470.24 00:07:55.616 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x100 length 0x100 00:07:55.616 raid1 : 5.10 476.75 29.80 0.00 0.00 264212.94 471.04 268815.70 00:07:55.616 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x0 length 0x4e 00:07:55.616 AIO0 : 5.11 519.85 32.49 
0.00 0.00 147061.24 580.88 158239.03 00:07:55.616 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:07:55.616 Verification LBA range: start 0x4e length 0x4e 00:07:55.616 AIO0 : 5.10 466.92 29.18 0.00 0.00 164075.39 465.45 159192.28 00:07:55.616 =================================================================================================================== 00:07:55.616 Total : 23243.87 1452.74 0.00 0.00 104908.72 93.09 472810.59 00:07:55.875 00:07:55.875 real 0m6.404s 00:07:55.875 user 0m11.225s 00:07:55.875 sys 0m0.674s 00:07:55.875 21:49:56 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.875 ************************************ 00:07:55.875 END TEST bdev_verify_big_io 00:07:55.875 ************************************ 00:07:55.875 21:49:56 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:55.875 21:49:56 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:55.875 21:49:56 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:55.875 21:49:56 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.875 21:49:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:55.875 ************************************ 00:07:55.875 START TEST bdev_write_zeroes 00:07:55.875 ************************************ 00:07:55.875 21:49:56 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:55.875 [2024-05-14 21:49:56.366917] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:55.875 [2024-05-14 21:49:56.367100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:56.442 EAL: TSC is not safe to use in SMP mode 00:07:56.442 EAL: TSC is not invariant 00:07:56.442 [2024-05-14 21:49:56.909480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.442 [2024-05-14 21:49:57.005551] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
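bdev_write_zeroes keeps the same bdevperf pattern but swaps the workload and shortens the run: 4 KiB write_zeroes requests for 1 second on a single core (the EAL banner above shows -c 0x1 because no core mask is passed). A condensed equivalent of the invocation, reusing the variables from the earlier sketch:

    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1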
00:07:56.442 [2024-05-14 21:49:57.007925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.701 [2024-05-14 21:49:57.067802] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:56.701 [2024-05-14 21:49:57.067874] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:56.701 [2024-05-14 21:49:57.075791] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:56.701 [2024-05-14 21:49:57.075837] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:56.701 [2024-05-14 21:49:57.083818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:56.701 [2024-05-14 21:49:57.083872] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:56.701 [2024-05-14 21:49:57.083882] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:56.701 [2024-05-14 21:49:57.131826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:56.701 [2024-05-14 21:49:57.131910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.701 [2024-05-14 21:49:57.131932] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9b3800 00:07:56.701 [2024-05-14 21:49:57.131941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.701 [2024-05-14 21:49:57.132529] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.701 [2024-05-14 21:49:57.132550] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:56.701 Running I/O for 1 seconds... 
00:07:58.078 00:07:58.078 Latency(us) 00:07:58.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.078 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc0 : 1.01 25076.47 97.95 0.00 0.00 5103.60 164.77 8281.33 00:07:58.078 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc1p0 : 1.01 25066.91 97.92 0.00 0.00 5103.00 189.90 8043.02 00:07:58.078 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc1p1 : 1.01 25063.27 97.90 0.00 0.00 5101.34 186.18 7804.71 00:07:58.078 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p0 : 1.01 25058.78 97.89 0.00 0.00 5099.93 185.25 7685.56 00:07:58.078 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p1 : 1.01 25055.26 97.87 0.00 0.00 5098.61 183.39 7447.24 00:07:58.078 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p2 : 1.01 25051.10 97.86 0.00 0.00 5097.26 202.01 7298.30 00:07:58.078 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p3 : 1.01 25047.88 97.84 0.00 0.00 5095.20 185.25 7119.56 00:07:58.078 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p4 : 1.01 25044.07 97.83 0.00 0.00 5093.79 184.32 7268.51 00:07:58.078 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p5 : 1.01 25041.13 97.82 0.00 0.00 5092.01 188.97 7119.56 00:07:58.078 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p6 : 1.01 25038.16 97.81 0.00 0.00 5090.47 189.90 6881.25 00:07:58.078 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 Malloc2p7 : 1.01 25034.67 97.79 0.00 0.00 5089.11 197.35 6702.52 00:07:58.078 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 TestPT : 1.01 25031.75 97.78 0.00 0.00 5086.94 193.63 6494.00 00:07:58.078 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 raid0 : 1.01 25028.00 97.77 0.00 0.00 5084.51 271.82 6285.47 00:07:58.078 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 concat0 : 1.01 25023.28 97.75 0.00 0.00 5082.91 284.86 6285.47 00:07:58.078 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 raid1 : 1.01 25018.21 97.73 0.00 0.00 5079.90 456.14 6255.68 00:07:58.078 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:58.078 AIO0 : 1.05 2920.18 11.41 0.00 0.00 42623.94 618.12 172537.74 00:07:58.078 =================================================================================================================== 00:07:58.078 Total : 378599.13 1478.90 0.00 0.00 5394.57 164.77 172537.74 00:07:58.078 00:07:58.078 real 0m2.215s 00:07:58.078 user 0m1.495s 00:07:58.078 sys 0m0.590s 00:07:58.078 21:49:58 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.078 ************************************ 00:07:58.078 END TEST bdev_write_zeroes 00:07:58.078 ************************************ 00:07:58.078 21:49:58 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:58.078 21:49:58 blockdev_general -- 
bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:58.078 21:49:58 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:58.078 21:49:58 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.078 21:49:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:58.078 ************************************ 00:07:58.078 START TEST bdev_json_nonenclosed 00:07:58.078 ************************************ 00:07:58.078 21:49:58 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:58.078 [2024-05-14 21:49:58.621999] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:58.078 [2024-05-14 21:49:58.622212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:58.645 EAL: TSC is not safe to use in SMP mode 00:07:58.645 EAL: TSC is not invariant 00:07:58.645 [2024-05-14 21:49:59.191563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.903 [2024-05-14 21:49:59.288676] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:58.903 [2024-05-14 21:49:59.290998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.903 [2024-05-14 21:49:59.291048] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:58.903 [2024-05-14 21:49:59.291060] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:58.903 [2024-05-14 21:49:59.291069] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.903 00:07:58.903 real 0m0.831s 00:07:58.903 user 0m0.208s 00:07:58.903 sys 0m0.622s 00:07:58.903 ************************************ 00:07:58.903 END TEST bdev_json_nonenclosed 00:07:58.903 ************************************ 00:07:58.903 21:49:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.903 21:49:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:58.903 21:49:59 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:58.903 21:49:59 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:58.903 21:49:59 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.903 21:49:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:58.903 ************************************ 00:07:58.903 START TEST bdev_json_nonarray 00:07:58.903 ************************************ 00:07:58.903 21:49:59 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:59.162 [2024-05-14 21:49:59.496065] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
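bdev_json_nonenclosed, which just finished, and bdev_json_nonarray, which is starting here, are negative tests: bdevperf is pointed at deliberately malformed configs and must fail cleanly, the first with "Invalid JSON configuration: not enclosed in {}" (reported above) and the second with "'subsystems' should be an array" (reported below). The exact contents of nonenclosed.json and nonarray.json are not shown in this log, so the shapes below are assumptions that merely match the two error messages; a well-formed config wraps everything in a top-level object whose "subsystems" key is an array:

    # shape matching "not enclosed in {}": top level is not a JSON object, e.g.
    #   "subsystems": []
    # shape matching "'subsystems' should be an array", e.g.
    #   { "subsystems": {} }
    # minimal skeleton that satisfies both checks (illustrative file name)
    cat > bdev-minimal.json <<'EOF'
    { "subsystems": [] }
    EOF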
00:07:59.162 [2024-05-14 21:49:59.496230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:59.727 EAL: TSC is not safe to use in SMP mode 00:07:59.727 EAL: TSC is not invariant 00:07:59.727 [2024-05-14 21:50:00.032217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.727 [2024-05-14 21:50:00.132737] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:59.727 [2024-05-14 21:50:00.135158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.727 [2024-05-14 21:50:00.135217] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:59.727 [2024-05-14 21:50:00.135228] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:59.727 [2024-05-14 21:50:00.135238] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.727 00:07:59.727 real 0m0.782s 00:07:59.727 user 0m0.206s 00:07:59.727 sys 0m0.574s 00:07:59.727 ************************************ 00:07:59.727 END TEST bdev_json_nonarray 00:07:59.727 ************************************ 00:07:59.727 21:50:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.727 21:50:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:59.727 21:50:00 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:07:59.727 21:50:00 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:07:59.727 21:50:00 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:59.727 21:50:00 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.727 21:50:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:59.985 ************************************ 00:07:59.985 START TEST bdev_qos 00:07:59.985 ************************************ 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1121 -- # qos_test_suite '' 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48518 00:07:59.985 Process qos testing pid: 48518 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48518' 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48518 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 48518 ']' 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:59.985 21:50:00 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.985 21:50:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:07:59.985 [2024-05-14 21:50:00.329712] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:59.985 [2024-05-14 21:50:00.329930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:00.552 EAL: TSC is not safe to use in SMP mode 00:08:00.552 EAL: TSC is not invariant 00:08:00.552 [2024-05-14 21:50:00.925915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.552 [2024-05-14 21:50:01.056159] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:08:00.552 [2024-05-14 21:50:01.059342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 Malloc_0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 [ 00:08:01.118 { 00:08:01.118 "name": "Malloc_0", 00:08:01.118 "aliases": [ 00:08:01.118 "ea3f028c-123b-11ef-8c90-4585f0cfab08" 00:08:01.118 ], 00:08:01.118 "product_name": "Malloc disk", 00:08:01.118 "block_size": 512, 00:08:01.118 "num_blocks": 262144, 00:08:01.118 "uuid": "ea3f028c-123b-11ef-8c90-4585f0cfab08", 00:08:01.118 "assigned_rate_limits": { 00:08:01.118 "rw_ios_per_sec": 0, 00:08:01.118 "rw_mbytes_per_sec": 0, 00:08:01.118 "r_mbytes_per_sec": 0, 00:08:01.118 "w_mbytes_per_sec": 0 00:08:01.118 }, 00:08:01.118 "claimed": false, 00:08:01.118 "zoned": false, 00:08:01.118 "supported_io_types": { 00:08:01.118 "read": true, 
00:08:01.118 "write": true, 00:08:01.118 "unmap": true, 00:08:01.118 "write_zeroes": true, 00:08:01.118 "flush": true, 00:08:01.118 "reset": true, 00:08:01.118 "compare": false, 00:08:01.118 "compare_and_write": false, 00:08:01.118 "abort": true, 00:08:01.118 "nvme_admin": false, 00:08:01.118 "nvme_io": false 00:08:01.118 }, 00:08:01.118 "memory_domains": [ 00:08:01.118 { 00:08:01.118 "dma_device_id": "system", 00:08:01.118 "dma_device_type": 1 00:08:01.118 }, 00:08:01.118 { 00:08:01.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.118 "dma_device_type": 2 00:08:01.118 } 00:08:01.118 ], 00:08:01.118 "driver_specific": {} 00:08:01.118 } 00:08:01.118 ] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 Null_1 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 [ 00:08:01.118 { 00:08:01.118 "name": "Null_1", 00:08:01.118 "aliases": [ 00:08:01.118 "ea43e316-123b-11ef-8c90-4585f0cfab08" 00:08:01.118 ], 00:08:01.118 "product_name": "Null disk", 00:08:01.118 "block_size": 512, 00:08:01.118 "num_blocks": 262144, 00:08:01.118 "uuid": "ea43e316-123b-11ef-8c90-4585f0cfab08", 00:08:01.118 "assigned_rate_limits": { 00:08:01.118 "rw_ios_per_sec": 0, 00:08:01.118 "rw_mbytes_per_sec": 0, 00:08:01.118 "r_mbytes_per_sec": 0, 00:08:01.118 "w_mbytes_per_sec": 0 00:08:01.118 }, 00:08:01.118 "claimed": false, 00:08:01.118 "zoned": false, 00:08:01.118 "supported_io_types": { 00:08:01.118 "read": true, 00:08:01.118 "write": true, 00:08:01.118 "unmap": false, 00:08:01.118 "write_zeroes": true, 00:08:01.118 "flush": false, 00:08:01.118 "reset": true, 00:08:01.118 "compare": false, 00:08:01.118 "compare_and_write": false, 00:08:01.118 "abort": true, 00:08:01.118 "nvme_admin": false, 
00:08:01.118 "nvme_io": false 00:08:01.118 }, 00:08:01.118 "driver_specific": {} 00:08:01.118 } 00:08:01.118 ] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:01.118 21:50:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:08:01.118 Running I/O for 60 seconds... 
00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 540428.77 2161715.07 0.00 0.00 2317312.00 0.00 0.00 ' 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=540428.77 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 540428 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=540428 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=135000 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 135000 -gt 1000 ']' 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 135000 Malloc_0 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.673 21:50:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 21:50:07 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.673 21:50:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 135000 IOPS Malloc_0 00:08:07.673 21:50:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:07.673 21:50:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.674 21:50:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 ************************************ 00:08:07.674 START TEST bdev_qos_iops 00:08:07.674 ************************************ 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 135000 IOPS Malloc_0 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=135000 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:07.674 21:50:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 134949.28 539797.11 0.00 0.00 554580.00 0.00 0.00 ' 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=134949.28 00:08:11.915 21:50:12 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 134949 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=134949 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=121500 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=148500 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 134949 -lt 121500 ']' 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 134949 -gt 148500 ']' 00:08:11.915 00:08:11.915 real 0m5.471s 00:08:11.915 user 0m0.117s 00:08:11.915 sys 0m0.025s 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.915 ************************************ 00:08:11.915 END TEST bdev_qos_iops 00:08:11.915 ************************************ 00:08:11.915 21:50:12 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:08:12.173 21:50:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:08:12.173 21:50:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:08:12.173 21:50:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:08:12.173 21:50:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:12.173 21:50:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:12.173 21:50:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:08:12.173 21:50:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 380054.88 1520219.52 0.00 0.00 1638400.00 0.00 0.00 ' 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=1638400.00 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 1638400 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=1638400 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=160 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 160 -lt 2 ']' 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 160 Null_1 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 160 BANDWIDTH Null_1 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:18.729 21:50:18 
blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.729 21:50:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:18.729 ************************************ 00:08:18.729 START TEST bdev_qos_bw 00:08:18.729 ************************************ 00:08:18.729 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 160 BANDWIDTH Null_1 00:08:18.729 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=160 00:08:18.729 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:08:18.729 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:08:18.729 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:08:18.729 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:08:18.729 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:18.730 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:08:18.730 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:18.730 21:50:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 40958.65 163834.58 0.00 0.00 176620.00 0.00 0.00 ' 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=176620.00 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 176620 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=176620 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=163840 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=147456 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=180224 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 176620 -lt 147456 ']' 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 176620 -gt 180224 ']' 00:08:24.022 00:08:24.022 real 0m5.521s 00:08:24.022 user 0m0.123s 00:08:24.022 sys 0m0.024s 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:08:24.022 ************************************ 00:08:24.022 END TEST bdev_qos_bw 00:08:24.022 ************************************ 00:08:24.022 21:50:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:08:24.022 
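The limits and accept windows in the checks above follow from the measured baseline. The 135000 IOPS cap set on Malloc_0 is consistent with taking roughly a quarter of the measured 540428 IOPS and rounding down to the nearest thousand (an assumption about blockdev.sh, the trace only shows the result), and run_qos_test then passes if a fresh measurement lands within +/-10% of the configured limit. A sketch of that acceptance check with the bandwidth numbers from the trace:

  limit_kb=$((160 * 1024))            # 160 MB/s limit -> 163840 KB/s
  lower=$((limit_kb * 9 / 10))        # 147456
  upper=$((limit_kb * 11 / 10))       # 180224
  measured=176620                     # KB/s reported by iostat.py for Null_1
  [ "$measured" -ge "$lower" ] && [ "$measured" -le "$upper" ] && echo PASS

The IOPS case passes the same way: 134949 measured against the 121500..148500 window around the 135000 limit.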
21:50:23 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.022 21:50:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:24.022 21:50:23 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.022 21:50:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:08:24.022 21:50:23 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:24.022 21:50:23 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.022 21:50:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:24.022 ************************************ 00:08:24.022 START TEST bdev_qos_ro_bw 00:08:24.022 ************************************ 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:24.022 21:50:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.09 2048.35 0.00 0.00 2156.00 0.00 0.00 ' 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2156.00 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2156 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2156 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2156 -lt 1843 ']' 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2156 -gt 2252 ']' 
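The read-only case that just passed uses the same +/-10% window: 2 MB/s -> 2048 KB/s, window 1843..2252, measured 2156. Between them, the three QoS runs exercise each flavor of bdev_set_qos_limit seen in this suite; written out as the underlying RPCs (assuming rpc_cmd forwards to scripts/rpc.py against the bdevperf RPC socket, which the trace does not show directly):

  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 135000 Malloc_0   # total IOPS cap
  scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 160 Null_1     # total bandwidth cap
  scripts/rpc.py bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0      # read-only bandwidth cap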
00:08:29.337 00:08:29.337 real 0m5.498s 00:08:29.337 user 0m0.101s 00:08:29.337 sys 0m0.070s 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.337 21:50:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:08:29.337 ************************************ 00:08:29.337 END TEST bdev_qos_ro_bw 00:08:29.337 ************************************ 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:29.337 00:08:29.337 Latency(us) 00:08:29.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.337 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:08:29.337 Malloc_0 : 28.10 181199.46 707.81 0.00 0.00 1400.23 441.25 503314.50 00:08:29.337 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:08:29.337 Null_1 : 28.14 272267.05 1063.54 0.00 0.00 939.88 84.71 30742.22 00:08:29.337 =================================================================================================================== 00:08:29.337 Total : 453466.51 1771.35 0.00 0.00 1123.69 84.71 503314.50 00:08:29.337 0 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48518 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 48518 ']' 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 48518 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # tail -1 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps -c -o command 48518 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:08:29.337 killing process with pid 48518 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48518' 00:08:29.337 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 48518 00:08:29.337 Received shutdown signal, test time was about 28.151704 seconds 00:08:29.337 00:08:29.337 Latency(us) 00:08:29.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.337 =================================================================================================================== 00:08:29.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.337 21:50:29 blockdev_general.bdev_qos 
-- common/autotest_common.sh@970 -- # wait 48518 00:08:29.608 21:50:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:08:29.608 00:08:29.608 real 0m29.604s 00:08:29.608 user 0m30.290s 00:08:29.608 sys 0m0.897s 00:08:29.608 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.608 21:50:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:29.608 ************************************ 00:08:29.608 END TEST bdev_qos 00:08:29.608 ************************************ 00:08:29.608 21:50:29 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:08:29.608 21:50:29 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:29.608 21:50:29 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.608 21:50:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:29.608 ************************************ 00:08:29.608 START TEST bdev_qd_sampling 00:08:29.608 ************************************ 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1121 -- # qd_sampling_test_suite '' 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48743 00:08:29.608 Process bdev QD sampling period testing pid: 48743 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48743' 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48743 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 48743 ']' 00:08:29.608 21:50:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.609 21:50:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:29.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.609 21:50:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.609 21:50:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:29.609 21:50:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:29.609 [2024-05-14 21:50:29.990747] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:08:29.609 [2024-05-14 21:50:29.991019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:30.177 EAL: TSC is not safe to use in SMP mode 00:08:30.177 EAL: TSC is not invariant 00:08:30.177 [2024-05-14 21:50:30.571248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:30.177 [2024-05-14 21:50:30.667070] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
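The sampling pass that follows enables queue-depth tracking on the test bdev and reads the aggregated counters back over RPC; the two calls it relies on, written out under the assumption that rpc_cmd wraps scripts/rpc.py, are:

  scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10   # poll queue depth every 10 ms
  scripts/rpc.py bdev_get_iostat -b Malloc_QD \
    | jq -r '.bdevs[0].queue_depth_polling_period'          # the test expects 10 back

The iostat dump further down also allows a rough cross-check: weighted_io_time/io_time = 184320/320 = 576, which is on the order of the two 256-deep randread jobs this run configures (-m 0x3 -q 256); reading that ratio as an average queue depth is an inference, not something the test asserts.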
00:08:30.177 [2024-05-14 21:50:30.667137] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:08:30.177 [2024-05-14 21:50:30.670146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.177 [2024-05-14 21:50:30.670139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:30.745 Malloc_QD 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:30.745 [ 00:08:30.745 { 00:08:30.745 "name": "Malloc_QD", 00:08:30.745 "aliases": [ 00:08:30.745 "fbee44df-123b-11ef-8c90-4585f0cfab08" 00:08:30.745 ], 00:08:30.745 "product_name": "Malloc disk", 00:08:30.745 "block_size": 512, 00:08:30.745 "num_blocks": 262144, 00:08:30.745 "uuid": "fbee44df-123b-11ef-8c90-4585f0cfab08", 00:08:30.745 "assigned_rate_limits": { 00:08:30.745 "rw_ios_per_sec": 0, 00:08:30.745 "rw_mbytes_per_sec": 0, 00:08:30.745 "r_mbytes_per_sec": 0, 00:08:30.745 "w_mbytes_per_sec": 0 00:08:30.745 }, 00:08:30.745 "claimed": false, 00:08:30.745 "zoned": false, 00:08:30.745 "supported_io_types": { 00:08:30.745 "read": true, 00:08:30.745 "write": true, 00:08:30.745 "unmap": true, 00:08:30.745 "write_zeroes": true, 00:08:30.745 "flush": true, 00:08:30.745 "reset": true, 00:08:30.745 "compare": false, 00:08:30.745 "compare_and_write": false, 00:08:30.745 "abort": true, 00:08:30.745 "nvme_admin": false, 00:08:30.745 "nvme_io": false 00:08:30.745 }, 00:08:30.745 "memory_domains": [ 00:08:30.745 { 00:08:30.745 "dma_device_id": "system", 00:08:30.745 
"dma_device_type": 1 00:08:30.745 }, 00:08:30.745 { 00:08:30.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.745 "dma_device_type": 2 00:08:30.745 } 00:08:30.745 ], 00:08:30.745 "driver_specific": {} 00:08:30.745 } 00:08:30.745 ] 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:08:30.745 21:50:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.745 Running I/O for 5 seconds... 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:08:33.281 "tick_rate": 2200008650, 00:08:33.281 "ticks": 698985815682, 00:08:33.281 "bdevs": [ 00:08:33.281 { 00:08:33.281 "name": "Malloc_QD", 00:08:33.281 "bytes_read": 11253355008, 00:08:33.281 "num_read_ops": 2747395, 00:08:33.281 "bytes_written": 0, 00:08:33.281 "num_write_ops": 0, 00:08:33.281 "bytes_unmapped": 0, 00:08:33.281 "num_unmap_ops": 0, 00:08:33.281 "bytes_copied": 0, 00:08:33.281 "num_copy_ops": 0, 00:08:33.281 "read_latency_ticks": 2301632172062, 00:08:33.281 "max_read_latency_ticks": 2051193, 00:08:33.281 "min_read_latency_ticks": 44157, 00:08:33.281 "write_latency_ticks": 0, 00:08:33.281 "max_write_latency_ticks": 0, 00:08:33.281 "min_write_latency_ticks": 0, 00:08:33.281 "unmap_latency_ticks": 0, 00:08:33.281 "max_unmap_latency_ticks": 0, 00:08:33.281 "min_unmap_latency_ticks": 0, 00:08:33.281 "copy_latency_ticks": 0, 00:08:33.281 "max_copy_latency_ticks": 0, 00:08:33.281 "min_copy_latency_ticks": 0, 00:08:33.281 "io_error": {}, 00:08:33.281 "queue_depth_polling_period": 10, 00:08:33.281 "queue_depth": 512, 00:08:33.281 "io_time": 320, 00:08:33.281 "weighted_io_time": 184320 00:08:33.281 } 00:08:33.281 ] 00:08:33.281 }' 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # 
qd_sampling_period=10 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:33.281 00:08:33.281 Latency(us) 00:08:33.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.281 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:08:33.281 Malloc_QD : 2.07 706502.98 2759.78 0.00 0.00 362.04 59.11 536.20 00:08:33.281 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:08:33.281 Malloc_QD : 2.07 635590.16 2482.77 0.00 0.00 402.43 97.75 934.63 00:08:33.281 =================================================================================================================== 00:08:33.281 Total : 1342093.14 5242.55 0.00 0.00 381.17 59.11 934.63 00:08:33.281 0 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48743 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 48743 ']' 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 48743 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps -c -o command 48743 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # tail -1 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:08:33.281 killing process with pid 48743 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48743' 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 48743 00:08:33.281 Received shutdown signal, test time was about 2.110437 seconds 00:08:33.281 00:08:33.281 Latency(us) 00:08:33.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.281 =================================================================================================================== 00:08:33.281 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 48743 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:08:33.281 00:08:33.281 real 0m3.543s 00:08:33.281 user 0m6.260s 00:08:33.281 sys 0m0.741s 00:08:33.281 ************************************ 00:08:33.281 END TEST bdev_qd_sampling 00:08:33.281 ************************************ 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:33.281 21:50:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:08:33.281 21:50:33 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:08:33.281 21:50:33 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:33.281 21:50:33 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.281 21:50:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:33.281 ************************************ 00:08:33.281 START TEST bdev_error 00:08:33.281 ************************************ 00:08:33.281 21:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:08:33.281 21:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:08:33.281 21:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:08:33.281 21:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:08:33.281 21:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48786 00:08:33.281 Process error testing pid: 48786 00:08:33.281 21:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48786' 00:08:33.281 21:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48786 00:08:33.281 21:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:08:33.281 21:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 48786 ']' 00:08:33.281 21:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.281 21:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:33.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.281 21:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.282 21:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:33.282 21:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:33.282 [2024-05-14 21:50:33.576715] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:08:33.282 [2024-05-14 21:50:33.576986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:33.548 EAL: TSC is not safe to use in SMP mode 00:08:33.549 EAL: TSC is not invariant 00:08:33.549 [2024-05-14 21:50:34.112343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.808 [2024-05-14 21:50:34.208949] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:08:33.808 [2024-05-14 21:50:34.211285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:08:34.067 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.067 Dev_1 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.067 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.067 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.067 [ 00:08:34.067 { 00:08:34.067 "name": "Dev_1", 00:08:34.067 "aliases": [ 00:08:34.067 "fe0b26bb-123b-11ef-8c90-4585f0cfab08" 00:08:34.067 ], 00:08:34.067 "product_name": "Malloc disk", 00:08:34.067 "block_size": 512, 00:08:34.067 "num_blocks": 262144, 00:08:34.067 "uuid": "fe0b26bb-123b-11ef-8c90-4585f0cfab08", 00:08:34.067 "assigned_rate_limits": { 00:08:34.067 "rw_ios_per_sec": 0, 00:08:34.067 "rw_mbytes_per_sec": 0, 00:08:34.067 "r_mbytes_per_sec": 0, 00:08:34.067 "w_mbytes_per_sec": 0 00:08:34.067 }, 00:08:34.067 "claimed": false, 00:08:34.067 "zoned": false, 00:08:34.067 "supported_io_types": { 00:08:34.067 "read": true, 00:08:34.067 "write": true, 00:08:34.067 "unmap": true, 00:08:34.067 "write_zeroes": true, 00:08:34.067 "flush": true, 00:08:34.067 "reset": true, 00:08:34.067 "compare": false, 00:08:34.067 "compare_and_write": false, 00:08:34.067 "abort": true, 00:08:34.067 "nvme_admin": false, 00:08:34.067 "nvme_io": false 00:08:34.326 }, 00:08:34.326 "memory_domains": [ 00:08:34.326 { 00:08:34.326 "dma_device_id": "system", 00:08:34.326 "dma_device_type": 1 00:08:34.326 }, 00:08:34.326 { 00:08:34.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.326 "dma_device_type": 2 00:08:34.326 } 00:08:34.326 ], 00:08:34.326 "driver_specific": {} 00:08:34.326 } 00:08:34.326 ] 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.326 21:50:34 
blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:08:34.326 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.326 true 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.326 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.326 Dev_2 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.326 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.326 [ 00:08:34.326 { 00:08:34.326 "name": "Dev_2", 00:08:34.326 "aliases": [ 00:08:34.326 "fe11dc36-123b-11ef-8c90-4585f0cfab08" 00:08:34.326 ], 00:08:34.326 "product_name": "Malloc disk", 00:08:34.326 "block_size": 512, 00:08:34.326 "num_blocks": 262144, 00:08:34.326 "uuid": "fe11dc36-123b-11ef-8c90-4585f0cfab08", 00:08:34.326 "assigned_rate_limits": { 00:08:34.326 "rw_ios_per_sec": 0, 00:08:34.326 "rw_mbytes_per_sec": 0, 00:08:34.326 "r_mbytes_per_sec": 0, 00:08:34.326 "w_mbytes_per_sec": 0 00:08:34.326 }, 00:08:34.326 "claimed": false, 00:08:34.326 "zoned": false, 00:08:34.326 "supported_io_types": { 00:08:34.326 "read": true, 00:08:34.326 "write": true, 00:08:34.326 "unmap": true, 00:08:34.326 "write_zeroes": true, 00:08:34.326 "flush": true, 00:08:34.326 "reset": true, 00:08:34.326 "compare": false, 00:08:34.326 "compare_and_write": false, 00:08:34.326 "abort": true, 00:08:34.326 "nvme_admin": false, 00:08:34.326 "nvme_io": false 00:08:34.326 }, 00:08:34.326 "memory_domains": [ 00:08:34.326 { 00:08:34.326 "dma_device_id": "system", 00:08:34.326 "dma_device_type": 1 00:08:34.326 }, 00:08:34.326 { 00:08:34.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.326 "dma_device_type": 2 00:08:34.326 } 00:08:34.326 ], 
00:08:34.326 "driver_specific": {} 00:08:34.326 } 00:08:34.326 ] 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.326 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:08:34.327 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:08:34.327 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.327 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:34.327 21:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.327 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:08:34.327 21:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:08:34.327 Running I/O for 5 seconds... 00:08:35.262 21:50:35 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48786 00:08:35.262 Process is existed as continue on error is set. Pid: 48786 00:08:35.262 21:50:35 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 48786' 00:08:35.262 21:50:35 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:08:35.262 21:50:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.262 21:50:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:35.262 21:50:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.262 21:50:35 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:08:35.262 21:50:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.262 21:50:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:35.262 21:50:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.262 21:50:35 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:08:35.521 Timeout while waiting for response: 00:08:35.521 00:08:35.521 00:08:39.710 00:08:39.710 Latency(us) 00:08:39.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.710 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:39.710 EE_Dev_1 : 0.94 286803.97 1120.33 5.34 0.00 55.52 25.02 157.32 00:08:39.710 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:39.710 Dev_2 : 5.00 643273.63 2512.79 0.00 0.00 24.62 14.02 27048.39 00:08:39.710 =================================================================================================================== 00:08:39.710 Total : 930077.60 3633.12 5.34 0.00 27.00 14.02 27048.39 00:08:40.647 21:50:40 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48786 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 48786 ']' 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 48786 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps -c -o command 48786 
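The error pass above layers an error-injecting bdev over Dev_1 and arms it to fail a fixed number of I/Os; because this bdevperf instance was started with -f (the "continue on error is set" message), the job survives the failures and the run completes on Dev_2. The two injection calls from the trace, written as RPCs under the same rpc_cmd-wraps-rpc.py assumption:

  scripts/rpc.py bdev_error_create Dev_1                            # exposes EE_Dev_1 on top of Dev_1
  scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5  # fail the next 5 I/Os on EE_Dev_1

The result table is consistent with that: EE_Dev_1 reports about 5.34 failures/s over its 0.94 s runtime, roughly the five injected failures, while Dev_2 runs the full 5 seconds cleanly.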
00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # tail -1 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:08:40.647 killing process with pid 48786 00:08:40.647 Received shutdown signal, test time was about 5.000000 seconds 00:08:40.647 00:08:40.647 Latency(us) 00:08:40.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.647 =================================================================================================================== 00:08:40.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48786' 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 48786 00:08:40.647 21:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 48786 00:08:40.647 21:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48826 00:08:40.647 21:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:08:40.647 Process error testing pid: 48826 00:08:40.647 21:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 48826' 00:08:40.647 21:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48826 00:08:40.647 21:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 48826 ']' 00:08:40.647 21:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.647 21:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:40.647 21:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.647 21:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:40.647 21:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:40.647 [2024-05-14 21:50:41.138138] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:08:40.647 [2024-05-14 21:50:41.138460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:41.214 EAL: TSC is not safe to use in SMP mode 00:08:41.214 EAL: TSC is not invariant 00:08:41.215 [2024-05-14 21:50:41.678270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.215 [2024-05-14 21:50:41.772395] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:08:41.215 [2024-05-14 21:50:41.774680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:08:41.783 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.783 Dev_1 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.783 [ 00:08:41.783 { 00:08:41.783 "name": "Dev_1", 00:08:41.783 "aliases": [ 00:08:41.783 "028f083c-123c-11ef-8c90-4585f0cfab08" 00:08:41.783 ], 00:08:41.783 "product_name": "Malloc disk", 00:08:41.783 "block_size": 512, 00:08:41.783 "num_blocks": 262144, 00:08:41.783 "uuid": "028f083c-123c-11ef-8c90-4585f0cfab08", 00:08:41.783 "assigned_rate_limits": { 00:08:41.783 "rw_ios_per_sec": 0, 00:08:41.783 "rw_mbytes_per_sec": 0, 00:08:41.783 "r_mbytes_per_sec": 0, 00:08:41.783 "w_mbytes_per_sec": 0 00:08:41.783 }, 00:08:41.783 "claimed": false, 00:08:41.783 "zoned": false, 00:08:41.783 "supported_io_types": { 00:08:41.783 "read": true, 00:08:41.783 "write": true, 00:08:41.783 "unmap": true, 00:08:41.783 "write_zeroes": true, 00:08:41.783 "flush": true, 00:08:41.783 "reset": true, 00:08:41.783 "compare": false, 00:08:41.783 "compare_and_write": false, 00:08:41.783 "abort": true, 00:08:41.783 "nvme_admin": false, 00:08:41.783 "nvme_io": false 00:08:41.783 }, 00:08:41.783 "memory_domains": [ 00:08:41.783 { 00:08:41.783 "dma_device_id": "system", 00:08:41.783 "dma_device_type": 1 00:08:41.783 }, 00:08:41.783 { 00:08:41.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.783 "dma_device_type": 2 00:08:41.783 } 00:08:41.783 ], 00:08:41.783 "driver_specific": {} 00:08:41.783 } 00:08:41.783 ] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.783 21:50:42 
blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:08:41.783 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.783 true 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.783 Dev_2 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.783 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.783 [ 00:08:41.783 { 00:08:41.783 "name": "Dev_2", 00:08:41.783 "aliases": [ 00:08:41.783 "0295215c-123c-11ef-8c90-4585f0cfab08" 00:08:41.783 ], 00:08:41.783 "product_name": "Malloc disk", 00:08:41.783 "block_size": 512, 00:08:41.783 "num_blocks": 262144, 00:08:41.783 "uuid": "0295215c-123c-11ef-8c90-4585f0cfab08", 00:08:41.783 "assigned_rate_limits": { 00:08:41.783 "rw_ios_per_sec": 0, 00:08:41.784 "rw_mbytes_per_sec": 0, 00:08:41.784 "r_mbytes_per_sec": 0, 00:08:41.784 "w_mbytes_per_sec": 0 00:08:41.784 }, 00:08:41.784 "claimed": false, 00:08:41.784 "zoned": false, 00:08:41.784 "supported_io_types": { 00:08:41.784 "read": true, 00:08:41.784 "write": true, 00:08:41.784 "unmap": true, 00:08:41.784 "write_zeroes": true, 00:08:41.784 "flush": true, 00:08:41.784 "reset": true, 00:08:41.784 "compare": false, 00:08:41.784 "compare_and_write": false, 00:08:41.784 "abort": true, 00:08:41.784 "nvme_admin": false, 00:08:41.784 "nvme_io": false 00:08:41.784 }, 00:08:41.784 "memory_domains": [ 00:08:41.784 { 00:08:41.784 "dma_device_id": "system", 00:08:41.784 "dma_device_type": 1 00:08:41.784 }, 00:08:41.784 { 00:08:41.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.784 "dma_device_type": 2 00:08:41.784 } 00:08:41.784 ], 
00:08:41.784 "driver_specific": {} 00:08:41.784 } 00:08:41.784 ] 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:08:41.784 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.784 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48826 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48826 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:08:41.784 21:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.784 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48826 00:08:42.042 Running I/O for 5 seconds... 00:08:42.042 task offset: 150112 on job bdev=EE_Dev_1 fails 00:08:42.042 00:08:42.042 Latency(us) 00:08:42.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.042 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:42.042 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:08:42.042 EE_Dev_1 : 0.00 157142.86 613.84 35714.29 0.00 67.69 24.55 127.53 00:08:42.042 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:08:42.042 Dev_2 : 0.00 190476.19 744.05 0.00 0.00 40.49 30.95 57.48 00:08:42.042 =================================================================================================================== 00:08:42.043 Total : 347619.05 1357.89 35714.29 0.00 52.94 24.55 127.53 00:08:42.043 [2024-05-14 21:50:42.406388] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.043 request: 00:08:42.043 { 00:08:42.043 "method": "perform_tests", 00:08:42.043 "req_id": 1 00:08:42.043 } 00:08:42.043 Got JSON-RPC error response 00:08:42.043 response: 00:08:42.043 { 00:08:42.043 "code": -32603, 00:08:42.043 "message": "bdevperf failed with error Operation not permitted" 00:08:42.043 } 00:08:42.301 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:08:42.301 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.301 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:08:42.301 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:08:42.301 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:08:42.301 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.301 
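The second error pass is the negative case: the same failure injection is armed, but this bdevperf instance was started without the -f flag the first pass used, so perform_tests is expected to abort. The harness wraps the call in NOT, and the trace shows the expected-failure path: the JSON-RPC layer returns -32603 ("bdevperf failed with error Operation not permitted"), wait collects exit status 255, and the helper folds it down (255 -> 127 -> 1) so the inverted result counts as a pass. A much simplified stand-in for that helper, only to show the shape of the pattern (the real NOT in autotest_common.sh also validates its argument and normalizes exit codes above 128):

  NOT() { ! "$@"; }                                        # succeed only when the wrapped command fails
  NOT examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests  # passes here because perform_tests errors out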
00:08:42.301 real 0m9.071s 00:08:42.301 user 0m9.168s 00:08:42.301 sys 0m1.296s 00:08:42.302 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.302 21:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:08:42.302 ************************************ 00:08:42.302 END TEST bdev_error 00:08:42.302 ************************************ 00:08:42.302 21:50:42 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:08:42.302 21:50:42 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:42.302 21:50:42 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.302 21:50:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:42.302 ************************************ 00:08:42.302 START TEST bdev_stat 00:08:42.302 ************************************ 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48857 00:08:42.302 Process Bdev IO statistics testing pid: 48857 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48857' 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48857 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 48857 ']' 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.302 21:50:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:42.302 [2024-05-14 21:50:42.684756] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:08:42.302 [2024-05-14 21:50:42.684982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:42.869 EAL: TSC is not safe to use in SMP mode 00:08:42.869 EAL: TSC is not invariant 00:08:42.869 [2024-05-14 21:50:43.228533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:42.869 [2024-05-14 21:50:43.329033] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:42.869 [2024-05-14 21:50:43.329122] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:08:42.869 [2024-05-14 21:50:43.332606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.869 [2024-05-14 21:50:43.332591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:43.435 Malloc_STAT 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:43.435 [ 00:08:43.435 { 00:08:43.435 "name": "Malloc_STAT", 00:08:43.435 "aliases": [ 00:08:43.435 "037950c5-123c-11ef-8c90-4585f0cfab08" 00:08:43.435 ], 00:08:43.435 "product_name": "Malloc disk", 00:08:43.435 "block_size": 512, 00:08:43.435 "num_blocks": 262144, 00:08:43.435 "uuid": "037950c5-123c-11ef-8c90-4585f0cfab08", 00:08:43.435 "assigned_rate_limits": { 00:08:43.435 "rw_ios_per_sec": 0, 00:08:43.435 "rw_mbytes_per_sec": 0, 00:08:43.435 "r_mbytes_per_sec": 0, 00:08:43.435 "w_mbytes_per_sec": 0 00:08:43.435 }, 00:08:43.435 "claimed": false, 00:08:43.435 "zoned": false, 00:08:43.435 "supported_io_types": { 00:08:43.435 "read": true, 00:08:43.435 "write": true, 00:08:43.435 "unmap": true, 00:08:43.435 "write_zeroes": true, 00:08:43.435 "flush": true, 00:08:43.435 "reset": true, 00:08:43.435 "compare": false, 00:08:43.435 "compare_and_write": false, 00:08:43.435 "abort": true, 00:08:43.435 "nvme_admin": false, 00:08:43.435 "nvme_io": false 00:08:43.435 }, 00:08:43.435 "memory_domains": [ 00:08:43.435 { 00:08:43.435 "dma_device_id": "system", 00:08:43.435 "dma_device_type": 1 00:08:43.435 }, 00:08:43.435 { 00:08:43.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.435 "dma_device_type": 2 00:08:43.435 } 00:08:43.435 ], 00:08:43.435 "driver_specific": {} 00:08:43.435 } 00:08:43.435 ] 00:08:43.435 
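The JSON dump above is the bdev_get_bdevs output that the waitforbdev step inspects before the stat run begins. A minimal sketch of that wait pattern, using only RPCs that appear in this log (bdev_wait_for_examine and bdev_get_bdevs -b ... -t ...); the helper name and the direct rpc.py invocation are illustrative, since the traced test goes through its rpc_cmd wrapper and the app's default socket instead:

    #!/usr/bin/env bash
    # Minimal waitforbdev-style helper reconstructed from the trace above.
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    wait_for_bdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}    # 2000 is the value seen in the trace
        $rpc bdev_wait_for_examine       # let registered examine callbacks finish
        # -t asks bdev_get_bdevs to keep waiting for the bdev to show up
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }

    # usage in the stat flow: create the malloc bdev, then wait for it
    # rpc.py bdev_malloc_create -b Malloc_STAT 128 512
    # wait_for_bdev Malloc_STAT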
21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:08:43.435 21:50:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:43.435 Running I/O for 10 seconds... 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:08:45.337 "tick_rate": 2200008650, 00:08:45.337 "ticks": 726577133140, 00:08:45.337 "bdevs": [ 00:08:45.337 { 00:08:45.337 "name": "Malloc_STAT", 00:08:45.337 "bytes_read": 11033154048, 00:08:45.337 "num_read_ops": 2693635, 00:08:45.337 "bytes_written": 0, 00:08:45.337 "num_write_ops": 0, 00:08:45.337 "bytes_unmapped": 0, 00:08:45.337 "num_unmap_ops": 0, 00:08:45.337 "bytes_copied": 0, 00:08:45.337 "num_copy_ops": 0, 00:08:45.337 "read_latency_ticks": 2169175775166, 00:08:45.337 "max_read_latency_ticks": 1822094, 00:08:45.337 "min_read_latency_ticks": 45254, 00:08:45.337 "write_latency_ticks": 0, 00:08:45.337 "max_write_latency_ticks": 0, 00:08:45.337 "min_write_latency_ticks": 0, 00:08:45.337 "unmap_latency_ticks": 0, 00:08:45.337 "max_unmap_latency_ticks": 0, 00:08:45.337 "min_unmap_latency_ticks": 0, 00:08:45.337 "copy_latency_ticks": 0, 00:08:45.337 "max_copy_latency_ticks": 0, 00:08:45.337 "min_copy_latency_ticks": 0, 00:08:45.337 "io_error": {} 00:08:45.337 } 00:08:45.337 ] 00:08:45.337 }' 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=2693635 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.337 21:50:45 
blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:08:45.337 "tick_rate": 2200008650, 00:08:45.337 "ticks": 726633359534, 00:08:45.337 "name": "Malloc_STAT", 00:08:45.337 "channels": [ 00:08:45.337 { 00:08:45.337 "thread_id": 2, 00:08:45.337 "bytes_read": 5647630336, 00:08:45.337 "num_read_ops": 1378816, 00:08:45.337 "bytes_written": 0, 00:08:45.337 "num_write_ops": 0, 00:08:45.337 "bytes_unmapped": 0, 00:08:45.337 "num_unmap_ops": 0, 00:08:45.337 "bytes_copied": 0, 00:08:45.337 "num_copy_ops": 0, 00:08:45.337 "read_latency_ticks": 1098879788239, 00:08:45.337 "max_read_latency_ticks": 1822094, 00:08:45.337 "min_read_latency_ticks": 685320, 00:08:45.337 "write_latency_ticks": 0, 00:08:45.337 "max_write_latency_ticks": 0, 00:08:45.337 "min_write_latency_ticks": 0, 00:08:45.337 "unmap_latency_ticks": 0, 00:08:45.337 "max_unmap_latency_ticks": 0, 00:08:45.337 "min_unmap_latency_ticks": 0, 00:08:45.337 "copy_latency_ticks": 0, 00:08:45.337 "max_copy_latency_ticks": 0, 00:08:45.337 "min_copy_latency_ticks": 0 00:08:45.337 }, 00:08:45.337 { 00:08:45.337 "thread_id": 3, 00:08:45.337 "bytes_read": 5523898368, 00:08:45.337 "num_read_ops": 1348608, 00:08:45.337 "bytes_written": 0, 00:08:45.337 "num_write_ops": 0, 00:08:45.337 "bytes_unmapped": 0, 00:08:45.337 "num_unmap_ops": 0, 00:08:45.337 "bytes_copied": 0, 00:08:45.337 "num_copy_ops": 0, 00:08:45.337 "read_latency_ticks": 1098995886015, 00:08:45.337 "max_read_latency_ticks": 1810996, 00:08:45.337 "min_read_latency_ticks": 707024, 00:08:45.337 "write_latency_ticks": 0, 00:08:45.337 "max_write_latency_ticks": 0, 00:08:45.337 "min_write_latency_ticks": 0, 00:08:45.337 "unmap_latency_ticks": 0, 00:08:45.337 "max_unmap_latency_ticks": 0, 00:08:45.337 "min_unmap_latency_ticks": 0, 00:08:45.337 "copy_latency_ticks": 0, 00:08:45.337 "max_copy_latency_ticks": 0, 00:08:45.337 "min_copy_latency_ticks": 0 00:08:45.337 } 00:08:45.337 ] 00:08:45.337 }' 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1378816 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1378816 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1348608 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=2727424 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.337 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:08:45.337 "tick_rate": 2200008650, 00:08:45.337 "ticks": 726706764578, 00:08:45.337 "bdevs": [ 00:08:45.337 { 00:08:45.337 "name": "Malloc_STAT", 00:08:45.337 "bytes_read": 11358212608, 00:08:45.337 "num_read_ops": 2772995, 00:08:45.337 "bytes_written": 0, 00:08:45.337 "num_write_ops": 0, 00:08:45.337 "bytes_unmapped": 0, 00:08:45.337 "num_unmap_ops": 0, 00:08:45.337 "bytes_copied": 0, 00:08:45.337 "num_copy_ops": 0, 00:08:45.337 
"read_latency_ticks": 2235362426697, 00:08:45.337 "max_read_latency_ticks": 1822094, 00:08:45.337 "min_read_latency_ticks": 45254, 00:08:45.337 "write_latency_ticks": 0, 00:08:45.337 "max_write_latency_ticks": 0, 00:08:45.337 "min_write_latency_ticks": 0, 00:08:45.337 "unmap_latency_ticks": 0, 00:08:45.337 "max_unmap_latency_ticks": 0, 00:08:45.337 "min_unmap_latency_ticks": 0, 00:08:45.337 "copy_latency_ticks": 0, 00:08:45.337 "max_copy_latency_ticks": 0, 00:08:45.337 "min_copy_latency_ticks": 0, 00:08:45.337 "io_error": {} 00:08:45.337 } 00:08:45.337 ] 00:08:45.337 }' 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=2772995 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 2727424 -lt 2693635 ']' 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 2727424 -gt 2772995 ']' 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:45.338 00:08:45.338 Latency(us) 00:08:45.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.338 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:08:45.338 Malloc_STAT : 2.01 705241.54 2754.85 0.00 0.00 362.67 78.66 830.37 00:08:45.338 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:08:45.338 Malloc_STAT : 2.01 690205.67 2696.12 0.00 0.00 370.59 74.01 826.64 00:08:45.338 =================================================================================================================== 00:08:45.338 Total : 1395447.21 5450.97 0.00 0.00 366.59 74.01 830.37 00:08:45.338 0 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48857 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 48857 ']' 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 48857 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps -c -o command 48857 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # tail -1 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:08:45.338 killing process with pid 48857 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48857' 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 48857 00:08:45.338 Received shutdown signal, test time was about 2.044534 seconds 00:08:45.338 00:08:45.338 Latency(us) 00:08:45.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.338 
=================================================================================================================== 00:08:45.338 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.338 21:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 48857 00:08:45.597 21:50:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:08:45.597 00:08:45.597 real 0m3.433s 00:08:45.597 user 0m6.113s 00:08:45.597 sys 0m0.696s 00:08:45.597 ************************************ 00:08:45.597 END TEST bdev_stat 00:08:45.597 ************************************ 00:08:45.597 21:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:45.597 21:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:08:45.597 21:50:46 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:08:45.597 00:08:45.597 real 1m34.069s 00:08:45.597 user 4m30.805s 00:08:45.597 sys 0m28.135s 00:08:45.597 21:50:46 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:45.597 21:50:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:45.597 ************************************ 00:08:45.597 END TEST blockdev_general 00:08:45.597 ************************************ 00:08:45.597 21:50:46 -- spdk/autotest.sh@186 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:45.597 21:50:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:45.597 21:50:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:45.597 21:50:46 -- common/autotest_common.sh@10 -- # set +x 00:08:45.597 ************************************ 00:08:45.597 START TEST bdev_raid 00:08:45.597 ************************************ 00:08:45.597 21:50:46 bdev_raid -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:45.856 * Looking for test storage... 
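The two '[' comparisons near the end of the stat run above carry the test's core assertion: the per-channel read counts from the -c snapshot, summed, must land between the first and second whole-bdev iostat snapshots, because the per-channel query ran in between them. With this run's numbers (variable layout is illustrative):

    #!/usr/bin/env bash
    # Consistency check from the bdev_stat run above, using this run's numbers.
    io_count1=2693635                                   # num_read_ops, first bdev_get_iostat
    io_count_per_channel_all=$((1378816 + 1348608))     # thread_id 2 + thread_id 3, -c snapshot
    io_count2=2772995                                   # num_read_ops, second bdev_get_iostat

    if [ "$io_count_per_channel_all" -lt "$io_count1" ] || \
       [ "$io_count_per_channel_all" -gt "$io_count2" ]; then
        echo "per-channel total $io_count_per_channel_all outside [$io_count1, $io_count2]" >&2
        exit 1
    fi
    echo "ok: $io_count1 <= $io_count_per_channel_all <= $io_count2"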
00:08:45.856 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:08:45.856 21:50:46 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:45.856 21:50:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:45.856 21:50:46 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:08:45.856 21:50:46 bdev_raid -- bdev/bdev_raid.sh@800 -- # trap 'on_error_exit;' ERR 00:08:45.856 21:50:46 bdev_raid -- bdev/bdev_raid.sh@802 -- # base_blocklen=512 00:08:45.856 21:50:46 bdev_raid -- bdev/bdev_raid.sh@804 -- # uname -s 00:08:45.856 21:50:46 bdev_raid -- bdev/bdev_raid.sh@804 -- # '[' FreeBSD = Linux ']' 00:08:45.856 21:50:46 bdev_raid -- bdev/bdev_raid.sh@811 -- # run_test raid0_resize_test raid0_resize_test 00:08:45.856 21:50:46 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:45.856 21:50:46 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:45.856 21:50:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.856 ************************************ 00:08:45.856 START TEST raid0_resize_test 00:08:45.856 ************************************ 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # raid_pid=48957 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # echo 'Process raid pid: 48957' 00:08:45.856 Process raid pid: 48957 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # waitforlisten 48957 /var/tmp/spdk-raid.sock 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 48957 ']' 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:45.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:45.856 21:50:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.856 [2024-05-14 21:50:46.368159] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:08:45.856 [2024-05-14 21:50:46.368351] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:46.423 EAL: TSC is not safe to use in SMP mode 00:08:46.423 EAL: TSC is not invariant 00:08:46.423 [2024-05-14 21:50:46.930584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.681 [2024-05-14 21:50:47.031875] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:46.681 [2024-05-14 21:50:47.034241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.681 [2024-05-14 21:50:47.035059] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.681 [2024-05-14 21:50:47.035079] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.938 21:50:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.938 21:50:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:08:46.938 21:50:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:08:47.196 Base_1 00:08:47.196 21:50:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:08:47.453 Base_2 00:08:47.453 21:50:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@363 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:08:47.716 [2024-05-14 21:50:48.136465] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:47.716 [2024-05-14 21:50:48.137050] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:47.716 [2024-05-14 21:50:48.137090] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ac3e300 00:08:47.716 [2024-05-14 21:50:48.137095] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:47.716 [2024-05-14 21:50:48.137130] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac9ce20 00:08:47.716 [2024-05-14 21:50:48.137196] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ac3e300 00:08:47.716 [2024-05-14 21:50:48.137200] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x82ac3e300 00:08:47.716 [2024-05-14 21:50:48.137233] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.716 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:08:47.973 [2024-05-14 21:50:48.396466] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:47.973 [2024-05-14 21:50:48.396494] bdev_raid.c:2230:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:47.973 true 00:08:47.973 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:08:47.973 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # jq '.[].num_blocks' 00:08:48.234 [2024-05-14 21:50:48.632523] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.234 21:50:48 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # blkcnt=131072 00:08:48.234 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # raid_size_mb=64 00:08:48.234 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # '[' 64 '!=' 64 ']' 00:08:48.234 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:08:48.492 [2024-05-14 21:50:48.900497] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:48.492 [2024-05-14 21:50:48.900527] bdev_raid.c:2230:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:48.492 [2024-05-14 21:50:48.900558] bdev_raid.c:2244:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:48.492 true 00:08:48.492 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:08:48.492 21:50:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # jq '.[].num_blocks' 00:08:48.750 [2024-05-14 21:50:49.136528] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # blkcnt=262144 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # raid_size_mb=128 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 48957 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 48957 ']' 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 48957 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps -c -o command 48957 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # tail -1 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:08:48.750 killing process with pid 48957 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48957' 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 48957 00:08:48.750 [2024-05-14 21:50:49.166752] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.750 [2024-05-14 21:50:49.166781] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.750 [2024-05-14 21:50:49.166793] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.750 [2024-05-14 21:50:49.166798] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ac3e300 name Raid, state offline 00:08:48.750 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 48957 00:08:48.750 [2024-05-14 21:50:49.166942] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.009 21:50:49 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:49.009 00:08:49.009 real 0m3.007s 00:08:49.009 user 0m4.481s 00:08:49.009 sys 0m0.758s 00:08:49.009 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:49.009 21:50:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.009 ************************************ 00:08:49.009 END TEST raid0_resize_test 00:08:49.009 ************************************ 00:08:49.009 21:50:49 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:08:49.009 21:50:49 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:08:49.009 21:50:49 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:49.009 21:50:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:49.009 21:50:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:49.009 21:50:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.009 ************************************ 00:08:49.009 START TEST raid_state_function_test 00:08:49.009 ************************************ 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:08:49.009 21:50:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=49007 00:08:49.009 Process raid pid: 49007 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 49007' 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 49007 /var/tmp/spdk-raid.sock 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 49007 ']' 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:49.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:49.009 21:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.009 [2024-05-14 21:50:49.420231] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:08:49.009 [2024-05-14 21:50:49.420521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:49.576 EAL: TSC is not safe to use in SMP mode 00:08:49.576 EAL: TSC is not invariant 00:08:49.576 [2024-05-14 21:50:49.963944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.576 [2024-05-14 21:50:50.068090] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
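The state-function test that starts here creates a raid0 bdev before its base bdevs exist and then checks, through bdev_raid_get_bdevs and jq, that the array reports "configuring" until every base bdev is present and "online" afterwards. A condensed sketch of those transitions using the RPCs and socket path seen in the traces that follow (the traced test also deletes and re-creates the raid along the way, which this sketch skips):

    #!/usr/bin/env bash
    # Condensed raid0 state walk-through based on the RPCs traced below.
    set -e
    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    raid_state() {
        # Report the current state ("configuring" or "online") of Existed_Raid.
        $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    }

    # Create the raid0 bdev first; both base bdevs are still missing.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    [ "$(raid_state)" = configuring ]       # 0 of 2 base bdevs discovered

    $rpc bdev_malloc_create 32 512 -b BaseBdev1
    [ "$(raid_state)" = configuring ]       # 1 of 2 base bdevs discovered

    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    [ "$(raid_state)" = online ]            # all base bdevs present and claimed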
00:08:49.576 [2024-05-14 21:50:50.070788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.576 [2024-05-14 21:50:50.071724] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.576 [2024-05-14 21:50:50.071742] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:50.141 [2024-05-14 21:50:50.685888] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.141 [2024-05-14 21:50:50.685954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.141 [2024-05-14 21:50:50.685960] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.141 [2024-05-14 21:50:50.685970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:50.141 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:50.142 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:50.142 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:50.142 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:50.142 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:50.142 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:50.142 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.142 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.400 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:50.400 "name": "Existed_Raid", 00:08:50.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.400 "strip_size_kb": 64, 00:08:50.400 "state": "configuring", 00:08:50.400 "raid_level": "raid0", 00:08:50.400 "superblock": false, 00:08:50.400 "num_base_bdevs": 2, 00:08:50.400 "num_base_bdevs_discovered": 0, 00:08:50.400 "num_base_bdevs_operational": 2, 00:08:50.400 "base_bdevs_list": [ 00:08:50.400 { 00:08:50.400 "name": "BaseBdev1", 00:08:50.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.400 "is_configured": false, 00:08:50.400 "data_offset": 0, 00:08:50.400 "data_size": 0 00:08:50.400 }, 00:08:50.400 { 00:08:50.400 "name": 
"BaseBdev2", 00:08:50.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.400 "is_configured": false, 00:08:50.400 "data_offset": 0, 00:08:50.400 "data_size": 0 00:08:50.400 } 00:08:50.400 ] 00:08:50.400 }' 00:08:50.400 21:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:50.400 21:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.011 21:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:51.011 [2024-05-14 21:50:51.589889] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.011 [2024-05-14 21:50:51.589920] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b3ab300 name Existed_Raid, state configuring 00:08:51.269 21:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:51.269 [2024-05-14 21:50:51.841913] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.269 [2024-05-14 21:50:51.841978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.269 [2024-05-14 21:50:51.841984] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:51.269 [2024-05-14 21:50:51.841993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:51.526 21:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:51.784 [2024-05-14 21:50:52.130991] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.784 BaseBdev1 00:08:51.784 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:08:51.784 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:08:51.784 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:51.784 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:51.784 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:51.784 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:51.784 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:52.041 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.041 [ 00:08:52.041 { 00:08:52.041 "name": "BaseBdev1", 00:08:52.041 "aliases": [ 00:08:52.041 "087998f8-123c-11ef-8c90-4585f0cfab08" 00:08:52.041 ], 00:08:52.041 "product_name": "Malloc disk", 00:08:52.041 "block_size": 512, 00:08:52.041 "num_blocks": 65536, 00:08:52.041 "uuid": "087998f8-123c-11ef-8c90-4585f0cfab08", 00:08:52.041 "assigned_rate_limits": { 00:08:52.041 "rw_ios_per_sec": 0, 00:08:52.041 "rw_mbytes_per_sec": 0, 00:08:52.041 "r_mbytes_per_sec": 0, 00:08:52.041 
"w_mbytes_per_sec": 0 00:08:52.041 }, 00:08:52.041 "claimed": true, 00:08:52.041 "claim_type": "exclusive_write", 00:08:52.041 "zoned": false, 00:08:52.041 "supported_io_types": { 00:08:52.041 "read": true, 00:08:52.041 "write": true, 00:08:52.041 "unmap": true, 00:08:52.041 "write_zeroes": true, 00:08:52.041 "flush": true, 00:08:52.041 "reset": true, 00:08:52.041 "compare": false, 00:08:52.041 "compare_and_write": false, 00:08:52.041 "abort": true, 00:08:52.041 "nvme_admin": false, 00:08:52.041 "nvme_io": false 00:08:52.041 }, 00:08:52.041 "memory_domains": [ 00:08:52.041 { 00:08:52.041 "dma_device_id": "system", 00:08:52.041 "dma_device_type": 1 00:08:52.041 }, 00:08:52.041 { 00:08:52.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.041 "dma_device_type": 2 00:08:52.041 } 00:08:52.041 ], 00:08:52.041 "driver_specific": {} 00:08:52.041 } 00:08:52.041 ] 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.299 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:52.299 "name": "Existed_Raid", 00:08:52.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.300 "strip_size_kb": 64, 00:08:52.300 "state": "configuring", 00:08:52.300 "raid_level": "raid0", 00:08:52.300 "superblock": false, 00:08:52.300 "num_base_bdevs": 2, 00:08:52.300 "num_base_bdevs_discovered": 1, 00:08:52.300 "num_base_bdevs_operational": 2, 00:08:52.300 "base_bdevs_list": [ 00:08:52.300 { 00:08:52.300 "name": "BaseBdev1", 00:08:52.300 "uuid": "087998f8-123c-11ef-8c90-4585f0cfab08", 00:08:52.300 "is_configured": true, 00:08:52.300 "data_offset": 0, 00:08:52.300 "data_size": 65536 00:08:52.300 }, 00:08:52.300 { 00:08:52.300 "name": "BaseBdev2", 00:08:52.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.300 "is_configured": false, 00:08:52.300 "data_offset": 0, 00:08:52.300 "data_size": 0 00:08:52.300 } 00:08:52.300 ] 00:08:52.300 }' 00:08:52.300 21:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:52.300 
21:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:52.866 [2024-05-14 21:50:53.425931] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.866 [2024-05-14 21:50:53.425976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b3ab300 name Existed_Raid, state configuring 00:08:52.866 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:53.124 [2024-05-14 21:50:53.665949] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.124 [2024-05-14 21:50:53.666822] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.124 [2024-05-14 21:50:53.666874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.124 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.381 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:53.381 "name": "Existed_Raid", 00:08:53.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.381 "strip_size_kb": 64, 00:08:53.381 "state": "configuring", 00:08:53.381 "raid_level": "raid0", 00:08:53.381 "superblock": false, 00:08:53.382 "num_base_bdevs": 2, 00:08:53.382 "num_base_bdevs_discovered": 1, 00:08:53.382 "num_base_bdevs_operational": 2, 00:08:53.382 "base_bdevs_list": [ 00:08:53.382 { 00:08:53.382 "name": "BaseBdev1", 00:08:53.382 "uuid": "087998f8-123c-11ef-8c90-4585f0cfab08", 00:08:53.382 "is_configured": true, 00:08:53.382 "data_offset": 0, 00:08:53.382 
"data_size": 65536 00:08:53.382 }, 00:08:53.382 { 00:08:53.382 "name": "BaseBdev2", 00:08:53.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.382 "is_configured": false, 00:08:53.382 "data_offset": 0, 00:08:53.382 "data_size": 0 00:08:53.382 } 00:08:53.382 ] 00:08:53.382 }' 00:08:53.382 21:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:53.382 21:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.948 [2024-05-14 21:50:54.478114] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.948 [2024-05-14 21:50:54.478155] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b3ab300 00:08:53.948 [2024-05-14 21:50:54.478160] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:53.948 [2024-05-14 21:50:54.478182] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b409ec0 00:08:53.948 [2024-05-14 21:50:54.478278] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b3ab300 00:08:53.948 [2024-05-14 21:50:54.478283] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b3ab300 00:08:53.948 [2024-05-14 21:50:54.478335] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.948 BaseBdev2 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:53.948 21:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:54.205 21:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.791 [ 00:08:54.791 { 00:08:54.791 "name": "BaseBdev2", 00:08:54.791 "aliases": [ 00:08:54.791 "09dfe1f3-123c-11ef-8c90-4585f0cfab08" 00:08:54.791 ], 00:08:54.791 "product_name": "Malloc disk", 00:08:54.791 "block_size": 512, 00:08:54.791 "num_blocks": 65536, 00:08:54.791 "uuid": "09dfe1f3-123c-11ef-8c90-4585f0cfab08", 00:08:54.791 "assigned_rate_limits": { 00:08:54.791 "rw_ios_per_sec": 0, 00:08:54.791 "rw_mbytes_per_sec": 0, 00:08:54.791 "r_mbytes_per_sec": 0, 00:08:54.791 "w_mbytes_per_sec": 0 00:08:54.791 }, 00:08:54.791 "claimed": true, 00:08:54.791 "claim_type": "exclusive_write", 00:08:54.791 "zoned": false, 00:08:54.791 "supported_io_types": { 00:08:54.791 "read": true, 00:08:54.791 "write": true, 00:08:54.791 "unmap": true, 00:08:54.791 "write_zeroes": true, 00:08:54.791 "flush": true, 00:08:54.791 "reset": true, 00:08:54.791 "compare": false, 
00:08:54.791 "compare_and_write": false, 00:08:54.791 "abort": true, 00:08:54.791 "nvme_admin": false, 00:08:54.791 "nvme_io": false 00:08:54.791 }, 00:08:54.791 "memory_domains": [ 00:08:54.791 { 00:08:54.791 "dma_device_id": "system", 00:08:54.791 "dma_device_type": 1 00:08:54.791 }, 00:08:54.791 { 00:08:54.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.791 "dma_device_type": 2 00:08:54.791 } 00:08:54.791 ], 00:08:54.792 "driver_specific": {} 00:08:54.792 } 00:08:54.792 ] 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:54.792 "name": "Existed_Raid", 00:08:54.792 "uuid": "09dfe95b-123c-11ef-8c90-4585f0cfab08", 00:08:54.792 "strip_size_kb": 64, 00:08:54.792 "state": "online", 00:08:54.792 "raid_level": "raid0", 00:08:54.792 "superblock": false, 00:08:54.792 "num_base_bdevs": 2, 00:08:54.792 "num_base_bdevs_discovered": 2, 00:08:54.792 "num_base_bdevs_operational": 2, 00:08:54.792 "base_bdevs_list": [ 00:08:54.792 { 00:08:54.792 "name": "BaseBdev1", 00:08:54.792 "uuid": "087998f8-123c-11ef-8c90-4585f0cfab08", 00:08:54.792 "is_configured": true, 00:08:54.792 "data_offset": 0, 00:08:54.792 "data_size": 65536 00:08:54.792 }, 00:08:54.792 { 00:08:54.792 "name": "BaseBdev2", 00:08:54.792 "uuid": "09dfe1f3-123c-11ef-8c90-4585f0cfab08", 00:08:54.792 "is_configured": true, 00:08:54.792 "data_offset": 0, 00:08:54.792 "data_size": 65536 00:08:54.792 } 00:08:54.792 ] 00:08:54.792 }' 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:54.792 21:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # 
verify_raid_bdev_properties Existed_Raid 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:55.365 [2024-05-14 21:50:55.902033] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.365 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:08:55.365 "name": "Existed_Raid", 00:08:55.365 "aliases": [ 00:08:55.365 "09dfe95b-123c-11ef-8c90-4585f0cfab08" 00:08:55.365 ], 00:08:55.365 "product_name": "Raid Volume", 00:08:55.365 "block_size": 512, 00:08:55.365 "num_blocks": 131072, 00:08:55.365 "uuid": "09dfe95b-123c-11ef-8c90-4585f0cfab08", 00:08:55.365 "assigned_rate_limits": { 00:08:55.365 "rw_ios_per_sec": 0, 00:08:55.365 "rw_mbytes_per_sec": 0, 00:08:55.365 "r_mbytes_per_sec": 0, 00:08:55.365 "w_mbytes_per_sec": 0 00:08:55.365 }, 00:08:55.365 "claimed": false, 00:08:55.365 "zoned": false, 00:08:55.365 "supported_io_types": { 00:08:55.365 "read": true, 00:08:55.365 "write": true, 00:08:55.365 "unmap": true, 00:08:55.365 "write_zeroes": true, 00:08:55.365 "flush": true, 00:08:55.365 "reset": true, 00:08:55.365 "compare": false, 00:08:55.366 "compare_and_write": false, 00:08:55.366 "abort": false, 00:08:55.366 "nvme_admin": false, 00:08:55.366 "nvme_io": false 00:08:55.366 }, 00:08:55.366 "memory_domains": [ 00:08:55.366 { 00:08:55.366 "dma_device_id": "system", 00:08:55.366 "dma_device_type": 1 00:08:55.366 }, 00:08:55.366 { 00:08:55.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.366 "dma_device_type": 2 00:08:55.366 }, 00:08:55.366 { 00:08:55.366 "dma_device_id": "system", 00:08:55.366 "dma_device_type": 1 00:08:55.366 }, 00:08:55.366 { 00:08:55.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.366 "dma_device_type": 2 00:08:55.366 } 00:08:55.366 ], 00:08:55.366 "driver_specific": { 00:08:55.366 "raid": { 00:08:55.366 "uuid": "09dfe95b-123c-11ef-8c90-4585f0cfab08", 00:08:55.366 "strip_size_kb": 64, 00:08:55.366 "state": "online", 00:08:55.366 "raid_level": "raid0", 00:08:55.366 "superblock": false, 00:08:55.366 "num_base_bdevs": 2, 00:08:55.366 "num_base_bdevs_discovered": 2, 00:08:55.366 "num_base_bdevs_operational": 2, 00:08:55.366 "base_bdevs_list": [ 00:08:55.366 { 00:08:55.366 "name": "BaseBdev1", 00:08:55.366 "uuid": "087998f8-123c-11ef-8c90-4585f0cfab08", 00:08:55.366 "is_configured": true, 00:08:55.366 "data_offset": 0, 00:08:55.366 "data_size": 65536 00:08:55.366 }, 00:08:55.366 { 00:08:55.366 "name": "BaseBdev2", 00:08:55.366 "uuid": "09dfe1f3-123c-11ef-8c90-4585f0cfab08", 00:08:55.366 "is_configured": true, 00:08:55.366 "data_offset": 0, 00:08:55.366 "data_size": 65536 00:08:55.366 } 00:08:55.366 ] 00:08:55.366 } 00:08:55.366 } 00:08:55.366 }' 00:08:55.366 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 
-- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.366 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:08:55.366 BaseBdev2' 00:08:55.366 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:08:55.366 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:55.366 21:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:08:55.624 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:08:55.624 "name": "BaseBdev1", 00:08:55.624 "aliases": [ 00:08:55.624 "087998f8-123c-11ef-8c90-4585f0cfab08" 00:08:55.624 ], 00:08:55.624 "product_name": "Malloc disk", 00:08:55.624 "block_size": 512, 00:08:55.624 "num_blocks": 65536, 00:08:55.624 "uuid": "087998f8-123c-11ef-8c90-4585f0cfab08", 00:08:55.624 "assigned_rate_limits": { 00:08:55.624 "rw_ios_per_sec": 0, 00:08:55.624 "rw_mbytes_per_sec": 0, 00:08:55.624 "r_mbytes_per_sec": 0, 00:08:55.624 "w_mbytes_per_sec": 0 00:08:55.624 }, 00:08:55.624 "claimed": true, 00:08:55.624 "claim_type": "exclusive_write", 00:08:55.624 "zoned": false, 00:08:55.624 "supported_io_types": { 00:08:55.624 "read": true, 00:08:55.624 "write": true, 00:08:55.624 "unmap": true, 00:08:55.624 "write_zeroes": true, 00:08:55.624 "flush": true, 00:08:55.624 "reset": true, 00:08:55.624 "compare": false, 00:08:55.624 "compare_and_write": false, 00:08:55.624 "abort": true, 00:08:55.624 "nvme_admin": false, 00:08:55.624 "nvme_io": false 00:08:55.624 }, 00:08:55.624 "memory_domains": [ 00:08:55.624 { 00:08:55.624 "dma_device_id": "system", 00:08:55.624 "dma_device_type": 1 00:08:55.624 }, 00:08:55.624 { 00:08:55.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.624 "dma_device_type": 2 00:08:55.624 } 00:08:55.624 ], 00:08:55.624 "driver_specific": {} 00:08:55.624 }' 00:08:55.624 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:55.883 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:08:56.142 "name": "BaseBdev2", 00:08:56.142 "aliases": [ 00:08:56.142 "09dfe1f3-123c-11ef-8c90-4585f0cfab08" 00:08:56.142 ], 00:08:56.142 "product_name": "Malloc disk", 00:08:56.142 "block_size": 512, 00:08:56.142 "num_blocks": 65536, 00:08:56.142 "uuid": "09dfe1f3-123c-11ef-8c90-4585f0cfab08", 00:08:56.142 "assigned_rate_limits": { 00:08:56.142 "rw_ios_per_sec": 0, 00:08:56.142 "rw_mbytes_per_sec": 0, 00:08:56.142 "r_mbytes_per_sec": 0, 00:08:56.142 "w_mbytes_per_sec": 0 00:08:56.142 }, 00:08:56.142 "claimed": true, 00:08:56.142 "claim_type": "exclusive_write", 00:08:56.142 "zoned": false, 00:08:56.142 "supported_io_types": { 00:08:56.142 "read": true, 00:08:56.142 "write": true, 00:08:56.142 "unmap": true, 00:08:56.142 "write_zeroes": true, 00:08:56.142 "flush": true, 00:08:56.142 "reset": true, 00:08:56.142 "compare": false, 00:08:56.142 "compare_and_write": false, 00:08:56.142 "abort": true, 00:08:56.142 "nvme_admin": false, 00:08:56.142 "nvme_io": false 00:08:56.142 }, 00:08:56.142 "memory_domains": [ 00:08:56.142 { 00:08:56.142 "dma_device_id": "system", 00:08:56.142 "dma_device_type": 1 00:08:56.142 }, 00:08:56.142 { 00:08:56.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.142 "dma_device_type": 2 00:08:56.142 } 00:08:56.142 ], 00:08:56.142 "driver_specific": {} 00:08:56.142 }' 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:08:56.142 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:56.400 [2024-05-14 21:50:56.798027] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.400 [2024-05-14 21:50:56.798070] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.400 [2024-05-14 21:50:56.798094] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # 
has_redundancy raid0 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.400 21:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.659 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:56.659 "name": "Existed_Raid", 00:08:56.659 "uuid": "09dfe95b-123c-11ef-8c90-4585f0cfab08", 00:08:56.659 "strip_size_kb": 64, 00:08:56.659 "state": "offline", 00:08:56.659 "raid_level": "raid0", 00:08:56.659 "superblock": false, 00:08:56.659 "num_base_bdevs": 2, 00:08:56.659 "num_base_bdevs_discovered": 1, 00:08:56.659 "num_base_bdevs_operational": 1, 00:08:56.659 "base_bdevs_list": [ 00:08:56.659 { 00:08:56.659 "name": null, 00:08:56.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.659 "is_configured": false, 00:08:56.659 "data_offset": 0, 00:08:56.659 "data_size": 65536 00:08:56.659 }, 00:08:56.659 { 00:08:56.659 "name": "BaseBdev2", 00:08:56.659 "uuid": "09dfe1f3-123c-11ef-8c90-4585f0cfab08", 00:08:56.659 "is_configured": true, 00:08:56.659 "data_offset": 0, 00:08:56.659 "data_size": 65536 00:08:56.659 } 00:08:56.659 ] 00:08:56.659 }' 00:08:56.659 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:56.659 21:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.917 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.917 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.176 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.176 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:08:57.434 21:50:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:08:57.434 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.434 21:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:57.692 [2024-05-14 21:50:58.036007] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.692 [2024-05-14 21:50:58.036044] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b3ab300 name Existed_Raid, state offline 00:08:57.692 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.692 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.692 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.692 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 49007 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 49007 ']' 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 49007 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 49007 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:08:57.951 killing process with pid 49007 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49007' 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 49007 00:08:57.951 [2024-05-14 21:50:58.314469] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.951 [2024-05-14 21:50:58.314507] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 49007 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:08:57.951 00:08:57.951 real 0m9.097s 00:08:57.951 user 0m15.803s 00:08:57.951 sys 0m1.615s 00:08:57.951 ************************************ 00:08:57.951 END TEST raid_state_function_test 00:08:57.951 ************************************ 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:57.951 21:50:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.951 21:50:58 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:57.951 21:50:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:57.951 21:50:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:57.951 21:50:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.209 ************************************ 00:08:58.209 START TEST raid_state_function_test_sb 00:08:58.209 ************************************ 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:08:58.209 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=49282 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 49282' 00:08:58.210 Process raid pid: 49282 
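(Reference sketch, not output from this build: the state-function flow that raid_state_function_test completed above can be replayed by hand against a bdev_svc instance listening on /var/tmp/spdk-raid.sock. The exact bdev_raid_create invocation for this non-superblock run is not shown in this excerpt; its flags are inferred from the raid_level, strip_size_kb and malloc sizes reported in the JSON above, so treat them as assumptions.)

    # assumed invocation: strip size 64 KB, raid0, two 32 MiB / 512 B malloc bdevs as in the run above
    RPC='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # state stays "configuring" until both base bdevs exist
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_malloc_create 32 512 -b BaseBdev2
    $RPC bdev_raid_get_bdevs all          # Existed_Raid reports "online", 2 of 2 base bdevs discovered
    $RPC bdev_malloc_delete BaseBdev1     # raid0 has no redundancy, so the array cannot stay online
    $RPC bdev_raid_get_bdevs all          # Existed_Raid reports "offline", 1 of 2 base bdevs discovered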
00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 49282 /var/tmp/spdk-raid.sock 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 49282 ']' 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:58.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:58.210 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.210 [2024-05-14 21:50:58.557552] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:08:58.210 [2024-05-14 21:50:58.557779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:58.775 EAL: TSC is not safe to use in SMP mode 00:08:58.775 EAL: TSC is not invariant 00:08:58.775 [2024-05-14 21:50:59.083600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.775 [2024-05-14 21:50:59.178607] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:58.775 [2024-05-14 21:50:59.181217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.775 [2024-05-14 21:50:59.182135] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.775 [2024-05-14 21:50:59.182154] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.033 21:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:59.033 21:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:08:59.033 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:59.598 [2024-05-14 21:50:59.884208] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.598 [2024-05-14 21:50:59.884284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.598 [2024-05-14 21:50:59.884290] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.598 [2024-05-14 21:50:59.884309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.598 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.856 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:59.856 "name": "Existed_Raid", 00:08:59.856 "uuid": "0d18ce42-123c-11ef-8c90-4585f0cfab08", 00:08:59.856 "strip_size_kb": 64, 00:08:59.856 "state": "configuring", 00:08:59.856 "raid_level": "raid0", 00:08:59.856 "superblock": true, 00:08:59.856 "num_base_bdevs": 2, 00:08:59.856 "num_base_bdevs_discovered": 0, 00:08:59.856 "num_base_bdevs_operational": 2, 00:08:59.856 "base_bdevs_list": [ 00:08:59.856 { 00:08:59.856 "name": "BaseBdev1", 00:08:59.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.856 "is_configured": false, 00:08:59.856 "data_offset": 0, 00:08:59.856 "data_size": 0 
00:08:59.856 }, 00:08:59.856 { 00:08:59.856 "name": "BaseBdev2", 00:08:59.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.856 "is_configured": false, 00:08:59.856 "data_offset": 0, 00:08:59.856 "data_size": 0 00:08:59.856 } 00:08:59.856 ] 00:08:59.856 }' 00:08:59.856 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:59.856 21:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.114 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:00.371 [2024-05-14 21:51:00.832204] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.371 [2024-05-14 21:51:00.832243] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ded9300 name Existed_Raid, state configuring 00:09:00.371 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:00.628 [2024-05-14 21:51:01.140221] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.628 [2024-05-14 21:51:01.140291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.628 [2024-05-14 21:51:01.140297] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.628 [2024-05-14 21:51:01.140306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.628 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.886 [2024-05-14 21:51:01.465273] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.886 BaseBdev1 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:01.144 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.710 [ 00:09:01.710 { 00:09:01.710 "name": "BaseBdev1", 00:09:01.710 "aliases": [ 00:09:01.710 "0e09e696-123c-11ef-8c90-4585f0cfab08" 00:09:01.710 ], 00:09:01.710 "product_name": "Malloc disk", 00:09:01.710 "block_size": 512, 00:09:01.710 "num_blocks": 65536, 00:09:01.710 "uuid": "0e09e696-123c-11ef-8c90-4585f0cfab08", 00:09:01.710 "assigned_rate_limits": { 00:09:01.710 "rw_ios_per_sec": 0, 
00:09:01.710 "rw_mbytes_per_sec": 0, 00:09:01.710 "r_mbytes_per_sec": 0, 00:09:01.710 "w_mbytes_per_sec": 0 00:09:01.710 }, 00:09:01.710 "claimed": true, 00:09:01.710 "claim_type": "exclusive_write", 00:09:01.710 "zoned": false, 00:09:01.710 "supported_io_types": { 00:09:01.710 "read": true, 00:09:01.710 "write": true, 00:09:01.710 "unmap": true, 00:09:01.710 "write_zeroes": true, 00:09:01.710 "flush": true, 00:09:01.710 "reset": true, 00:09:01.710 "compare": false, 00:09:01.710 "compare_and_write": false, 00:09:01.710 "abort": true, 00:09:01.710 "nvme_admin": false, 00:09:01.710 "nvme_io": false 00:09:01.710 }, 00:09:01.710 "memory_domains": [ 00:09:01.710 { 00:09:01.710 "dma_device_id": "system", 00:09:01.710 "dma_device_type": 1 00:09:01.710 }, 00:09:01.710 { 00:09:01.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.710 "dma_device_type": 2 00:09:01.710 } 00:09:01.710 ], 00:09:01.710 "driver_specific": {} 00:09:01.710 } 00:09:01.710 ] 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.710 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.969 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:01.969 "name": "Existed_Raid", 00:09:01.969 "uuid": "0dd87597-123c-11ef-8c90-4585f0cfab08", 00:09:01.969 "strip_size_kb": 64, 00:09:01.969 "state": "configuring", 00:09:01.969 "raid_level": "raid0", 00:09:01.969 "superblock": true, 00:09:01.969 "num_base_bdevs": 2, 00:09:01.969 "num_base_bdevs_discovered": 1, 00:09:01.969 "num_base_bdevs_operational": 2, 00:09:01.969 "base_bdevs_list": [ 00:09:01.969 { 00:09:01.969 "name": "BaseBdev1", 00:09:01.969 "uuid": "0e09e696-123c-11ef-8c90-4585f0cfab08", 00:09:01.969 "is_configured": true, 00:09:01.969 "data_offset": 2048, 00:09:01.969 "data_size": 63488 00:09:01.969 }, 00:09:01.969 { 00:09:01.969 "name": "BaseBdev2", 00:09:01.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.969 "is_configured": false, 00:09:01.969 "data_offset": 0, 00:09:01.969 "data_size": 0 00:09:01.969 } 00:09:01.969 ] 
00:09:01.969 }' 00:09:01.969 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:01.969 21:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.228 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:02.486 [2024-05-14 21:51:02.876248] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.486 [2024-05-14 21:51:02.876289] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ded9300 name Existed_Raid, state configuring 00:09:02.486 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:02.744 [2024-05-14 21:51:03.152278] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.744 [2024-05-14 21:51:03.153095] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.744 [2024-05-14 21:51:03.153141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.744 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.002 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:03.002 "name": "Existed_Raid", 00:09:03.002 "uuid": "0f0b7965-123c-11ef-8c90-4585f0cfab08", 00:09:03.002 "strip_size_kb": 64, 00:09:03.002 "state": "configuring", 00:09:03.002 "raid_level": "raid0", 00:09:03.002 "superblock": true, 00:09:03.002 "num_base_bdevs": 2, 00:09:03.002 "num_base_bdevs_discovered": 1, 00:09:03.002 "num_base_bdevs_operational": 2, 00:09:03.002 "base_bdevs_list": [ 
00:09:03.002 { 00:09:03.002 "name": "BaseBdev1", 00:09:03.002 "uuid": "0e09e696-123c-11ef-8c90-4585f0cfab08", 00:09:03.002 "is_configured": true, 00:09:03.002 "data_offset": 2048, 00:09:03.002 "data_size": 63488 00:09:03.002 }, 00:09:03.002 { 00:09:03.002 "name": "BaseBdev2", 00:09:03.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.002 "is_configured": false, 00:09:03.002 "data_offset": 0, 00:09:03.002 "data_size": 0 00:09:03.002 } 00:09:03.002 ] 00:09:03.002 }' 00:09:03.002 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:03.002 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.260 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.519 [2024-05-14 21:51:04.028442] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.519 [2024-05-14 21:51:04.028538] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ded9300 00:09:03.519 [2024-05-14 21:51:04.028546] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:03.519 [2024-05-14 21:51:04.028569] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82df37ec0 00:09:03.519 [2024-05-14 21:51:04.028617] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ded9300 00:09:03.519 [2024-05-14 21:51:04.028622] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ded9300 00:09:03.519 [2024-05-14 21:51:04.028644] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.519 BaseBdev2 00:09:03.519 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:09:03.519 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:03.519 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:03.519 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:03.519 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:03.519 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:03.519 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:03.777 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.036 [ 00:09:04.036 { 00:09:04.036 "name": "BaseBdev2", 00:09:04.036 "aliases": [ 00:09:04.036 "0f912528-123c-11ef-8c90-4585f0cfab08" 00:09:04.036 ], 00:09:04.036 "product_name": "Malloc disk", 00:09:04.036 "block_size": 512, 00:09:04.036 "num_blocks": 65536, 00:09:04.036 "uuid": "0f912528-123c-11ef-8c90-4585f0cfab08", 00:09:04.036 "assigned_rate_limits": { 00:09:04.036 "rw_ios_per_sec": 0, 00:09:04.036 "rw_mbytes_per_sec": 0, 00:09:04.036 "r_mbytes_per_sec": 0, 00:09:04.036 "w_mbytes_per_sec": 0 00:09:04.036 }, 00:09:04.036 "claimed": true, 00:09:04.036 "claim_type": "exclusive_write", 00:09:04.036 "zoned": false, 00:09:04.036 
"supported_io_types": { 00:09:04.036 "read": true, 00:09:04.036 "write": true, 00:09:04.036 "unmap": true, 00:09:04.036 "write_zeroes": true, 00:09:04.036 "flush": true, 00:09:04.036 "reset": true, 00:09:04.036 "compare": false, 00:09:04.036 "compare_and_write": false, 00:09:04.036 "abort": true, 00:09:04.036 "nvme_admin": false, 00:09:04.036 "nvme_io": false 00:09:04.036 }, 00:09:04.036 "memory_domains": [ 00:09:04.036 { 00:09:04.036 "dma_device_id": "system", 00:09:04.036 "dma_device_type": 1 00:09:04.036 }, 00:09:04.036 { 00:09:04.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.036 "dma_device_type": 2 00:09:04.036 } 00:09:04.036 ], 00:09:04.036 "driver_specific": {} 00:09:04.036 } 00:09:04.036 ] 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.036 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.294 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:04.294 "name": "Existed_Raid", 00:09:04.294 "uuid": "0f0b7965-123c-11ef-8c90-4585f0cfab08", 00:09:04.294 "strip_size_kb": 64, 00:09:04.294 "state": "online", 00:09:04.294 "raid_level": "raid0", 00:09:04.294 "superblock": true, 00:09:04.294 "num_base_bdevs": 2, 00:09:04.294 "num_base_bdevs_discovered": 2, 00:09:04.294 "num_base_bdevs_operational": 2, 00:09:04.294 "base_bdevs_list": [ 00:09:04.294 { 00:09:04.294 "name": "BaseBdev1", 00:09:04.294 "uuid": "0e09e696-123c-11ef-8c90-4585f0cfab08", 00:09:04.294 "is_configured": true, 00:09:04.294 "data_offset": 2048, 00:09:04.294 "data_size": 63488 00:09:04.294 }, 00:09:04.294 { 00:09:04.294 "name": "BaseBdev2", 00:09:04.294 "uuid": "0f912528-123c-11ef-8c90-4585f0cfab08", 00:09:04.294 "is_configured": true, 00:09:04.294 "data_offset": 2048, 00:09:04.294 "data_size": 63488 00:09:04.294 } 00:09:04.294 ] 00:09:04.294 }' 00:09:04.294 21:51:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:04.294 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:04.860 [2024-05-14 21:51:05.384349] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:04.860 "name": "Existed_Raid", 00:09:04.860 "aliases": [ 00:09:04.860 "0f0b7965-123c-11ef-8c90-4585f0cfab08" 00:09:04.860 ], 00:09:04.860 "product_name": "Raid Volume", 00:09:04.860 "block_size": 512, 00:09:04.860 "num_blocks": 126976, 00:09:04.860 "uuid": "0f0b7965-123c-11ef-8c90-4585f0cfab08", 00:09:04.860 "assigned_rate_limits": { 00:09:04.860 "rw_ios_per_sec": 0, 00:09:04.860 "rw_mbytes_per_sec": 0, 00:09:04.860 "r_mbytes_per_sec": 0, 00:09:04.860 "w_mbytes_per_sec": 0 00:09:04.860 }, 00:09:04.860 "claimed": false, 00:09:04.860 "zoned": false, 00:09:04.860 "supported_io_types": { 00:09:04.860 "read": true, 00:09:04.860 "write": true, 00:09:04.860 "unmap": true, 00:09:04.860 "write_zeroes": true, 00:09:04.860 "flush": true, 00:09:04.860 "reset": true, 00:09:04.860 "compare": false, 00:09:04.860 "compare_and_write": false, 00:09:04.860 "abort": false, 00:09:04.860 "nvme_admin": false, 00:09:04.860 "nvme_io": false 00:09:04.860 }, 00:09:04.860 "memory_domains": [ 00:09:04.860 { 00:09:04.860 "dma_device_id": "system", 00:09:04.860 "dma_device_type": 1 00:09:04.860 }, 00:09:04.860 { 00:09:04.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.860 "dma_device_type": 2 00:09:04.860 }, 00:09:04.860 { 00:09:04.860 "dma_device_id": "system", 00:09:04.860 "dma_device_type": 1 00:09:04.860 }, 00:09:04.860 { 00:09:04.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.860 "dma_device_type": 2 00:09:04.860 } 00:09:04.860 ], 00:09:04.860 "driver_specific": { 00:09:04.860 "raid": { 00:09:04.860 "uuid": "0f0b7965-123c-11ef-8c90-4585f0cfab08", 00:09:04.860 "strip_size_kb": 64, 00:09:04.860 "state": "online", 00:09:04.860 "raid_level": "raid0", 00:09:04.860 "superblock": true, 00:09:04.860 "num_base_bdevs": 2, 00:09:04.860 "num_base_bdevs_discovered": 2, 00:09:04.860 "num_base_bdevs_operational": 2, 00:09:04.860 "base_bdevs_list": [ 00:09:04.860 { 00:09:04.860 "name": "BaseBdev1", 00:09:04.860 "uuid": "0e09e696-123c-11ef-8c90-4585f0cfab08", 00:09:04.860 "is_configured": true, 00:09:04.860 "data_offset": 2048, 00:09:04.860 "data_size": 63488 00:09:04.860 }, 00:09:04.860 { 00:09:04.860 "name": "BaseBdev2", 00:09:04.860 
"uuid": "0f912528-123c-11ef-8c90-4585f0cfab08", 00:09:04.860 "is_configured": true, 00:09:04.860 "data_offset": 2048, 00:09:04.860 "data_size": 63488 00:09:04.860 } 00:09:04.860 ] 00:09:04.860 } 00:09:04.860 } 00:09:04.860 }' 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.860 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:09:04.860 BaseBdev2' 00:09:04.861 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:04.861 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:04.861 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:05.119 "name": "BaseBdev1", 00:09:05.119 "aliases": [ 00:09:05.119 "0e09e696-123c-11ef-8c90-4585f0cfab08" 00:09:05.119 ], 00:09:05.119 "product_name": "Malloc disk", 00:09:05.119 "block_size": 512, 00:09:05.119 "num_blocks": 65536, 00:09:05.119 "uuid": "0e09e696-123c-11ef-8c90-4585f0cfab08", 00:09:05.119 "assigned_rate_limits": { 00:09:05.119 "rw_ios_per_sec": 0, 00:09:05.119 "rw_mbytes_per_sec": 0, 00:09:05.119 "r_mbytes_per_sec": 0, 00:09:05.119 "w_mbytes_per_sec": 0 00:09:05.119 }, 00:09:05.119 "claimed": true, 00:09:05.119 "claim_type": "exclusive_write", 00:09:05.119 "zoned": false, 00:09:05.119 "supported_io_types": { 00:09:05.119 "read": true, 00:09:05.119 "write": true, 00:09:05.119 "unmap": true, 00:09:05.119 "write_zeroes": true, 00:09:05.119 "flush": true, 00:09:05.119 "reset": true, 00:09:05.119 "compare": false, 00:09:05.119 "compare_and_write": false, 00:09:05.119 "abort": true, 00:09:05.119 "nvme_admin": false, 00:09:05.119 "nvme_io": false 00:09:05.119 }, 00:09:05.119 "memory_domains": [ 00:09:05.119 { 00:09:05.119 "dma_device_id": "system", 00:09:05.119 "dma_device_type": 1 00:09:05.119 }, 00:09:05.119 { 00:09:05.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.119 "dma_device_type": 2 00:09:05.119 } 00:09:05.119 ], 00:09:05.119 "driver_specific": {} 00:09:05.119 }' 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:05.119 
21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:05.119 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:05.686 "name": "BaseBdev2", 00:09:05.686 "aliases": [ 00:09:05.686 "0f912528-123c-11ef-8c90-4585f0cfab08" 00:09:05.686 ], 00:09:05.686 "product_name": "Malloc disk", 00:09:05.686 "block_size": 512, 00:09:05.686 "num_blocks": 65536, 00:09:05.686 "uuid": "0f912528-123c-11ef-8c90-4585f0cfab08", 00:09:05.686 "assigned_rate_limits": { 00:09:05.686 "rw_ios_per_sec": 0, 00:09:05.686 "rw_mbytes_per_sec": 0, 00:09:05.686 "r_mbytes_per_sec": 0, 00:09:05.686 "w_mbytes_per_sec": 0 00:09:05.686 }, 00:09:05.686 "claimed": true, 00:09:05.686 "claim_type": "exclusive_write", 00:09:05.686 "zoned": false, 00:09:05.686 "supported_io_types": { 00:09:05.686 "read": true, 00:09:05.686 "write": true, 00:09:05.686 "unmap": true, 00:09:05.686 "write_zeroes": true, 00:09:05.686 "flush": true, 00:09:05.686 "reset": true, 00:09:05.686 "compare": false, 00:09:05.686 "compare_and_write": false, 00:09:05.686 "abort": true, 00:09:05.686 "nvme_admin": false, 00:09:05.686 "nvme_io": false 00:09:05.686 }, 00:09:05.686 "memory_domains": [ 00:09:05.686 { 00:09:05.686 "dma_device_id": "system", 00:09:05.686 "dma_device_type": 1 00:09:05.686 }, 00:09:05.686 { 00:09:05.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.686 "dma_device_type": 2 00:09:05.686 } 00:09:05.686 ], 00:09:05.686 "driver_specific": {} 00:09:05.686 }' 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:05.686 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:05.686 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:05.686 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:05.686 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:05.686 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:05.686 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:05.686 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:05.686 [2024-05-14 21:51:06.264340] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.686 [2024-05-14 21:51:06.264371] 
bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.686 [2024-05-14 21:51:06.264386] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.944 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:05.944 "name": "Existed_Raid", 00:09:05.944 "uuid": "0f0b7965-123c-11ef-8c90-4585f0cfab08", 00:09:05.944 "strip_size_kb": 64, 00:09:05.944 "state": "offline", 00:09:05.944 "raid_level": "raid0", 00:09:05.944 "superblock": true, 00:09:05.944 "num_base_bdevs": 2, 00:09:05.944 "num_base_bdevs_discovered": 1, 00:09:05.944 "num_base_bdevs_operational": 1, 00:09:05.944 "base_bdevs_list": [ 00:09:05.944 { 00:09:05.944 "name": null, 00:09:05.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.944 "is_configured": false, 00:09:05.945 "data_offset": 2048, 00:09:05.945 "data_size": 63488 00:09:05.945 }, 00:09:05.945 { 00:09:05.945 "name": "BaseBdev2", 00:09:05.945 "uuid": "0f912528-123c-11ef-8c90-4585f0cfab08", 00:09:05.945 "is_configured": true, 00:09:05.945 "data_offset": 2048, 00:09:05.945 "data_size": 63488 00:09:05.945 } 00:09:05.945 ] 00:09:05.945 }' 00:09:05.945 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:05.945 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.510 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:06.510 
21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:06.510 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.510 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:09:06.768 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:09:06.768 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.768 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:06.768 [2024-05-14 21:51:07.342197] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:06.768 [2024-05-14 21:51:07.342233] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ded9300 name Existed_Raid, state offline 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 49282 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 49282 ']' 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 49282 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 49282 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:09:07.026 killing process with pid 49282 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49282' 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 49282 00:09:07.026 [2024-05-14 21:51:07.614410] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.026 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 49282 00:09:07.026 [2024-05-14 21:51:07.614446] bdev_raid.c:1375:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:09:07.284 21:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:09:07.284 00:09:07.284 real 0m9.254s 00:09:07.284 user 0m16.172s 00:09:07.284 sys 0m1.549s 00:09:07.284 ************************************ 00:09:07.284 END TEST raid_state_function_test_sb 00:09:07.284 ************************************ 00:09:07.284 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.284 21:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.284 21:51:07 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:07.284 21:51:07 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:07.284 21:51:07 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.284 21:51:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.284 ************************************ 00:09:07.284 START TEST raid_superblock_test 00:09:07.284 ************************************ 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=49556 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 49556 /var/tmp/spdk-raid.sock 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 49556 ']' 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
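Note on the harness: each test in this suite starts its own bdev_svc application on a private RPC socket and drives it with rpc.py, which is why every command in this log carries -s /var/tmp/spdk-raid.sock. A minimal sketch of that pattern, reusing the paths from this run (the rpc_get_methods probe is only an illustrative readiness check, not something the test itself issues):

  # launch the minimal bdev application on a dedicated UNIX-domain RPC socket;
  # -L bdev_raid enables the *DEBUG* raid traces seen throughout this log
  /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  svc_pid=$!
  # poll until the socket accepts RPCs, then drive it with rpc.py
  until /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done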
00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:07.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:07.284 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.284 [2024-05-14 21:51:07.848465] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:07.284 [2024-05-14 21:51:07.848732] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:07.850 EAL: TSC is not safe to use in SMP mode 00:09:07.850 EAL: TSC is not invariant 00:09:07.850 [2024-05-14 21:51:08.373698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.108 [2024-05-14 21:51:08.466524] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:08.108 [2024-05-14 21:51:08.468762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.108 [2024-05-14 21:51:08.469536] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.108 [2024-05-14 21:51:08.469552] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.367 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:08.933 malloc1 00:09:08.933 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.933 [2024-05-14 21:51:09.518182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.933 [2024-05-14 21:51:09.518250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.933 [2024-05-14 21:51:09.518876] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7b3780 00:09:08.933 [2024-05-14 21:51:09.518910] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.933 [2024-05-14 21:51:09.519800] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.933 [2024-05-14 21:51:09.519827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.933 pt1 00:09:09.191 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:09.191 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:09.191 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:09.191 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:09.191 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:09.191 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:09.191 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:09.192 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:09.192 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:09.451 malloc2 00:09:09.451 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:09.710 [2024-05-14 21:51:10.058193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:09.710 [2024-05-14 21:51:10.058262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.710 [2024-05-14 21:51:10.058290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7b3c80 00:09:09.710 [2024-05-14 21:51:10.058299] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.710 [2024-05-14 21:51:10.058970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.710 [2024-05-14 21:51:10.058999] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:09.710 pt2 00:09:09.710 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:09.710 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:09.710 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:09.968 [2024-05-14 21:51:10.306209] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:09.968 [2024-05-14 21:51:10.306802] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:09.968 [2024-05-14 21:51:10.306872] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b7b8300 00:09:09.968 [2024-05-14 21:51:10.306879] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:09.968 [2024-05-14 21:51:10.306913] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b816e20 00:09:09.968 [2024-05-14 21:51:10.306993] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b7b8300 
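The raid_bdev1 volume that has just come online was assembled entirely over RPC: two 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev with a fixed UUID, then combined into a raid0 array with a 64 KiB strip and an on-disk superblock. Condensed from the commands traced above (RPC= is shorthand introduced here for readability, not a variable the test defines):

  RPC='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $RPC bdev_malloc_create 32 512 -b malloc1      # 65536 blocks of 512 B, matching the JSON dumps above
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s   # -s writes the raid superblock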
00:09:09.968 [2024-05-14 21:51:10.306997] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b7b8300 00:09:09.968 [2024-05-14 21:51:10.307026] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.968 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.226 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:10.226 "name": "raid_bdev1", 00:09:10.226 "uuid": "134f13e6-123c-11ef-8c90-4585f0cfab08", 00:09:10.226 "strip_size_kb": 64, 00:09:10.226 "state": "online", 00:09:10.226 "raid_level": "raid0", 00:09:10.226 "superblock": true, 00:09:10.226 "num_base_bdevs": 2, 00:09:10.226 "num_base_bdevs_discovered": 2, 00:09:10.226 "num_base_bdevs_operational": 2, 00:09:10.226 "base_bdevs_list": [ 00:09:10.226 { 00:09:10.226 "name": "pt1", 00:09:10.226 "uuid": "5358c167-8129-ad51-b1d5-2bb201741405", 00:09:10.226 "is_configured": true, 00:09:10.226 "data_offset": 2048, 00:09:10.226 "data_size": 63488 00:09:10.226 }, 00:09:10.226 { 00:09:10.226 "name": "pt2", 00:09:10.226 "uuid": "79fb1ec4-6832-d856-95c6-f800edd1738f", 00:09:10.226 "is_configured": true, 00:09:10.226 "data_offset": 2048, 00:09:10.226 "data_size": 63488 00:09:10.226 } 00:09:10.226 ] 00:09:10.226 }' 00:09:10.226 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:10.226 21:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.484 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:10.485 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:09:10.485 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:10.485 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:10.485 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:10.485 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:09:10.485 21:51:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:10.485 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:10.744 [2024-05-14 21:51:11.206259] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.744 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:10.744 "name": "raid_bdev1", 00:09:10.744 "aliases": [ 00:09:10.744 "134f13e6-123c-11ef-8c90-4585f0cfab08" 00:09:10.744 ], 00:09:10.744 "product_name": "Raid Volume", 00:09:10.744 "block_size": 512, 00:09:10.744 "num_blocks": 126976, 00:09:10.744 "uuid": "134f13e6-123c-11ef-8c90-4585f0cfab08", 00:09:10.744 "assigned_rate_limits": { 00:09:10.744 "rw_ios_per_sec": 0, 00:09:10.744 "rw_mbytes_per_sec": 0, 00:09:10.744 "r_mbytes_per_sec": 0, 00:09:10.744 "w_mbytes_per_sec": 0 00:09:10.744 }, 00:09:10.744 "claimed": false, 00:09:10.744 "zoned": false, 00:09:10.744 "supported_io_types": { 00:09:10.744 "read": true, 00:09:10.744 "write": true, 00:09:10.744 "unmap": true, 00:09:10.744 "write_zeroes": true, 00:09:10.744 "flush": true, 00:09:10.744 "reset": true, 00:09:10.744 "compare": false, 00:09:10.744 "compare_and_write": false, 00:09:10.744 "abort": false, 00:09:10.744 "nvme_admin": false, 00:09:10.744 "nvme_io": false 00:09:10.744 }, 00:09:10.744 "memory_domains": [ 00:09:10.744 { 00:09:10.744 "dma_device_id": "system", 00:09:10.744 "dma_device_type": 1 00:09:10.744 }, 00:09:10.744 { 00:09:10.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.744 "dma_device_type": 2 00:09:10.744 }, 00:09:10.744 { 00:09:10.744 "dma_device_id": "system", 00:09:10.744 "dma_device_type": 1 00:09:10.744 }, 00:09:10.744 { 00:09:10.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.744 "dma_device_type": 2 00:09:10.744 } 00:09:10.744 ], 00:09:10.744 "driver_specific": { 00:09:10.744 "raid": { 00:09:10.744 "uuid": "134f13e6-123c-11ef-8c90-4585f0cfab08", 00:09:10.744 "strip_size_kb": 64, 00:09:10.744 "state": "online", 00:09:10.744 "raid_level": "raid0", 00:09:10.744 "superblock": true, 00:09:10.744 "num_base_bdevs": 2, 00:09:10.744 "num_base_bdevs_discovered": 2, 00:09:10.744 "num_base_bdevs_operational": 2, 00:09:10.744 "base_bdevs_list": [ 00:09:10.744 { 00:09:10.744 "name": "pt1", 00:09:10.744 "uuid": "5358c167-8129-ad51-b1d5-2bb201741405", 00:09:10.744 "is_configured": true, 00:09:10.744 "data_offset": 2048, 00:09:10.744 "data_size": 63488 00:09:10.744 }, 00:09:10.744 { 00:09:10.744 "name": "pt2", 00:09:10.744 "uuid": "79fb1ec4-6832-d856-95c6-f800edd1738f", 00:09:10.744 "is_configured": true, 00:09:10.744 "data_offset": 2048, 00:09:10.744 "data_size": 63488 00:09:10.744 } 00:09:10.744 ] 00:09:10.744 } 00:09:10.744 } 00:09:10.744 }' 00:09:10.744 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.744 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:09:10.744 pt2' 00:09:10.744 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:10.744 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:10.744 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:11.003 "name": "pt1", 00:09:11.003 "aliases": [ 00:09:11.003 "5358c167-8129-ad51-b1d5-2bb201741405" 00:09:11.003 ], 00:09:11.003 "product_name": "passthru", 00:09:11.003 "block_size": 512, 00:09:11.003 "num_blocks": 65536, 00:09:11.003 "uuid": "5358c167-8129-ad51-b1d5-2bb201741405", 00:09:11.003 "assigned_rate_limits": { 00:09:11.003 "rw_ios_per_sec": 0, 00:09:11.003 "rw_mbytes_per_sec": 0, 00:09:11.003 "r_mbytes_per_sec": 0, 00:09:11.003 "w_mbytes_per_sec": 0 00:09:11.003 }, 00:09:11.003 "claimed": true, 00:09:11.003 "claim_type": "exclusive_write", 00:09:11.003 "zoned": false, 00:09:11.003 "supported_io_types": { 00:09:11.003 "read": true, 00:09:11.003 "write": true, 00:09:11.003 "unmap": true, 00:09:11.003 "write_zeroes": true, 00:09:11.003 "flush": true, 00:09:11.003 "reset": true, 00:09:11.003 "compare": false, 00:09:11.003 "compare_and_write": false, 00:09:11.003 "abort": true, 00:09:11.003 "nvme_admin": false, 00:09:11.003 "nvme_io": false 00:09:11.003 }, 00:09:11.003 "memory_domains": [ 00:09:11.003 { 00:09:11.003 "dma_device_id": "system", 00:09:11.003 "dma_device_type": 1 00:09:11.003 }, 00:09:11.003 { 00:09:11.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.003 "dma_device_type": 2 00:09:11.003 } 00:09:11.003 ], 00:09:11.003 "driver_specific": { 00:09:11.003 "passthru": { 00:09:11.003 "name": "pt1", 00:09:11.003 "base_bdev_name": "malloc1" 00:09:11.003 } 00:09:11.003 } 00:09:11.003 }' 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:11.003 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:11.261 "name": "pt2", 00:09:11.261 "aliases": [ 00:09:11.261 "79fb1ec4-6832-d856-95c6-f800edd1738f" 00:09:11.261 ], 00:09:11.261 "product_name": "passthru", 00:09:11.261 "block_size": 512, 00:09:11.261 "num_blocks": 65536, 00:09:11.261 "uuid": "79fb1ec4-6832-d856-95c6-f800edd1738f", 00:09:11.261 "assigned_rate_limits": { 00:09:11.261 "rw_ios_per_sec": 0, 00:09:11.261 "rw_mbytes_per_sec": 0, 
00:09:11.261 "r_mbytes_per_sec": 0, 00:09:11.261 "w_mbytes_per_sec": 0 00:09:11.261 }, 00:09:11.261 "claimed": true, 00:09:11.261 "claim_type": "exclusive_write", 00:09:11.261 "zoned": false, 00:09:11.261 "supported_io_types": { 00:09:11.261 "read": true, 00:09:11.261 "write": true, 00:09:11.261 "unmap": true, 00:09:11.261 "write_zeroes": true, 00:09:11.261 "flush": true, 00:09:11.261 "reset": true, 00:09:11.261 "compare": false, 00:09:11.261 "compare_and_write": false, 00:09:11.261 "abort": true, 00:09:11.261 "nvme_admin": false, 00:09:11.261 "nvme_io": false 00:09:11.261 }, 00:09:11.261 "memory_domains": [ 00:09:11.261 { 00:09:11.261 "dma_device_id": "system", 00:09:11.261 "dma_device_type": 1 00:09:11.261 }, 00:09:11.261 { 00:09:11.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.261 "dma_device_type": 2 00:09:11.261 } 00:09:11.261 ], 00:09:11.261 "driver_specific": { 00:09:11.261 "passthru": { 00:09:11.261 "name": "pt2", 00:09:11.261 "base_bdev_name": "malloc2" 00:09:11.261 } 00:09:11.261 } 00:09:11.261 }' 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:11.261 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:11.520 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:11.520 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:11.520 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:11.520 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:11.520 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:11.520 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:11.520 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:11.779 [2024-05-14 21:51:12.114282] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.779 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=134f13e6-123c-11ef-8c90-4585f0cfab08 00:09:11.779 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 134f13e6-123c-11ef-8c90-4585f0cfab08 ']' 00:09:11.779 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:12.037 [2024-05-14 21:51:12.434246] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.037 [2024-05-14 21:51:12.434278] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.037 [2024-05-14 21:51:12.434304] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.038 [2024-05-14 21:51:12.434316] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:09:12.038 [2024-05-14 21:51:12.434321] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7b8300 name raid_bdev1, state offline 00:09:12.038 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.038 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:12.355 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:12.355 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:12.355 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.355 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:12.655 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.655 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:12.913 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:12.913 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:13.172 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'malloc1 malloc2' -n raid_bdev1 00:09:13.430 [2024-05-14 21:51:13.770288] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:13.430 [2024-05-14 21:51:13.770901] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:13.430 [2024-05-14 21:51:13.770927] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:13.430 [2024-05-14 21:51:13.770969] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:13.430 [2024-05-14 21:51:13.770981] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.430 [2024-05-14 21:51:13.770985] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7b8300 name raid_bdev1, state configuring 00:09:13.430 request: 00:09:13.430 { 00:09:13.430 "name": "raid_bdev1", 00:09:13.430 "raid_level": "raid0", 00:09:13.430 "base_bdevs": [ 00:09:13.430 "malloc1", 00:09:13.430 "malloc2" 00:09:13.430 ], 00:09:13.430 "superblock": false, 00:09:13.430 "strip_size_kb": 64, 00:09:13.430 "method": "bdev_raid_create", 00:09:13.430 "req_id": 1 00:09:13.430 } 00:09:13.430 Got JSON-RPC error response 00:09:13.430 response: 00:09:13.430 { 00:09:13.430 "code": -17, 00:09:13.430 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:13.430 } 00:09:13.430 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:13.430 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:13.430 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:13.430 21:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:13.430 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.430 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:13.688 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:13.688 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:13.688 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:13.688 [2024-05-14 21:51:14.274286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:13.688 [2024-05-14 21:51:14.274360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.688 [2024-05-14 21:51:14.274390] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7b3c80 00:09:13.688 [2024-05-14 21:51:14.274409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.688 [2024-05-14 21:51:14.275059] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.688 [2024-05-14 21:51:14.275087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:13.688 [2024-05-14 21:51:14.275114] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:13.688 [2024-05-14 21:51:14.275126] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:13.947 pt1 00:09:13.947 
21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.947 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.206 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:14.206 "name": "raid_bdev1", 00:09:14.206 "uuid": "134f13e6-123c-11ef-8c90-4585f0cfab08", 00:09:14.206 "strip_size_kb": 64, 00:09:14.206 "state": "configuring", 00:09:14.206 "raid_level": "raid0", 00:09:14.206 "superblock": true, 00:09:14.206 "num_base_bdevs": 2, 00:09:14.206 "num_base_bdevs_discovered": 1, 00:09:14.206 "num_base_bdevs_operational": 2, 00:09:14.206 "base_bdevs_list": [ 00:09:14.206 { 00:09:14.206 "name": "pt1", 00:09:14.206 "uuid": "5358c167-8129-ad51-b1d5-2bb201741405", 00:09:14.206 "is_configured": true, 00:09:14.206 "data_offset": 2048, 00:09:14.206 "data_size": 63488 00:09:14.206 }, 00:09:14.206 { 00:09:14.206 "name": null, 00:09:14.206 "uuid": "79fb1ec4-6832-d856-95c6-f800edd1738f", 00:09:14.206 "is_configured": false, 00:09:14.206 "data_offset": 2048, 00:09:14.206 "data_size": 63488 00:09:14.206 } 00:09:14.206 ] 00:09:14.206 }' 00:09:14.206 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:14.206 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.465 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:14.465 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:14.465 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.465 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.724 [2024-05-14 21:51:15.258310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.724 [2024-05-14 21:51:15.258383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.724 [2024-05-14 21:51:15.258423] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7b3f00 00:09:14.724 [2024-05-14 21:51:15.258433] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.724 [2024-05-14 21:51:15.258555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.724 [2024-05-14 21:51:15.258570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.724 [2024-05-14 21:51:15.258608] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:14.724 [2024-05-14 21:51:15.258618] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.724 [2024-05-14 21:51:15.258653] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b7b8300 00:09:14.724 [2024-05-14 21:51:15.258658] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:14.724 [2024-05-14 21:51:15.258679] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b816e20 00:09:14.724 [2024-05-14 21:51:15.258736] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b7b8300 00:09:14.724 [2024-05-14 21:51:15.258741] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b7b8300 00:09:14.724 [2024-05-14 21:51:15.258763] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.724 pt2 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.724 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.293 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:15.293 "name": "raid_bdev1", 00:09:15.293 "uuid": "134f13e6-123c-11ef-8c90-4585f0cfab08", 00:09:15.293 "strip_size_kb": 64, 00:09:15.293 "state": "online", 00:09:15.293 "raid_level": "raid0", 00:09:15.293 "superblock": true, 00:09:15.293 "num_base_bdevs": 2, 00:09:15.293 "num_base_bdevs_discovered": 2, 00:09:15.293 "num_base_bdevs_operational": 2, 00:09:15.293 "base_bdevs_list": [ 00:09:15.293 { 00:09:15.293 "name": "pt1", 00:09:15.293 "uuid": "5358c167-8129-ad51-b1d5-2bb201741405", 00:09:15.293 
"is_configured": true, 00:09:15.293 "data_offset": 2048, 00:09:15.293 "data_size": 63488 00:09:15.293 }, 00:09:15.293 { 00:09:15.293 "name": "pt2", 00:09:15.293 "uuid": "79fb1ec4-6832-d856-95c6-f800edd1738f", 00:09:15.293 "is_configured": true, 00:09:15.293 "data_offset": 2048, 00:09:15.293 "data_size": 63488 00:09:15.293 } 00:09:15.293 ] 00:09:15.293 }' 00:09:15.293 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:15.293 21:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:15.552 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:15.941 [2024-05-14 21:51:16.294358] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.941 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:15.941 "name": "raid_bdev1", 00:09:15.941 "aliases": [ 00:09:15.941 "134f13e6-123c-11ef-8c90-4585f0cfab08" 00:09:15.941 ], 00:09:15.941 "product_name": "Raid Volume", 00:09:15.941 "block_size": 512, 00:09:15.941 "num_blocks": 126976, 00:09:15.941 "uuid": "134f13e6-123c-11ef-8c90-4585f0cfab08", 00:09:15.941 "assigned_rate_limits": { 00:09:15.941 "rw_ios_per_sec": 0, 00:09:15.941 "rw_mbytes_per_sec": 0, 00:09:15.941 "r_mbytes_per_sec": 0, 00:09:15.941 "w_mbytes_per_sec": 0 00:09:15.941 }, 00:09:15.941 "claimed": false, 00:09:15.941 "zoned": false, 00:09:15.941 "supported_io_types": { 00:09:15.941 "read": true, 00:09:15.941 "write": true, 00:09:15.941 "unmap": true, 00:09:15.941 "write_zeroes": true, 00:09:15.941 "flush": true, 00:09:15.941 "reset": true, 00:09:15.941 "compare": false, 00:09:15.941 "compare_and_write": false, 00:09:15.941 "abort": false, 00:09:15.941 "nvme_admin": false, 00:09:15.941 "nvme_io": false 00:09:15.941 }, 00:09:15.941 "memory_domains": [ 00:09:15.941 { 00:09:15.941 "dma_device_id": "system", 00:09:15.941 "dma_device_type": 1 00:09:15.941 }, 00:09:15.941 { 00:09:15.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.941 "dma_device_type": 2 00:09:15.941 }, 00:09:15.941 { 00:09:15.941 "dma_device_id": "system", 00:09:15.941 "dma_device_type": 1 00:09:15.941 }, 00:09:15.941 { 00:09:15.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.941 "dma_device_type": 2 00:09:15.941 } 00:09:15.941 ], 00:09:15.942 "driver_specific": { 00:09:15.942 "raid": { 00:09:15.942 "uuid": "134f13e6-123c-11ef-8c90-4585f0cfab08", 00:09:15.942 "strip_size_kb": 64, 00:09:15.942 "state": "online", 00:09:15.942 "raid_level": "raid0", 00:09:15.942 "superblock": true, 00:09:15.942 "num_base_bdevs": 2, 00:09:15.942 "num_base_bdevs_discovered": 2, 00:09:15.942 "num_base_bdevs_operational": 2, 00:09:15.942 "base_bdevs_list": 
[ 00:09:15.942 { 00:09:15.942 "name": "pt1", 00:09:15.942 "uuid": "5358c167-8129-ad51-b1d5-2bb201741405", 00:09:15.942 "is_configured": true, 00:09:15.942 "data_offset": 2048, 00:09:15.942 "data_size": 63488 00:09:15.942 }, 00:09:15.942 { 00:09:15.942 "name": "pt2", 00:09:15.942 "uuid": "79fb1ec4-6832-d856-95c6-f800edd1738f", 00:09:15.942 "is_configured": true, 00:09:15.942 "data_offset": 2048, 00:09:15.942 "data_size": 63488 00:09:15.942 } 00:09:15.942 ] 00:09:15.942 } 00:09:15.942 } 00:09:15.942 }' 00:09:15.942 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.942 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:09:15.942 pt2' 00:09:15.942 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:15.942 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:15.942 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:16.201 "name": "pt1", 00:09:16.201 "aliases": [ 00:09:16.201 "5358c167-8129-ad51-b1d5-2bb201741405" 00:09:16.201 ], 00:09:16.201 "product_name": "passthru", 00:09:16.201 "block_size": 512, 00:09:16.201 "num_blocks": 65536, 00:09:16.201 "uuid": "5358c167-8129-ad51-b1d5-2bb201741405", 00:09:16.201 "assigned_rate_limits": { 00:09:16.201 "rw_ios_per_sec": 0, 00:09:16.201 "rw_mbytes_per_sec": 0, 00:09:16.201 "r_mbytes_per_sec": 0, 00:09:16.201 "w_mbytes_per_sec": 0 00:09:16.201 }, 00:09:16.201 "claimed": true, 00:09:16.201 "claim_type": "exclusive_write", 00:09:16.201 "zoned": false, 00:09:16.201 "supported_io_types": { 00:09:16.201 "read": true, 00:09:16.201 "write": true, 00:09:16.201 "unmap": true, 00:09:16.201 "write_zeroes": true, 00:09:16.201 "flush": true, 00:09:16.201 "reset": true, 00:09:16.201 "compare": false, 00:09:16.201 "compare_and_write": false, 00:09:16.201 "abort": true, 00:09:16.201 "nvme_admin": false, 00:09:16.201 "nvme_io": false 00:09:16.201 }, 00:09:16.201 "memory_domains": [ 00:09:16.201 { 00:09:16.201 "dma_device_id": "system", 00:09:16.201 "dma_device_type": 1 00:09:16.201 }, 00:09:16.201 { 00:09:16.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.201 "dma_device_type": 2 00:09:16.201 } 00:09:16.201 ], 00:09:16.201 "driver_specific": { 00:09:16.201 "passthru": { 00:09:16.201 "name": "pt1", 00:09:16.201 "base_bdev_name": "malloc1" 00:09:16.201 } 00:09:16.201 } 00:09:16.201 }' 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:16.201 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:16.202 21:51:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:16.202 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:16.202 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:16.202 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:16.202 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:16.202 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:16.202 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:16.463 "name": "pt2", 00:09:16.463 "aliases": [ 00:09:16.463 "79fb1ec4-6832-d856-95c6-f800edd1738f" 00:09:16.463 ], 00:09:16.463 "product_name": "passthru", 00:09:16.463 "block_size": 512, 00:09:16.463 "num_blocks": 65536, 00:09:16.463 "uuid": "79fb1ec4-6832-d856-95c6-f800edd1738f", 00:09:16.463 "assigned_rate_limits": { 00:09:16.463 "rw_ios_per_sec": 0, 00:09:16.463 "rw_mbytes_per_sec": 0, 00:09:16.463 "r_mbytes_per_sec": 0, 00:09:16.463 "w_mbytes_per_sec": 0 00:09:16.463 }, 00:09:16.463 "claimed": true, 00:09:16.463 "claim_type": "exclusive_write", 00:09:16.463 "zoned": false, 00:09:16.463 "supported_io_types": { 00:09:16.463 "read": true, 00:09:16.463 "write": true, 00:09:16.463 "unmap": true, 00:09:16.463 "write_zeroes": true, 00:09:16.463 "flush": true, 00:09:16.463 "reset": true, 00:09:16.463 "compare": false, 00:09:16.463 "compare_and_write": false, 00:09:16.463 "abort": true, 00:09:16.463 "nvme_admin": false, 00:09:16.463 "nvme_io": false 00:09:16.463 }, 00:09:16.463 "memory_domains": [ 00:09:16.463 { 00:09:16.463 "dma_device_id": "system", 00:09:16.463 "dma_device_type": 1 00:09:16.463 }, 00:09:16.463 { 00:09:16.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.463 "dma_device_type": 2 00:09:16.463 } 00:09:16.463 ], 00:09:16.463 "driver_specific": { 00:09:16.463 "passthru": { 00:09:16.463 "name": "pt2", 00:09:16.463 "base_bdev_name": "malloc2" 00:09:16.463 } 00:09:16.463 } 00:09:16.463 }' 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:16.463 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:16.722 [2024-05-14 21:51:17.230384] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 134f13e6-123c-11ef-8c90-4585f0cfab08 '!=' 134f13e6-123c-11ef-8c90-4585f0cfab08 ']' 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 49556 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 49556 ']' 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 49556 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 49556 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:09:16.722 killing process with pid 49556 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49556' 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 49556 00:09:16.722 [2024-05-14 21:51:17.263691] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.722 [2024-05-14 21:51:17.263726] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.722 [2024-05-14 21:51:17.263739] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.722 [2024-05-14 21:51:17.263743] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7b8300 name raid_bdev1, state offline 00:09:16.722 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 49556 00:09:16.722 [2024-05-14 21:51:17.275373] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.982 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:09:16.982 00:09:16.982 real 0m9.621s 00:09:16.982 user 0m16.794s 00:09:16.982 sys 0m1.665s 00:09:16.982 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:16.982 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.982 ************************************ 00:09:16.982 END TEST raid_superblock_test 00:09:16.982 ************************************ 00:09:16.982 21:51:17 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:09:16.982 21:51:17 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:16.982 21:51:17 
bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:16.982 21:51:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:16.982 21:51:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.982 ************************************ 00:09:16.982 START TEST raid_state_function_test 00:09:16.982 ************************************ 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:09:16.982 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=49823 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:16.983 Process raid pid: 49823 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 49823' 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # 
waitforlisten 49823 /var/tmp/spdk-raid.sock 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 49823 ']' 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:16.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:16.983 21:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.983 [2024-05-14 21:51:17.515515] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:16.983 [2024-05-14 21:51:17.515713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:17.550 EAL: TSC is not safe to use in SMP mode 00:09:17.550 EAL: TSC is not invariant 00:09:17.550 [2024-05-14 21:51:18.082108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.809 [2024-05-14 21:51:18.173824] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:17.809 [2024-05-14 21:51:18.176221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.809 [2024-05-14 21:51:18.177174] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.809 [2024-05-14 21:51:18.177207] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.067 21:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:18.067 21:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:09:18.067 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:18.326 [2024-05-14 21:51:18.801562] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.326 [2024-05-14 21:51:18.801623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.326 [2024-05-14 21:51:18.801628] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.326 [2024-05-14 21:51:18.801637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.326 21:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.585 21:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:18.585 "name": "Existed_Raid", 00:09:18.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.585 "strip_size_kb": 64, 00:09:18.585 "state": "configuring", 00:09:18.585 "raid_level": "concat", 00:09:18.585 "superblock": false, 00:09:18.585 "num_base_bdevs": 2, 00:09:18.585 "num_base_bdevs_discovered": 0, 00:09:18.585 "num_base_bdevs_operational": 2, 00:09:18.585 "base_bdevs_list": [ 00:09:18.585 { 00:09:18.585 "name": "BaseBdev1", 00:09:18.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.585 "is_configured": false, 00:09:18.585 "data_offset": 0, 00:09:18.585 "data_size": 0 00:09:18.585 }, 00:09:18.585 { 00:09:18.585 "name": "BaseBdev2", 00:09:18.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.585 "is_configured": false, 00:09:18.585 "data_offset": 0, 00:09:18.585 "data_size": 0 00:09:18.585 } 00:09:18.585 ] 00:09:18.585 }' 00:09:18.585 21:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:18.585 21:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.842 21:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:19.101 [2024-05-14 21:51:19.629573] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.101 [2024-05-14 21:51:19.629643] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc10300 name Existed_Raid, state configuring 00:09:19.101 21:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:19.362 [2024-05-14 21:51:19.913578] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.362 [2024-05-14 21:51:19.913654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.362 [2024-05-14 21:51:19.913659] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.362 [2024-05-14 21:51:19.913684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.362 21:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.620 [2024-05-14 21:51:20.186650] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.620 BaseBdev1 00:09:19.620 21:51:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:09:19.620 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:19.620 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:19.620 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:19.620 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:19.620 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:19.620 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.186 [ 00:09:20.186 { 00:09:20.186 "name": "BaseBdev1", 00:09:20.186 "aliases": [ 00:09:20.186 "19328dca-123c-11ef-8c90-4585f0cfab08" 00:09:20.186 ], 00:09:20.186 "product_name": "Malloc disk", 00:09:20.186 "block_size": 512, 00:09:20.186 "num_blocks": 65536, 00:09:20.186 "uuid": "19328dca-123c-11ef-8c90-4585f0cfab08", 00:09:20.186 "assigned_rate_limits": { 00:09:20.186 "rw_ios_per_sec": 0, 00:09:20.186 "rw_mbytes_per_sec": 0, 00:09:20.186 "r_mbytes_per_sec": 0, 00:09:20.186 "w_mbytes_per_sec": 0 00:09:20.186 }, 00:09:20.186 "claimed": true, 00:09:20.186 "claim_type": "exclusive_write", 00:09:20.186 "zoned": false, 00:09:20.186 "supported_io_types": { 00:09:20.186 "read": true, 00:09:20.186 "write": true, 00:09:20.186 "unmap": true, 00:09:20.186 "write_zeroes": true, 00:09:20.186 "flush": true, 00:09:20.186 "reset": true, 00:09:20.186 "compare": false, 00:09:20.186 "compare_and_write": false, 00:09:20.186 "abort": true, 00:09:20.186 "nvme_admin": false, 00:09:20.186 "nvme_io": false 00:09:20.186 }, 00:09:20.186 "memory_domains": [ 00:09:20.186 { 00:09:20.186 "dma_device_id": "system", 00:09:20.186 "dma_device_type": 1 00:09:20.186 }, 00:09:20.186 { 00:09:20.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.186 "dma_device_type": 2 00:09:20.186 } 00:09:20.186 ], 00:09:20.186 "driver_specific": {} 00:09:20.186 } 00:09:20.186 ] 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.186 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.444 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:20.444 "name": "Existed_Raid", 00:09:20.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.444 "strip_size_kb": 64, 00:09:20.444 "state": "configuring", 00:09:20.444 "raid_level": "concat", 00:09:20.444 "superblock": false, 00:09:20.444 "num_base_bdevs": 2, 00:09:20.444 "num_base_bdevs_discovered": 1, 00:09:20.444 "num_base_bdevs_operational": 2, 00:09:20.444 "base_bdevs_list": [ 00:09:20.444 { 00:09:20.444 "name": "BaseBdev1", 00:09:20.444 "uuid": "19328dca-123c-11ef-8c90-4585f0cfab08", 00:09:20.444 "is_configured": true, 00:09:20.444 "data_offset": 0, 00:09:20.444 "data_size": 65536 00:09:20.444 }, 00:09:20.444 { 00:09:20.444 "name": "BaseBdev2", 00:09:20.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.444 "is_configured": false, 00:09:20.444 "data_offset": 0, 00:09:20.444 "data_size": 0 00:09:20.444 } 00:09:20.444 ] 00:09:20.444 }' 00:09:20.444 21:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:20.444 21:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.703 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:20.961 [2024-05-14 21:51:21.457607] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.961 [2024-05-14 21:51:21.457640] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc10300 name Existed_Raid, state configuring 00:09:20.961 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:21.218 [2024-05-14 21:51:21.729624] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.218 [2024-05-14 21:51:21.730476] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.218 [2024-05-14 21:51:21.730518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.218 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:09:21.218 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:21.218 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:21.218 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:21.218 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:21.218 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:21.218 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:21.219 21:51:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:21.219 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:21.219 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:21.219 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:21.219 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:21.219 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.219 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.476 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:21.476 "name": "Existed_Raid", 00:09:21.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.476 "strip_size_kb": 64, 00:09:21.476 "state": "configuring", 00:09:21.476 "raid_level": "concat", 00:09:21.476 "superblock": false, 00:09:21.476 "num_base_bdevs": 2, 00:09:21.476 "num_base_bdevs_discovered": 1, 00:09:21.476 "num_base_bdevs_operational": 2, 00:09:21.476 "base_bdevs_list": [ 00:09:21.476 { 00:09:21.476 "name": "BaseBdev1", 00:09:21.476 "uuid": "19328dca-123c-11ef-8c90-4585f0cfab08", 00:09:21.476 "is_configured": true, 00:09:21.476 "data_offset": 0, 00:09:21.476 "data_size": 65536 00:09:21.476 }, 00:09:21.476 { 00:09:21.476 "name": "BaseBdev2", 00:09:21.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.476 "is_configured": false, 00:09:21.476 "data_offset": 0, 00:09:21.476 "data_size": 0 00:09:21.476 } 00:09:21.476 ] 00:09:21.476 }' 00:09:21.476 21:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:21.476 21:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 21:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.992 [2024-05-14 21:51:22.545787] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.992 [2024-05-14 21:51:22.545815] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bc10300 00:09:21.992 [2024-05-14 21:51:22.545820] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:21.992 [2024-05-14 21:51:22.545842] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bc6eec0 00:09:21.992 [2024-05-14 21:51:22.545935] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bc10300 00:09:21.992 [2024-05-14 21:51:22.545940] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bc10300 00:09:21.992 [2024-05-14 21:51:22.545982] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.992 BaseBdev2 00:09:21.992 21:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:09:21.992 21:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:21.992 21:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:21.992 21:51:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@897 -- # local i 00:09:21.992 21:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:21.992 21:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:21.992 21:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:22.249 21:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.507 [ 00:09:22.507 { 00:09:22.507 "name": "BaseBdev2", 00:09:22.507 "aliases": [ 00:09:22.507 "1a9aab8c-123c-11ef-8c90-4585f0cfab08" 00:09:22.507 ], 00:09:22.507 "product_name": "Malloc disk", 00:09:22.507 "block_size": 512, 00:09:22.507 "num_blocks": 65536, 00:09:22.507 "uuid": "1a9aab8c-123c-11ef-8c90-4585f0cfab08", 00:09:22.507 "assigned_rate_limits": { 00:09:22.507 "rw_ios_per_sec": 0, 00:09:22.507 "rw_mbytes_per_sec": 0, 00:09:22.507 "r_mbytes_per_sec": 0, 00:09:22.507 "w_mbytes_per_sec": 0 00:09:22.507 }, 00:09:22.507 "claimed": true, 00:09:22.507 "claim_type": "exclusive_write", 00:09:22.507 "zoned": false, 00:09:22.507 "supported_io_types": { 00:09:22.507 "read": true, 00:09:22.507 "write": true, 00:09:22.507 "unmap": true, 00:09:22.507 "write_zeroes": true, 00:09:22.507 "flush": true, 00:09:22.507 "reset": true, 00:09:22.507 "compare": false, 00:09:22.507 "compare_and_write": false, 00:09:22.507 "abort": true, 00:09:22.507 "nvme_admin": false, 00:09:22.507 "nvme_io": false 00:09:22.507 }, 00:09:22.507 "memory_domains": [ 00:09:22.507 { 00:09:22.507 "dma_device_id": "system", 00:09:22.507 "dma_device_type": 1 00:09:22.507 }, 00:09:22.507 { 00:09:22.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.507 "dma_device_type": 2 00:09:22.507 } 00:09:22.507 ], 00:09:22.507 "driver_specific": {} 00:09:22.507 } 00:09:22.507 ] 00:09:22.507 21:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:22.507 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:09:22.507 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:22.507 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.508 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.766 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:22.766 "name": "Existed_Raid", 00:09:22.766 "uuid": "1a9ab252-123c-11ef-8c90-4585f0cfab08", 00:09:22.766 "strip_size_kb": 64, 00:09:22.766 "state": "online", 00:09:22.766 "raid_level": "concat", 00:09:22.766 "superblock": false, 00:09:22.766 "num_base_bdevs": 2, 00:09:22.766 "num_base_bdevs_discovered": 2, 00:09:22.766 "num_base_bdevs_operational": 2, 00:09:22.766 "base_bdevs_list": [ 00:09:22.766 { 00:09:22.766 "name": "BaseBdev1", 00:09:22.766 "uuid": "19328dca-123c-11ef-8c90-4585f0cfab08", 00:09:22.766 "is_configured": true, 00:09:22.766 "data_offset": 0, 00:09:22.766 "data_size": 65536 00:09:22.766 }, 00:09:22.766 { 00:09:22.766 "name": "BaseBdev2", 00:09:22.766 "uuid": "1a9aab8c-123c-11ef-8c90-4585f0cfab08", 00:09:22.766 "is_configured": true, 00:09:22.766 "data_offset": 0, 00:09:22.766 "data_size": 65536 00:09:22.766 } 00:09:22.766 ] 00:09:22.766 }' 00:09:22.766 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:22.766 21:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:23.024 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:23.281 [2024-05-14 21:51:23.805696] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.281 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:23.281 "name": "Existed_Raid", 00:09:23.281 "aliases": [ 00:09:23.281 "1a9ab252-123c-11ef-8c90-4585f0cfab08" 00:09:23.281 ], 00:09:23.281 "product_name": "Raid Volume", 00:09:23.281 "block_size": 512, 00:09:23.281 "num_blocks": 131072, 00:09:23.281 "uuid": "1a9ab252-123c-11ef-8c90-4585f0cfab08", 00:09:23.281 "assigned_rate_limits": { 00:09:23.281 "rw_ios_per_sec": 0, 00:09:23.281 "rw_mbytes_per_sec": 0, 00:09:23.281 "r_mbytes_per_sec": 0, 00:09:23.281 "w_mbytes_per_sec": 0 00:09:23.281 }, 00:09:23.281 "claimed": false, 00:09:23.281 "zoned": false, 00:09:23.281 "supported_io_types": { 00:09:23.281 "read": true, 00:09:23.281 "write": true, 00:09:23.281 "unmap": true, 00:09:23.281 "write_zeroes": true, 00:09:23.281 "flush": true, 00:09:23.281 "reset": true, 00:09:23.281 "compare": false, 00:09:23.281 "compare_and_write": false, 00:09:23.281 "abort": false, 00:09:23.281 "nvme_admin": false, 00:09:23.281 
"nvme_io": false 00:09:23.281 }, 00:09:23.281 "memory_domains": [ 00:09:23.281 { 00:09:23.281 "dma_device_id": "system", 00:09:23.281 "dma_device_type": 1 00:09:23.281 }, 00:09:23.281 { 00:09:23.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.281 "dma_device_type": 2 00:09:23.281 }, 00:09:23.281 { 00:09:23.281 "dma_device_id": "system", 00:09:23.281 "dma_device_type": 1 00:09:23.281 }, 00:09:23.281 { 00:09:23.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.281 "dma_device_type": 2 00:09:23.281 } 00:09:23.281 ], 00:09:23.281 "driver_specific": { 00:09:23.281 "raid": { 00:09:23.281 "uuid": "1a9ab252-123c-11ef-8c90-4585f0cfab08", 00:09:23.281 "strip_size_kb": 64, 00:09:23.281 "state": "online", 00:09:23.281 "raid_level": "concat", 00:09:23.281 "superblock": false, 00:09:23.281 "num_base_bdevs": 2, 00:09:23.281 "num_base_bdevs_discovered": 2, 00:09:23.281 "num_base_bdevs_operational": 2, 00:09:23.281 "base_bdevs_list": [ 00:09:23.281 { 00:09:23.281 "name": "BaseBdev1", 00:09:23.281 "uuid": "19328dca-123c-11ef-8c90-4585f0cfab08", 00:09:23.281 "is_configured": true, 00:09:23.281 "data_offset": 0, 00:09:23.281 "data_size": 65536 00:09:23.281 }, 00:09:23.281 { 00:09:23.281 "name": "BaseBdev2", 00:09:23.281 "uuid": "1a9aab8c-123c-11ef-8c90-4585f0cfab08", 00:09:23.281 "is_configured": true, 00:09:23.281 "data_offset": 0, 00:09:23.281 "data_size": 65536 00:09:23.281 } 00:09:23.281 ] 00:09:23.281 } 00:09:23.281 } 00:09:23.281 }' 00:09:23.281 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.281 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:09:23.281 BaseBdev2' 00:09:23.281 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:23.281 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:23.281 21:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:23.538 "name": "BaseBdev1", 00:09:23.538 "aliases": [ 00:09:23.538 "19328dca-123c-11ef-8c90-4585f0cfab08" 00:09:23.538 ], 00:09:23.538 "product_name": "Malloc disk", 00:09:23.538 "block_size": 512, 00:09:23.538 "num_blocks": 65536, 00:09:23.538 "uuid": "19328dca-123c-11ef-8c90-4585f0cfab08", 00:09:23.538 "assigned_rate_limits": { 00:09:23.538 "rw_ios_per_sec": 0, 00:09:23.538 "rw_mbytes_per_sec": 0, 00:09:23.538 "r_mbytes_per_sec": 0, 00:09:23.538 "w_mbytes_per_sec": 0 00:09:23.538 }, 00:09:23.538 "claimed": true, 00:09:23.538 "claim_type": "exclusive_write", 00:09:23.538 "zoned": false, 00:09:23.538 "supported_io_types": { 00:09:23.538 "read": true, 00:09:23.538 "write": true, 00:09:23.538 "unmap": true, 00:09:23.538 "write_zeroes": true, 00:09:23.538 "flush": true, 00:09:23.538 "reset": true, 00:09:23.538 "compare": false, 00:09:23.538 "compare_and_write": false, 00:09:23.538 "abort": true, 00:09:23.538 "nvme_admin": false, 00:09:23.538 "nvme_io": false 00:09:23.538 }, 00:09:23.538 "memory_domains": [ 00:09:23.538 { 00:09:23.538 "dma_device_id": "system", 00:09:23.538 "dma_device_type": 1 00:09:23.538 }, 00:09:23.538 { 00:09:23.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.538 "dma_device_type": 2 00:09:23.538 } 00:09:23.538 ], 00:09:23.538 
"driver_specific": {} 00:09:23.538 }' 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:23.538 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:23.539 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:23.539 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:23.539 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:23.797 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:23.797 "name": "BaseBdev2", 00:09:23.797 "aliases": [ 00:09:23.797 "1a9aab8c-123c-11ef-8c90-4585f0cfab08" 00:09:23.797 ], 00:09:23.797 "product_name": "Malloc disk", 00:09:23.797 "block_size": 512, 00:09:23.797 "num_blocks": 65536, 00:09:23.797 "uuid": "1a9aab8c-123c-11ef-8c90-4585f0cfab08", 00:09:23.797 "assigned_rate_limits": { 00:09:23.797 "rw_ios_per_sec": 0, 00:09:23.797 "rw_mbytes_per_sec": 0, 00:09:23.797 "r_mbytes_per_sec": 0, 00:09:23.797 "w_mbytes_per_sec": 0 00:09:23.797 }, 00:09:23.797 "claimed": true, 00:09:23.797 "claim_type": "exclusive_write", 00:09:23.797 "zoned": false, 00:09:23.797 "supported_io_types": { 00:09:23.797 "read": true, 00:09:23.797 "write": true, 00:09:23.797 "unmap": true, 00:09:23.797 "write_zeroes": true, 00:09:23.797 "flush": true, 00:09:23.797 "reset": true, 00:09:23.797 "compare": false, 00:09:23.797 "compare_and_write": false, 00:09:23.797 "abort": true, 00:09:23.797 "nvme_admin": false, 00:09:23.797 "nvme_io": false 00:09:23.797 }, 00:09:23.797 "memory_domains": [ 00:09:23.797 { 00:09:23.797 "dma_device_id": "system", 00:09:23.797 "dma_device_type": 1 00:09:23.797 }, 00:09:23.797 { 00:09:23.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.797 "dma_device_type": 2 00:09:23.797 } 00:09:23.797 ], 00:09:23.797 "driver_specific": {} 00:09:23.797 }' 00:09:23.797 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:23.797 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:23.797 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:23.797 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:23.797 21:51:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:23.797 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:23.797 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:24.054 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:24.054 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:24.054 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:24.054 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:24.054 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:24.054 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:24.313 [2024-05-14 21:51:24.681709] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.313 [2024-05-14 21:51:24.681745] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.313 [2024-05-14 21:51:24.681760] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.313 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.572 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:24.572 "name": "Existed_Raid", 00:09:24.572 "uuid": "1a9ab252-123c-11ef-8c90-4585f0cfab08", 
00:09:24.572 "strip_size_kb": 64, 00:09:24.572 "state": "offline", 00:09:24.572 "raid_level": "concat", 00:09:24.572 "superblock": false, 00:09:24.572 "num_base_bdevs": 2, 00:09:24.572 "num_base_bdevs_discovered": 1, 00:09:24.572 "num_base_bdevs_operational": 1, 00:09:24.572 "base_bdevs_list": [ 00:09:24.572 { 00:09:24.572 "name": null, 00:09:24.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.572 "is_configured": false, 00:09:24.572 "data_offset": 0, 00:09:24.572 "data_size": 65536 00:09:24.572 }, 00:09:24.572 { 00:09:24.572 "name": "BaseBdev2", 00:09:24.572 "uuid": "1a9aab8c-123c-11ef-8c90-4585f0cfab08", 00:09:24.572 "is_configured": true, 00:09:24.572 "data_offset": 0, 00:09:24.572 "data_size": 65536 00:09:24.572 } 00:09:24.572 ] 00:09:24.572 }' 00:09:24.572 21:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:24.572 21:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.831 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:24.831 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.831 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.831 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:09:25.090 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:09:25.090 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.090 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:25.347 [2024-05-14 21:51:25.755670] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.347 [2024-05-14 21:51:25.755705] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc10300 name Existed_Raid, state offline 00:09:25.347 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.347 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.348 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.348 21:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 49823 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 49823 ']' 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 49823 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:25.605 21:51:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 49823 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:09:25.605 killing process with pid 49823 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49823' 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 49823 00:09:25.605 [2024-05-14 21:51:26.048657] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.605 [2024-05-14 21:51:26.048692] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.605 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 49823 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:09:25.863 00:09:25.863 real 0m8.724s 00:09:25.863 user 0m15.064s 00:09:25.863 sys 0m1.611s 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 ************************************ 00:09:25.863 END TEST raid_state_function_test 00:09:25.863 ************************************ 00:09:25.863 21:51:26 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:25.863 21:51:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:25.863 21:51:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:25.863 21:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 ************************************ 00:09:25.863 START TEST raid_state_function_test_sb 00:09:25.863 ************************************ 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= 
num_base_bdevs )) 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:09:25.863 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=50094 00:09:25.864 Process raid pid: 50094 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50094' 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 50094 /var/tmp/spdk-raid.sock 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 50094 ']' 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:25.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:25.864 21:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.864 [2024-05-14 21:51:26.290062] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:25.864 [2024-05-14 21:51:26.290321] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:26.429 EAL: TSC is not safe to use in SMP mode 00:09:26.429 EAL: TSC is not invariant 00:09:26.429 [2024-05-14 21:51:26.860064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.429 [2024-05-14 21:51:26.950493] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:26.429 [2024-05-14 21:51:26.952713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.429 [2024-05-14 21:51:26.953459] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.429 [2024-05-14 21:51:26.953475] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.995 21:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:26.995 21:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:09:26.995 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:27.252 [2024-05-14 21:51:27.613728] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.252 [2024-05-14 21:51:27.613795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.252 [2024-05-14 21:51:27.613801] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.252 [2024-05-14 21:51:27.613810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.252 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.546 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:27.546 "name": "Existed_Raid", 00:09:27.546 "uuid": "1d9ffefe-123c-11ef-8c90-4585f0cfab08", 00:09:27.546 "strip_size_kb": 64, 00:09:27.546 "state": "configuring", 00:09:27.546 "raid_level": "concat", 00:09:27.546 "superblock": true, 00:09:27.546 "num_base_bdevs": 2, 00:09:27.546 "num_base_bdevs_discovered": 0, 00:09:27.546 "num_base_bdevs_operational": 2, 00:09:27.546 "base_bdevs_list": [ 00:09:27.546 { 00:09:27.546 "name": "BaseBdev1", 00:09:27.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.546 "is_configured": false, 00:09:27.546 "data_offset": 0, 00:09:27.546 "data_size": 0 
00:09:27.546 }, 00:09:27.546 { 00:09:27.546 "name": "BaseBdev2", 00:09:27.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.546 "is_configured": false, 00:09:27.546 "data_offset": 0, 00:09:27.546 "data_size": 0 00:09:27.546 } 00:09:27.546 ] 00:09:27.546 }' 00:09:27.546 21:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:27.546 21:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.805 21:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:28.063 [2024-05-14 21:51:28.417711] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.063 [2024-05-14 21:51:28.417742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c160300 name Existed_Raid, state configuring 00:09:28.063 21:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:28.321 [2024-05-14 21:51:28.693745] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.321 [2024-05-14 21:51:28.693810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.321 [2024-05-14 21:51:28.693816] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.321 [2024-05-14 21:51:28.693826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.321 21:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.581 [2024-05-14 21:51:28.994845] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.581 BaseBdev1 00:09:28.581 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:09:28.581 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:28.581 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:28.581 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:28.581 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:28.581 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:28.581 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:28.840 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.098 [ 00:09:29.098 { 00:09:29.098 "name": "BaseBdev1", 00:09:29.098 "aliases": [ 00:09:29.098 "1e7292eb-123c-11ef-8c90-4585f0cfab08" 00:09:29.098 ], 00:09:29.098 "product_name": "Malloc disk", 00:09:29.098 "block_size": 512, 00:09:29.098 "num_blocks": 65536, 00:09:29.098 "uuid": "1e7292eb-123c-11ef-8c90-4585f0cfab08", 00:09:29.098 "assigned_rate_limits": { 00:09:29.098 "rw_ios_per_sec": 0, 
00:09:29.098 "rw_mbytes_per_sec": 0, 00:09:29.098 "r_mbytes_per_sec": 0, 00:09:29.098 "w_mbytes_per_sec": 0 00:09:29.098 }, 00:09:29.098 "claimed": true, 00:09:29.098 "claim_type": "exclusive_write", 00:09:29.098 "zoned": false, 00:09:29.098 "supported_io_types": { 00:09:29.098 "read": true, 00:09:29.098 "write": true, 00:09:29.098 "unmap": true, 00:09:29.098 "write_zeroes": true, 00:09:29.098 "flush": true, 00:09:29.098 "reset": true, 00:09:29.098 "compare": false, 00:09:29.098 "compare_and_write": false, 00:09:29.098 "abort": true, 00:09:29.098 "nvme_admin": false, 00:09:29.098 "nvme_io": false 00:09:29.098 }, 00:09:29.098 "memory_domains": [ 00:09:29.098 { 00:09:29.098 "dma_device_id": "system", 00:09:29.098 "dma_device_type": 1 00:09:29.098 }, 00:09:29.098 { 00:09:29.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.098 "dma_device_type": 2 00:09:29.098 } 00:09:29.098 ], 00:09:29.098 "driver_specific": {} 00:09:29.098 } 00:09:29.098 ] 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.098 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.355 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:29.356 "name": "Existed_Raid", 00:09:29.356 "uuid": "1e44cb17-123c-11ef-8c90-4585f0cfab08", 00:09:29.356 "strip_size_kb": 64, 00:09:29.356 "state": "configuring", 00:09:29.356 "raid_level": "concat", 00:09:29.356 "superblock": true, 00:09:29.356 "num_base_bdevs": 2, 00:09:29.356 "num_base_bdevs_discovered": 1, 00:09:29.356 "num_base_bdevs_operational": 2, 00:09:29.356 "base_bdevs_list": [ 00:09:29.356 { 00:09:29.356 "name": "BaseBdev1", 00:09:29.356 "uuid": "1e7292eb-123c-11ef-8c90-4585f0cfab08", 00:09:29.356 "is_configured": true, 00:09:29.356 "data_offset": 2048, 00:09:29.356 "data_size": 63488 00:09:29.356 }, 00:09:29.356 { 00:09:29.356 "name": "BaseBdev2", 00:09:29.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.356 "is_configured": false, 00:09:29.356 "data_offset": 0, 00:09:29.356 "data_size": 0 00:09:29.356 } 00:09:29.356 ] 
00:09:29.356 }' 00:09:29.356 21:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:29.356 21:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.613 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:29.871 [2024-05-14 21:51:30.381802] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.871 [2024-05-14 21:51:30.381837] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c160300 name Existed_Raid, state configuring 00:09:29.871 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:30.129 [2024-05-14 21:51:30.661827] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.129 [2024-05-14 21:51:30.662624] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.129 [2024-05-14 21:51:30.662666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.129 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.387 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:30.387 "name": "Existed_Raid", 00:09:30.387 "uuid": "1f711943-123c-11ef-8c90-4585f0cfab08", 00:09:30.387 "strip_size_kb": 64, 00:09:30.387 "state": "configuring", 00:09:30.387 "raid_level": "concat", 00:09:30.387 "superblock": true, 00:09:30.387 "num_base_bdevs": 2, 00:09:30.387 "num_base_bdevs_discovered": 1, 00:09:30.387 "num_base_bdevs_operational": 2, 00:09:30.387 
"base_bdevs_list": [ 00:09:30.387 { 00:09:30.387 "name": "BaseBdev1", 00:09:30.387 "uuid": "1e7292eb-123c-11ef-8c90-4585f0cfab08", 00:09:30.387 "is_configured": true, 00:09:30.387 "data_offset": 2048, 00:09:30.387 "data_size": 63488 00:09:30.387 }, 00:09:30.387 { 00:09:30.387 "name": "BaseBdev2", 00:09:30.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.387 "is_configured": false, 00:09:30.387 "data_offset": 0, 00:09:30.387 "data_size": 0 00:09:30.387 } 00:09:30.387 ] 00:09:30.387 }' 00:09:30.387 21:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:30.387 21:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.953 21:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:31.211 [2024-05-14 21:51:31.553983] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.212 [2024-05-14 21:51:31.554044] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c160300 00:09:31.212 [2024-05-14 21:51:31.554050] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:31.212 [2024-05-14 21:51:31.554070] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c1beec0 00:09:31.212 [2024-05-14 21:51:31.554116] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c160300 00:09:31.212 [2024-05-14 21:51:31.554121] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c160300 00:09:31.212 [2024-05-14 21:51:31.554142] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.212 BaseBdev2 00:09:31.212 21:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:09:31.212 21:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:31.212 21:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:31.212 21:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:31.212 21:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:31.212 21:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:31.212 21:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:31.471 21:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:31.735 [ 00:09:31.735 { 00:09:31.735 "name": "BaseBdev2", 00:09:31.735 "aliases": [ 00:09:31.735 "1ff93676-123c-11ef-8c90-4585f0cfab08" 00:09:31.735 ], 00:09:31.735 "product_name": "Malloc disk", 00:09:31.735 "block_size": 512, 00:09:31.735 "num_blocks": 65536, 00:09:31.735 "uuid": "1ff93676-123c-11ef-8c90-4585f0cfab08", 00:09:31.735 "assigned_rate_limits": { 00:09:31.735 "rw_ios_per_sec": 0, 00:09:31.735 "rw_mbytes_per_sec": 0, 00:09:31.735 "r_mbytes_per_sec": 0, 00:09:31.735 "w_mbytes_per_sec": 0 00:09:31.735 }, 00:09:31.735 "claimed": true, 00:09:31.735 "claim_type": "exclusive_write", 00:09:31.735 "zoned": false, 
00:09:31.735 "supported_io_types": { 00:09:31.735 "read": true, 00:09:31.735 "write": true, 00:09:31.735 "unmap": true, 00:09:31.735 "write_zeroes": true, 00:09:31.735 "flush": true, 00:09:31.735 "reset": true, 00:09:31.735 "compare": false, 00:09:31.735 "compare_and_write": false, 00:09:31.735 "abort": true, 00:09:31.735 "nvme_admin": false, 00:09:31.735 "nvme_io": false 00:09:31.735 }, 00:09:31.735 "memory_domains": [ 00:09:31.735 { 00:09:31.735 "dma_device_id": "system", 00:09:31.735 "dma_device_type": 1 00:09:31.735 }, 00:09:31.735 { 00:09:31.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.735 "dma_device_type": 2 00:09:31.735 } 00:09:31.735 ], 00:09:31.735 "driver_specific": {} 00:09:31.735 } 00:09:31.735 ] 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.735 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.992 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:31.992 "name": "Existed_Raid", 00:09:31.992 "uuid": "1f711943-123c-11ef-8c90-4585f0cfab08", 00:09:31.992 "strip_size_kb": 64, 00:09:31.992 "state": "online", 00:09:31.992 "raid_level": "concat", 00:09:31.992 "superblock": true, 00:09:31.992 "num_base_bdevs": 2, 00:09:31.992 "num_base_bdevs_discovered": 2, 00:09:31.992 "num_base_bdevs_operational": 2, 00:09:31.992 "base_bdevs_list": [ 00:09:31.992 { 00:09:31.992 "name": "BaseBdev1", 00:09:31.992 "uuid": "1e7292eb-123c-11ef-8c90-4585f0cfab08", 00:09:31.992 "is_configured": true, 00:09:31.992 "data_offset": 2048, 00:09:31.992 "data_size": 63488 00:09:31.992 }, 00:09:31.992 { 00:09:31.992 "name": "BaseBdev2", 00:09:31.992 "uuid": "1ff93676-123c-11ef-8c90-4585f0cfab08", 00:09:31.992 "is_configured": true, 00:09:31.992 "data_offset": 2048, 00:09:31.992 "data_size": 63488 00:09:31.992 } 00:09:31.992 ] 00:09:31.992 }' 
00:09:31.992 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:31.992 21:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:32.250 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:32.507 [2024-05-14 21:51:32.913947] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.507 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:32.507 "name": "Existed_Raid", 00:09:32.507 "aliases": [ 00:09:32.507 "1f711943-123c-11ef-8c90-4585f0cfab08" 00:09:32.507 ], 00:09:32.507 "product_name": "Raid Volume", 00:09:32.507 "block_size": 512, 00:09:32.507 "num_blocks": 126976, 00:09:32.507 "uuid": "1f711943-123c-11ef-8c90-4585f0cfab08", 00:09:32.507 "assigned_rate_limits": { 00:09:32.507 "rw_ios_per_sec": 0, 00:09:32.507 "rw_mbytes_per_sec": 0, 00:09:32.507 "r_mbytes_per_sec": 0, 00:09:32.507 "w_mbytes_per_sec": 0 00:09:32.507 }, 00:09:32.507 "claimed": false, 00:09:32.507 "zoned": false, 00:09:32.507 "supported_io_types": { 00:09:32.507 "read": true, 00:09:32.507 "write": true, 00:09:32.507 "unmap": true, 00:09:32.507 "write_zeroes": true, 00:09:32.507 "flush": true, 00:09:32.507 "reset": true, 00:09:32.507 "compare": false, 00:09:32.507 "compare_and_write": false, 00:09:32.507 "abort": false, 00:09:32.507 "nvme_admin": false, 00:09:32.507 "nvme_io": false 00:09:32.507 }, 00:09:32.507 "memory_domains": [ 00:09:32.507 { 00:09:32.507 "dma_device_id": "system", 00:09:32.507 "dma_device_type": 1 00:09:32.507 }, 00:09:32.507 { 00:09:32.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.507 "dma_device_type": 2 00:09:32.507 }, 00:09:32.507 { 00:09:32.507 "dma_device_id": "system", 00:09:32.507 "dma_device_type": 1 00:09:32.507 }, 00:09:32.507 { 00:09:32.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.507 "dma_device_type": 2 00:09:32.507 } 00:09:32.507 ], 00:09:32.507 "driver_specific": { 00:09:32.507 "raid": { 00:09:32.507 "uuid": "1f711943-123c-11ef-8c90-4585f0cfab08", 00:09:32.507 "strip_size_kb": 64, 00:09:32.507 "state": "online", 00:09:32.507 "raid_level": "concat", 00:09:32.507 "superblock": true, 00:09:32.507 "num_base_bdevs": 2, 00:09:32.507 "num_base_bdevs_discovered": 2, 00:09:32.507 "num_base_bdevs_operational": 2, 00:09:32.507 "base_bdevs_list": [ 00:09:32.507 { 00:09:32.507 "name": "BaseBdev1", 00:09:32.507 "uuid": "1e7292eb-123c-11ef-8c90-4585f0cfab08", 00:09:32.507 "is_configured": true, 00:09:32.507 "data_offset": 2048, 00:09:32.507 "data_size": 63488 00:09:32.507 }, 00:09:32.507 { 00:09:32.507 "name": 
"BaseBdev2", 00:09:32.507 "uuid": "1ff93676-123c-11ef-8c90-4585f0cfab08", 00:09:32.507 "is_configured": true, 00:09:32.507 "data_offset": 2048, 00:09:32.507 "data_size": 63488 00:09:32.507 } 00:09:32.507 ] 00:09:32.507 } 00:09:32.507 } 00:09:32.507 }' 00:09:32.507 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.507 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:09:32.507 BaseBdev2' 00:09:32.507 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:32.507 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:32.507 21:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:32.765 "name": "BaseBdev1", 00:09:32.765 "aliases": [ 00:09:32.765 "1e7292eb-123c-11ef-8c90-4585f0cfab08" 00:09:32.765 ], 00:09:32.765 "product_name": "Malloc disk", 00:09:32.765 "block_size": 512, 00:09:32.765 "num_blocks": 65536, 00:09:32.765 "uuid": "1e7292eb-123c-11ef-8c90-4585f0cfab08", 00:09:32.765 "assigned_rate_limits": { 00:09:32.765 "rw_ios_per_sec": 0, 00:09:32.765 "rw_mbytes_per_sec": 0, 00:09:32.765 "r_mbytes_per_sec": 0, 00:09:32.765 "w_mbytes_per_sec": 0 00:09:32.765 }, 00:09:32.765 "claimed": true, 00:09:32.765 "claim_type": "exclusive_write", 00:09:32.765 "zoned": false, 00:09:32.765 "supported_io_types": { 00:09:32.765 "read": true, 00:09:32.765 "write": true, 00:09:32.765 "unmap": true, 00:09:32.765 "write_zeroes": true, 00:09:32.765 "flush": true, 00:09:32.765 "reset": true, 00:09:32.765 "compare": false, 00:09:32.765 "compare_and_write": false, 00:09:32.765 "abort": true, 00:09:32.765 "nvme_admin": false, 00:09:32.765 "nvme_io": false 00:09:32.765 }, 00:09:32.765 "memory_domains": [ 00:09:32.765 { 00:09:32.765 "dma_device_id": "system", 00:09:32.765 "dma_device_type": 1 00:09:32.765 }, 00:09:32.765 { 00:09:32.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.765 "dma_device_type": 2 00:09:32.765 } 00:09:32.765 ], 00:09:32.765 "driver_specific": {} 00:09:32.765 }' 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq 
.dif_type 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:32.765 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:33.030 "name": "BaseBdev2", 00:09:33.030 "aliases": [ 00:09:33.030 "1ff93676-123c-11ef-8c90-4585f0cfab08" 00:09:33.030 ], 00:09:33.030 "product_name": "Malloc disk", 00:09:33.030 "block_size": 512, 00:09:33.030 "num_blocks": 65536, 00:09:33.030 "uuid": "1ff93676-123c-11ef-8c90-4585f0cfab08", 00:09:33.030 "assigned_rate_limits": { 00:09:33.030 "rw_ios_per_sec": 0, 00:09:33.030 "rw_mbytes_per_sec": 0, 00:09:33.030 "r_mbytes_per_sec": 0, 00:09:33.030 "w_mbytes_per_sec": 0 00:09:33.030 }, 00:09:33.030 "claimed": true, 00:09:33.030 "claim_type": "exclusive_write", 00:09:33.030 "zoned": false, 00:09:33.030 "supported_io_types": { 00:09:33.030 "read": true, 00:09:33.030 "write": true, 00:09:33.030 "unmap": true, 00:09:33.030 "write_zeroes": true, 00:09:33.030 "flush": true, 00:09:33.030 "reset": true, 00:09:33.030 "compare": false, 00:09:33.030 "compare_and_write": false, 00:09:33.030 "abort": true, 00:09:33.030 "nvme_admin": false, 00:09:33.030 "nvme_io": false 00:09:33.030 }, 00:09:33.030 "memory_domains": [ 00:09:33.030 { 00:09:33.030 "dma_device_id": "system", 00:09:33.030 "dma_device_type": 1 00:09:33.030 }, 00:09:33.030 { 00:09:33.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.030 "dma_device_type": 2 00:09:33.030 } 00:09:33.030 ], 00:09:33.030 "driver_specific": {} 00:09:33.030 }' 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:33.030 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:33.287 [2024-05-14 21:51:33.797969] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.287 [2024-05-14 
21:51:33.797996] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.287 [2024-05-14 21:51:33.798011] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:33.287 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:33.288 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:33.288 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.288 21:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.545 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:33.545 "name": "Existed_Raid", 00:09:33.545 "uuid": "1f711943-123c-11ef-8c90-4585f0cfab08", 00:09:33.545 "strip_size_kb": 64, 00:09:33.545 "state": "offline", 00:09:33.545 "raid_level": "concat", 00:09:33.545 "superblock": true, 00:09:33.545 "num_base_bdevs": 2, 00:09:33.545 "num_base_bdevs_discovered": 1, 00:09:33.545 "num_base_bdevs_operational": 1, 00:09:33.545 "base_bdevs_list": [ 00:09:33.545 { 00:09:33.545 "name": null, 00:09:33.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.545 "is_configured": false, 00:09:33.545 "data_offset": 2048, 00:09:33.545 "data_size": 63488 00:09:33.545 }, 00:09:33.545 { 00:09:33.545 "name": "BaseBdev2", 00:09:33.545 "uuid": "1ff93676-123c-11ef-8c90-4585f0cfab08", 00:09:33.545 "is_configured": true, 00:09:33.545 "data_offset": 2048, 00:09:33.545 "data_size": 63488 00:09:33.545 } 00:09:33.545 ] 00:09:33.545 }' 00:09:33.545 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:33.545 21:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.110 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 
)) 00:09:34.110 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.110 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.110 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:09:34.110 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:09:34.110 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.110 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:34.368 [2024-05-14 21:51:34.903812] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.368 [2024-05-14 21:51:34.903841] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c160300 name Existed_Raid, state offline 00:09:34.368 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.368 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.368 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.368 21:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 50094 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 50094 ']' 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 50094 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 50094 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:09:34.933 killing process with pid 50094 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50094' 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 50094 00:09:34.933 [2024-05-14 21:51:35.245621] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.933 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 50094 00:09:34.934 [2024-05-14 21:51:35.245666] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.934 21:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:09:34.934 00:09:34.934 real 0m9.147s 00:09:34.934 user 0m16.005s 00:09:34.934 sys 0m1.528s 00:09:34.934 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.934 21:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.934 ************************************ 00:09:34.934 END TEST raid_state_function_test_sb 00:09:34.934 ************************************ 00:09:34.934 21:51:35 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:34.934 21:51:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:34.934 21:51:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.934 21:51:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.934 ************************************ 00:09:34.934 START TEST raid_superblock_test 00:09:34.934 ************************************ 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=50368 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 50368 /var/tmp/spdk-raid.sock 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 50368 ']' 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:34.934 21:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.934 [2024-05-14 21:51:35.471644] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:34.934 [2024-05-14 21:51:35.471850] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:35.500 EAL: TSC is not safe to use in SMP mode 00:09:35.500 EAL: TSC is not invariant 00:09:35.500 [2024-05-14 21:51:36.008514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.758 [2024-05-14 21:51:36.096402] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:35.758 [2024-05-14 21:51:36.098645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.758 [2024-05-14 21:51:36.099411] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.758 [2024-05-14 21:51:36.099426] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:36.016 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:36.274 malloc1 00:09:36.274 21:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:36.532 [2024-05-14 21:51:36.996336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:36.532 [2024-05-14 21:51:36.996401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.532 [2024-05-14 21:51:36.997006] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b50e780 00:09:36.532 
[2024-05-14 21:51:36.997043] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.532 [2024-05-14 21:51:36.997892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.532 [2024-05-14 21:51:36.997919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:36.532 pt1 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:36.532 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:36.790 malloc2 00:09:36.790 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.048 [2024-05-14 21:51:37.456335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.048 [2024-05-14 21:51:37.456434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.048 [2024-05-14 21:51:37.456475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b50ec80 00:09:37.048 [2024-05-14 21:51:37.456491] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.048 [2024-05-14 21:51:37.457294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.048 [2024-05-14 21:51:37.457336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.048 pt2 00:09:37.048 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:37.048 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:37.048 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:09:37.306 [2024-05-14 21:51:37.684331] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:37.306 [2024-05-14 21:51:37.684947] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.306 [2024-05-14 21:51:37.685015] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b513300 00:09:37.306 [2024-05-14 21:51:37.685023] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:37.306 [2024-05-14 21:51:37.685062] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b571e20 00:09:37.306 [2024-05-14 21:51:37.685144] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b513300 00:09:37.306 [2024-05-14 21:51:37.685150] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b513300 00:09:37.306 [2024-05-14 21:51:37.685180] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.306 21:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.565 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:37.565 "name": "raid_bdev1", 00:09:37.565 "uuid": "23a0a5b4-123c-11ef-8c90-4585f0cfab08", 00:09:37.565 "strip_size_kb": 64, 00:09:37.565 "state": "online", 00:09:37.565 "raid_level": "concat", 00:09:37.565 "superblock": true, 00:09:37.565 "num_base_bdevs": 2, 00:09:37.565 "num_base_bdevs_discovered": 2, 00:09:37.565 "num_base_bdevs_operational": 2, 00:09:37.565 "base_bdevs_list": [ 00:09:37.565 { 00:09:37.565 "name": "pt1", 00:09:37.565 "uuid": "17598c69-c850-f95c-b643-73d5b2e6b98d", 00:09:37.565 "is_configured": true, 00:09:37.565 "data_offset": 2048, 00:09:37.565 "data_size": 63488 00:09:37.565 }, 00:09:37.565 { 00:09:37.565 "name": "pt2", 00:09:37.565 "uuid": "fa065d0c-fcb5-2258-9efa-cf40a4a16562", 00:09:37.565 "is_configured": true, 00:09:37.565 "data_offset": 2048, 00:09:37.565 "data_size": 63488 00:09:37.565 } 00:09:37.565 ] 00:09:37.565 }' 00:09:37.565 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:37.565 21:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@199 -- # local name 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:37.824 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:38.082 [2024-05-14 21:51:38.508339] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.082 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:38.082 "name": "raid_bdev1", 00:09:38.082 "aliases": [ 00:09:38.082 "23a0a5b4-123c-11ef-8c90-4585f0cfab08" 00:09:38.082 ], 00:09:38.082 "product_name": "Raid Volume", 00:09:38.082 "block_size": 512, 00:09:38.082 "num_blocks": 126976, 00:09:38.082 "uuid": "23a0a5b4-123c-11ef-8c90-4585f0cfab08", 00:09:38.082 "assigned_rate_limits": { 00:09:38.082 "rw_ios_per_sec": 0, 00:09:38.082 "rw_mbytes_per_sec": 0, 00:09:38.082 "r_mbytes_per_sec": 0, 00:09:38.082 "w_mbytes_per_sec": 0 00:09:38.082 }, 00:09:38.082 "claimed": false, 00:09:38.082 "zoned": false, 00:09:38.082 "supported_io_types": { 00:09:38.082 "read": true, 00:09:38.082 "write": true, 00:09:38.082 "unmap": true, 00:09:38.082 "write_zeroes": true, 00:09:38.082 "flush": true, 00:09:38.082 "reset": true, 00:09:38.082 "compare": false, 00:09:38.082 "compare_and_write": false, 00:09:38.082 "abort": false, 00:09:38.082 "nvme_admin": false, 00:09:38.082 "nvme_io": false 00:09:38.082 }, 00:09:38.082 "memory_domains": [ 00:09:38.082 { 00:09:38.082 "dma_device_id": "system", 00:09:38.082 "dma_device_type": 1 00:09:38.082 }, 00:09:38.082 { 00:09:38.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.082 "dma_device_type": 2 00:09:38.082 }, 00:09:38.082 { 00:09:38.082 "dma_device_id": "system", 00:09:38.082 "dma_device_type": 1 00:09:38.082 }, 00:09:38.082 { 00:09:38.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.082 "dma_device_type": 2 00:09:38.082 } 00:09:38.082 ], 00:09:38.082 "driver_specific": { 00:09:38.082 "raid": { 00:09:38.082 "uuid": "23a0a5b4-123c-11ef-8c90-4585f0cfab08", 00:09:38.082 "strip_size_kb": 64, 00:09:38.082 "state": "online", 00:09:38.082 "raid_level": "concat", 00:09:38.082 "superblock": true, 00:09:38.082 "num_base_bdevs": 2, 00:09:38.082 "num_base_bdevs_discovered": 2, 00:09:38.082 "num_base_bdevs_operational": 2, 00:09:38.082 "base_bdevs_list": [ 00:09:38.082 { 00:09:38.082 "name": "pt1", 00:09:38.082 "uuid": "17598c69-c850-f95c-b643-73d5b2e6b98d", 00:09:38.082 "is_configured": true, 00:09:38.082 "data_offset": 2048, 00:09:38.082 "data_size": 63488 00:09:38.082 }, 00:09:38.082 { 00:09:38.082 "name": "pt2", 00:09:38.082 "uuid": "fa065d0c-fcb5-2258-9efa-cf40a4a16562", 00:09:38.082 "is_configured": true, 00:09:38.082 "data_offset": 2048, 00:09:38.082 "data_size": 63488 00:09:38.082 } 00:09:38.082 ] 00:09:38.082 } 00:09:38.082 } 00:09:38.082 }' 00:09:38.082 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.082 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:09:38.082 pt2' 00:09:38.082 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:38.082 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:38.082 21:51:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:38.340 "name": "pt1", 00:09:38.340 "aliases": [ 00:09:38.340 "17598c69-c850-f95c-b643-73d5b2e6b98d" 00:09:38.340 ], 00:09:38.340 "product_name": "passthru", 00:09:38.340 "block_size": 512, 00:09:38.340 "num_blocks": 65536, 00:09:38.340 "uuid": "17598c69-c850-f95c-b643-73d5b2e6b98d", 00:09:38.340 "assigned_rate_limits": { 00:09:38.340 "rw_ios_per_sec": 0, 00:09:38.340 "rw_mbytes_per_sec": 0, 00:09:38.340 "r_mbytes_per_sec": 0, 00:09:38.340 "w_mbytes_per_sec": 0 00:09:38.340 }, 00:09:38.340 "claimed": true, 00:09:38.340 "claim_type": "exclusive_write", 00:09:38.340 "zoned": false, 00:09:38.340 "supported_io_types": { 00:09:38.340 "read": true, 00:09:38.340 "write": true, 00:09:38.340 "unmap": true, 00:09:38.340 "write_zeroes": true, 00:09:38.340 "flush": true, 00:09:38.340 "reset": true, 00:09:38.340 "compare": false, 00:09:38.340 "compare_and_write": false, 00:09:38.340 "abort": true, 00:09:38.340 "nvme_admin": false, 00:09:38.340 "nvme_io": false 00:09:38.340 }, 00:09:38.340 "memory_domains": [ 00:09:38.340 { 00:09:38.340 "dma_device_id": "system", 00:09:38.340 "dma_device_type": 1 00:09:38.340 }, 00:09:38.340 { 00:09:38.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.340 "dma_device_type": 2 00:09:38.340 } 00:09:38.340 ], 00:09:38.340 "driver_specific": { 00:09:38.340 "passthru": { 00:09:38.340 "name": "pt1", 00:09:38.340 "base_bdev_name": "malloc1" 00:09:38.340 } 00:09:38.340 } 00:09:38.340 }' 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:38.340 21:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:38.598 "name": "pt2", 00:09:38.598 "aliases": [ 00:09:38.598 "fa065d0c-fcb5-2258-9efa-cf40a4a16562" 00:09:38.598 ], 00:09:38.598 "product_name": "passthru", 00:09:38.598 "block_size": 512, 00:09:38.598 "num_blocks": 65536, 00:09:38.598 "uuid": "fa065d0c-fcb5-2258-9efa-cf40a4a16562", 00:09:38.598 
"assigned_rate_limits": { 00:09:38.598 "rw_ios_per_sec": 0, 00:09:38.598 "rw_mbytes_per_sec": 0, 00:09:38.598 "r_mbytes_per_sec": 0, 00:09:38.598 "w_mbytes_per_sec": 0 00:09:38.598 }, 00:09:38.598 "claimed": true, 00:09:38.598 "claim_type": "exclusive_write", 00:09:38.598 "zoned": false, 00:09:38.598 "supported_io_types": { 00:09:38.598 "read": true, 00:09:38.598 "write": true, 00:09:38.598 "unmap": true, 00:09:38.598 "write_zeroes": true, 00:09:38.598 "flush": true, 00:09:38.598 "reset": true, 00:09:38.598 "compare": false, 00:09:38.598 "compare_and_write": false, 00:09:38.598 "abort": true, 00:09:38.598 "nvme_admin": false, 00:09:38.598 "nvme_io": false 00:09:38.598 }, 00:09:38.598 "memory_domains": [ 00:09:38.598 { 00:09:38.598 "dma_device_id": "system", 00:09:38.598 "dma_device_type": 1 00:09:38.598 }, 00:09:38.598 { 00:09:38.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.598 "dma_device_type": 2 00:09:38.598 } 00:09:38.598 ], 00:09:38.598 "driver_specific": { 00:09:38.598 "passthru": { 00:09:38.598 "name": "pt2", 00:09:38.598 "base_bdev_name": "malloc2" 00:09:38.598 } 00:09:38.598 } 00:09:38.598 }' 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:38.598 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:38.855 [2024-05-14 21:51:39.444297] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.112 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=23a0a5b4-123c-11ef-8c90-4585f0cfab08 00:09:39.112 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 23a0a5b4-123c-11ef-8c90-4585f0cfab08 ']' 00:09:39.112 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:39.112 [2024-05-14 21:51:39.676254] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.112 [2024-05-14 21:51:39.676281] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.112 [2024-05-14 21:51:39.676304] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.112 
[2024-05-14 21:51:39.676317] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.112 [2024-05-14 21:51:39.676322] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b513300 name raid_bdev1, state offline 00:09:39.112 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:39.112 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.370 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:39.370 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:39.370 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.370 21:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:39.628 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.628 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:40.193 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:09:40.452 [2024-05-14 21:51:40.976227] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.452 [2024-05-14 21:51:40.976799] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:40.452 [2024-05-14 21:51:40.976825] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:40.452 [2024-05-14 21:51:40.976873] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:40.452 [2024-05-14 21:51:40.976884] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.452 [2024-05-14 21:51:40.976889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b513300 name raid_bdev1, state configuring 00:09:40.452 request: 00:09:40.452 { 00:09:40.452 "name": "raid_bdev1", 00:09:40.452 "raid_level": "concat", 00:09:40.452 "base_bdevs": [ 00:09:40.452 "malloc1", 00:09:40.452 "malloc2" 00:09:40.452 ], 00:09:40.452 "superblock": false, 00:09:40.452 "strip_size_kb": 64, 00:09:40.452 "method": "bdev_raid_create", 00:09:40.452 "req_id": 1 00:09:40.452 } 00:09:40.452 Got JSON-RPC error response 00:09:40.452 response: 00:09:40.452 { 00:09:40.452 "code": -17, 00:09:40.452 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:40.452 } 00:09:40.452 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:40.452 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:40.452 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:40.452 21:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:40.452 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.452 21:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:40.710 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:40.710 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:40.710 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.968 [2024-05-14 21:51:41.472203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.968 [2024-05-14 21:51:41.472254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.968 [2024-05-14 21:51:41.472283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b50ec80 00:09:40.968 [2024-05-14 21:51:41.472291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.968 [2024-05-14 21:51:41.472925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.968 [2024-05-14 21:51:41.472950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.968 [2024-05-14 21:51:41.472975] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:40.968 [2024-05-14 21:51:41.472987] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:40.968 pt1 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.968 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.226 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:41.226 "name": "raid_bdev1", 00:09:41.226 "uuid": "23a0a5b4-123c-11ef-8c90-4585f0cfab08", 00:09:41.226 "strip_size_kb": 64, 00:09:41.226 "state": "configuring", 00:09:41.226 "raid_level": "concat", 00:09:41.226 "superblock": true, 00:09:41.226 "num_base_bdevs": 2, 00:09:41.226 "num_base_bdevs_discovered": 1, 00:09:41.226 "num_base_bdevs_operational": 2, 00:09:41.226 "base_bdevs_list": [ 00:09:41.226 { 00:09:41.226 "name": "pt1", 00:09:41.226 "uuid": "17598c69-c850-f95c-b643-73d5b2e6b98d", 00:09:41.226 "is_configured": true, 00:09:41.227 "data_offset": 2048, 00:09:41.227 "data_size": 63488 00:09:41.227 }, 00:09:41.227 { 00:09:41.227 "name": null, 00:09:41.227 "uuid": "fa065d0c-fcb5-2258-9efa-cf40a4a16562", 00:09:41.227 "is_configured": false, 00:09:41.227 "data_offset": 2048, 00:09:41.227 "data_size": 63488 00:09:41.227 } 00:09:41.227 ] 00:09:41.227 }' 00:09:41.227 21:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:41.227 21:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.792 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:41.792 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:41.792 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.792 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.050 [2024-05-14 21:51:42.396186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.050 [2024-05-14 21:51:42.396250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.050 [2024-05-14 21:51:42.396281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x82b50ef00 00:09:42.050 [2024-05-14 21:51:42.396290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.050 [2024-05-14 21:51:42.396404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.050 [2024-05-14 21:51:42.396416] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.050 [2024-05-14 21:51:42.396439] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:42.050 [2024-05-14 21:51:42.396448] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.050 [2024-05-14 21:51:42.396474] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b513300 00:09:42.050 [2024-05-14 21:51:42.396478] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:42.050 [2024-05-14 21:51:42.396498] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b571e20 00:09:42.050 [2024-05-14 21:51:42.396550] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b513300 00:09:42.050 [2024-05-14 21:51:42.396555] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b513300 00:09:42.050 [2024-05-14 21:51:42.396576] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.050 pt2 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.050 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.308 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:42.308 "name": "raid_bdev1", 00:09:42.308 "uuid": "23a0a5b4-123c-11ef-8c90-4585f0cfab08", 00:09:42.308 "strip_size_kb": 64, 00:09:42.308 "state": "online", 00:09:42.308 "raid_level": "concat", 00:09:42.308 "superblock": true, 00:09:42.308 "num_base_bdevs": 2, 00:09:42.308 "num_base_bdevs_discovered": 2, 00:09:42.308 "num_base_bdevs_operational": 2, 00:09:42.308 "base_bdevs_list": [ 00:09:42.308 { 
00:09:42.308 "name": "pt1", 00:09:42.308 "uuid": "17598c69-c850-f95c-b643-73d5b2e6b98d", 00:09:42.308 "is_configured": true, 00:09:42.308 "data_offset": 2048, 00:09:42.308 "data_size": 63488 00:09:42.308 }, 00:09:42.308 { 00:09:42.308 "name": "pt2", 00:09:42.308 "uuid": "fa065d0c-fcb5-2258-9efa-cf40a4a16562", 00:09:42.308 "is_configured": true, 00:09:42.308 "data_offset": 2048, 00:09:42.308 "data_size": 63488 00:09:42.308 } 00:09:42.308 ] 00:09:42.308 }' 00:09:42.308 21:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:42.308 21:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.565 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.565 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:09:42.565 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:42.566 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:42.566 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:42.566 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:09:42.566 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:42.566 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:42.823 [2024-05-14 21:51:43.276190] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.823 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:42.824 "name": "raid_bdev1", 00:09:42.824 "aliases": [ 00:09:42.824 "23a0a5b4-123c-11ef-8c90-4585f0cfab08" 00:09:42.824 ], 00:09:42.824 "product_name": "Raid Volume", 00:09:42.824 "block_size": 512, 00:09:42.824 "num_blocks": 126976, 00:09:42.824 "uuid": "23a0a5b4-123c-11ef-8c90-4585f0cfab08", 00:09:42.824 "assigned_rate_limits": { 00:09:42.824 "rw_ios_per_sec": 0, 00:09:42.824 "rw_mbytes_per_sec": 0, 00:09:42.824 "r_mbytes_per_sec": 0, 00:09:42.824 "w_mbytes_per_sec": 0 00:09:42.824 }, 00:09:42.824 "claimed": false, 00:09:42.824 "zoned": false, 00:09:42.824 "supported_io_types": { 00:09:42.824 "read": true, 00:09:42.824 "write": true, 00:09:42.824 "unmap": true, 00:09:42.824 "write_zeroes": true, 00:09:42.824 "flush": true, 00:09:42.824 "reset": true, 00:09:42.824 "compare": false, 00:09:42.824 "compare_and_write": false, 00:09:42.824 "abort": false, 00:09:42.824 "nvme_admin": false, 00:09:42.824 "nvme_io": false 00:09:42.824 }, 00:09:42.824 "memory_domains": [ 00:09:42.824 { 00:09:42.824 "dma_device_id": "system", 00:09:42.824 "dma_device_type": 1 00:09:42.824 }, 00:09:42.824 { 00:09:42.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.824 "dma_device_type": 2 00:09:42.824 }, 00:09:42.824 { 00:09:42.824 "dma_device_id": "system", 00:09:42.824 "dma_device_type": 1 00:09:42.824 }, 00:09:42.824 { 00:09:42.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.824 "dma_device_type": 2 00:09:42.824 } 00:09:42.824 ], 00:09:42.824 "driver_specific": { 00:09:42.824 "raid": { 00:09:42.824 "uuid": "23a0a5b4-123c-11ef-8c90-4585f0cfab08", 00:09:42.824 "strip_size_kb": 64, 00:09:42.824 "state": "online", 00:09:42.824 "raid_level": "concat", 00:09:42.824 "superblock": true, 00:09:42.824 "num_base_bdevs": 2, 00:09:42.824 
"num_base_bdevs_discovered": 2, 00:09:42.824 "num_base_bdevs_operational": 2, 00:09:42.824 "base_bdevs_list": [ 00:09:42.824 { 00:09:42.824 "name": "pt1", 00:09:42.824 "uuid": "17598c69-c850-f95c-b643-73d5b2e6b98d", 00:09:42.824 "is_configured": true, 00:09:42.824 "data_offset": 2048, 00:09:42.824 "data_size": 63488 00:09:42.824 }, 00:09:42.824 { 00:09:42.824 "name": "pt2", 00:09:42.824 "uuid": "fa065d0c-fcb5-2258-9efa-cf40a4a16562", 00:09:42.824 "is_configured": true, 00:09:42.824 "data_offset": 2048, 00:09:42.824 "data_size": 63488 00:09:42.824 } 00:09:42.824 ] 00:09:42.824 } 00:09:42.824 } 00:09:42.824 }' 00:09:42.824 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.824 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:09:42.824 pt2' 00:09:42.824 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:42.824 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:42.824 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:43.082 "name": "pt1", 00:09:43.082 "aliases": [ 00:09:43.082 "17598c69-c850-f95c-b643-73d5b2e6b98d" 00:09:43.082 ], 00:09:43.082 "product_name": "passthru", 00:09:43.082 "block_size": 512, 00:09:43.082 "num_blocks": 65536, 00:09:43.082 "uuid": "17598c69-c850-f95c-b643-73d5b2e6b98d", 00:09:43.082 "assigned_rate_limits": { 00:09:43.082 "rw_ios_per_sec": 0, 00:09:43.082 "rw_mbytes_per_sec": 0, 00:09:43.082 "r_mbytes_per_sec": 0, 00:09:43.082 "w_mbytes_per_sec": 0 00:09:43.082 }, 00:09:43.082 "claimed": true, 00:09:43.082 "claim_type": "exclusive_write", 00:09:43.082 "zoned": false, 00:09:43.082 "supported_io_types": { 00:09:43.082 "read": true, 00:09:43.082 "write": true, 00:09:43.082 "unmap": true, 00:09:43.082 "write_zeroes": true, 00:09:43.082 "flush": true, 00:09:43.082 "reset": true, 00:09:43.082 "compare": false, 00:09:43.082 "compare_and_write": false, 00:09:43.082 "abort": true, 00:09:43.082 "nvme_admin": false, 00:09:43.082 "nvme_io": false 00:09:43.082 }, 00:09:43.082 "memory_domains": [ 00:09:43.082 { 00:09:43.082 "dma_device_id": "system", 00:09:43.082 "dma_device_type": 1 00:09:43.082 }, 00:09:43.082 { 00:09:43.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.082 "dma_device_type": 2 00:09:43.082 } 00:09:43.082 ], 00:09:43.082 "driver_specific": { 00:09:43.082 "passthru": { 00:09:43.082 "name": "pt1", 00:09:43.082 "base_bdev_name": "malloc1" 00:09:43.082 } 00:09:43.082 } 00:09:43.082 }' 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:43.082 21:51:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:43.082 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:43.339 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:43.339 "name": "pt2", 00:09:43.339 "aliases": [ 00:09:43.339 "fa065d0c-fcb5-2258-9efa-cf40a4a16562" 00:09:43.339 ], 00:09:43.339 "product_name": "passthru", 00:09:43.339 "block_size": 512, 00:09:43.339 "num_blocks": 65536, 00:09:43.339 "uuid": "fa065d0c-fcb5-2258-9efa-cf40a4a16562", 00:09:43.339 "assigned_rate_limits": { 00:09:43.339 "rw_ios_per_sec": 0, 00:09:43.339 "rw_mbytes_per_sec": 0, 00:09:43.339 "r_mbytes_per_sec": 0, 00:09:43.339 "w_mbytes_per_sec": 0 00:09:43.339 }, 00:09:43.339 "claimed": true, 00:09:43.339 "claim_type": "exclusive_write", 00:09:43.339 "zoned": false, 00:09:43.339 "supported_io_types": { 00:09:43.339 "read": true, 00:09:43.339 "write": true, 00:09:43.339 "unmap": true, 00:09:43.339 "write_zeroes": true, 00:09:43.339 "flush": true, 00:09:43.339 "reset": true, 00:09:43.339 "compare": false, 00:09:43.339 "compare_and_write": false, 00:09:43.339 "abort": true, 00:09:43.339 "nvme_admin": false, 00:09:43.339 "nvme_io": false 00:09:43.339 }, 00:09:43.339 "memory_domains": [ 00:09:43.339 { 00:09:43.339 "dma_device_id": "system", 00:09:43.339 "dma_device_type": 1 00:09:43.339 }, 00:09:43.339 { 00:09:43.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.339 "dma_device_type": 2 00:09:43.339 } 00:09:43.339 ], 00:09:43.339 "driver_specific": { 00:09:43.339 "passthru": { 00:09:43.339 "name": "pt2", 00:09:43.339 "base_bdev_name": "malloc2" 00:09:43.339 } 00:09:43.339 } 00:09:43.339 }' 00:09:43.339 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # [[ null == null ]] 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:43.596 21:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:43.853 [2024-05-14 21:51:44.236166] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.853 21:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 23a0a5b4-123c-11ef-8c90-4585f0cfab08 '!=' 23a0a5b4-123c-11ef-8c90-4585f0cfab08 ']' 00:09:43.853 21:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:43.853 21:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 50368 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 50368 ']' 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 50368 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 50368 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:09:43.854 killing process with pid 50368 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50368' 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 50368 00:09:43.854 [2024-05-14 21:51:44.266010] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.854 [2024-05-14 21:51:44.266042] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.854 [2024-05-14 21:51:44.266062] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.854 [2024-05-14 21:51:44.266066] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b513300 name raid_bdev1, state offline 00:09:43.854 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 50368 00:09:43.854 [2024-05-14 21:51:44.277457] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.111 21:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:09:44.111 00:09:44.111 real 0m8.990s 00:09:44.112 user 0m15.794s 00:09:44.112 sys 0m1.432s 00:09:44.112 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:44.112 21:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.112 ************************************ 00:09:44.112 END TEST raid_superblock_test 00:09:44.112 ************************************ 00:09:44.112 21:51:44 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:09:44.112 21:51:44 bdev_raid -- bdev/bdev_raid.sh@815 -- 
# run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:44.112 21:51:44 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:44.112 21:51:44 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:44.112 21:51:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.112 ************************************ 00:09:44.112 START TEST raid_state_function_test 00:09:44.112 ************************************ 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=50635 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50635' 00:09:44.112 Process raid pid: 50635 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 50635 /var/tmp/spdk-raid.sock 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 50635 ']' 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:44.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:44.112 21:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.112 [2024-05-14 21:51:44.513515] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:44.112 [2024-05-14 21:51:44.513784] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:44.679 EAL: TSC is not safe to use in SMP mode 00:09:44.679 EAL: TSC is not invariant 00:09:44.679 [2024-05-14 21:51:45.034235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.679 [2024-05-14 21:51:45.131303] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:44.679 [2024-05-14 21:51:45.133965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.679 [2024-05-14 21:51:45.134938] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.679 [2024-05-14 21:51:45.134956] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:45.246 [2024-05-14 21:51:45.772141] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.246 [2024-05-14 21:51:45.772197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.246 [2024-05-14 21:51:45.772203] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.246 [2024-05-14 21:51:45.772212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:45.246 21:51:45 
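At this point raid_state_function_test has started a bare bdev_svc application with raid debug logging and is driving it over a dedicated RPC socket. Its first step creates a raid1 array whose base bdevs do not exist yet, which is why the bdev.c open calls above report them as not found and the array stays in the "configuring" state. Roughly, with the same binary and socket (the backgrounding and readiness wait are simplified here; the script uses its own waitforlisten helper):

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # start the minimal bdev application used by the test
  /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # create a raid1 array whose members are not present yet
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # the array is registered but waits for its base bdevs
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'   # configuring
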
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.246 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.504 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:45.504 "name": "Existed_Raid", 00:09:45.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.504 "strip_size_kb": 0, 00:09:45.504 "state": "configuring", 00:09:45.504 "raid_level": "raid1", 00:09:45.504 "superblock": false, 00:09:45.504 "num_base_bdevs": 2, 00:09:45.504 "num_base_bdevs_discovered": 0, 00:09:45.504 "num_base_bdevs_operational": 2, 00:09:45.504 "base_bdevs_list": [ 00:09:45.504 { 00:09:45.504 "name": "BaseBdev1", 00:09:45.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.504 "is_configured": false, 00:09:45.504 "data_offset": 0, 00:09:45.504 "data_size": 0 00:09:45.504 }, 00:09:45.504 { 00:09:45.504 "name": "BaseBdev2", 00:09:45.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.504 "is_configured": false, 00:09:45.504 "data_offset": 0, 00:09:45.504 "data_size": 0 00:09:45.504 } 00:09:45.504 ] 00:09:45.504 }' 00:09:45.504 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:45.504 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.070 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:46.070 [2024-05-14 21:51:46.608097] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.070 [2024-05-14 21:51:46.608121] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b097300 name Existed_Raid, state configuring 00:09:46.070 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:46.328 [2024-05-14 21:51:46.828100] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.328 [2024-05-14 21:51:46.828146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.328 [2024-05-14 21:51:46.828151] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.328 [2024-05-14 21:51:46.828160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.328 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.586 [2024-05-14 21:51:47.085142] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.586 BaseBdev1 00:09:46.586 21:51:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:09:46.586 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:46.586 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:46.586 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:46.586 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:46.586 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:46.586 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:46.843 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.101 [ 00:09:47.101 { 00:09:47.101 "name": "BaseBdev1", 00:09:47.101 "aliases": [ 00:09:47.101 "293af18e-123c-11ef-8c90-4585f0cfab08" 00:09:47.101 ], 00:09:47.101 "product_name": "Malloc disk", 00:09:47.101 "block_size": 512, 00:09:47.101 "num_blocks": 65536, 00:09:47.101 "uuid": "293af18e-123c-11ef-8c90-4585f0cfab08", 00:09:47.101 "assigned_rate_limits": { 00:09:47.101 "rw_ios_per_sec": 0, 00:09:47.101 "rw_mbytes_per_sec": 0, 00:09:47.101 "r_mbytes_per_sec": 0, 00:09:47.101 "w_mbytes_per_sec": 0 00:09:47.101 }, 00:09:47.101 "claimed": true, 00:09:47.101 "claim_type": "exclusive_write", 00:09:47.101 "zoned": false, 00:09:47.101 "supported_io_types": { 00:09:47.101 "read": true, 00:09:47.101 "write": true, 00:09:47.101 "unmap": true, 00:09:47.101 "write_zeroes": true, 00:09:47.101 "flush": true, 00:09:47.101 "reset": true, 00:09:47.101 "compare": false, 00:09:47.101 "compare_and_write": false, 00:09:47.101 "abort": true, 00:09:47.101 "nvme_admin": false, 00:09:47.101 "nvme_io": false 00:09:47.101 }, 00:09:47.101 "memory_domains": [ 00:09:47.101 { 00:09:47.101 "dma_device_id": "system", 00:09:47.101 "dma_device_type": 1 00:09:47.101 }, 00:09:47.101 { 00:09:47.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.101 "dma_device_type": 2 00:09:47.101 } 00:09:47.101 ], 00:09:47.101 "driver_specific": {} 00:09:47.101 } 00:09:47.101 ] 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:47.101 21:51:47 
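Creating the first base bdev shows the intermediate state of the raid state machine: BaseBdev1 is claimed by the raid module as soon as it is examined, but with only one of two members discovered the array still reports "configuring". A minimal sketch of that step (rpc/sock shorthand as before; the 32 MiB of 512-byte blocks matches the num_blocks 65536 in the dump above):

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # create the first member, then let examine/claim finish
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
  $rpc -s $sock bdev_wait_for_examine
  # one of two base bdevs discovered; the array is still configuring
  $rpc -s $sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
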
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.101 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.359 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:47.359 "name": "Existed_Raid", 00:09:47.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.359 "strip_size_kb": 0, 00:09:47.359 "state": "configuring", 00:09:47.359 "raid_level": "raid1", 00:09:47.359 "superblock": false, 00:09:47.359 "num_base_bdevs": 2, 00:09:47.359 "num_base_bdevs_discovered": 1, 00:09:47.359 "num_base_bdevs_operational": 2, 00:09:47.359 "base_bdevs_list": [ 00:09:47.359 { 00:09:47.359 "name": "BaseBdev1", 00:09:47.359 "uuid": "293af18e-123c-11ef-8c90-4585f0cfab08", 00:09:47.359 "is_configured": true, 00:09:47.359 "data_offset": 0, 00:09:47.359 "data_size": 65536 00:09:47.359 }, 00:09:47.359 { 00:09:47.359 "name": "BaseBdev2", 00:09:47.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.359 "is_configured": false, 00:09:47.359 "data_offset": 0, 00:09:47.359 "data_size": 0 00:09:47.359 } 00:09:47.359 ] 00:09:47.359 }' 00:09:47.359 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:47.359 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.925 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:48.185 [2024-05-14 21:51:48.540073] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.185 [2024-05-14 21:51:48.540126] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b097300 name Existed_Raid, state configuring 00:09:48.185 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:48.444 [2024-05-14 21:51:48.828084] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.444 [2024-05-14 21:51:48.828969] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.444 [2024-05-14 21:51:48.829010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.444 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.702 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:48.702 "name": "Existed_Raid", 00:09:48.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.702 "strip_size_kb": 0, 00:09:48.702 "state": "configuring", 00:09:48.702 "raid_level": "raid1", 00:09:48.702 "superblock": false, 00:09:48.702 "num_base_bdevs": 2, 00:09:48.702 "num_base_bdevs_discovered": 1, 00:09:48.702 "num_base_bdevs_operational": 2, 00:09:48.702 "base_bdevs_list": [ 00:09:48.702 { 00:09:48.702 "name": "BaseBdev1", 00:09:48.702 "uuid": "293af18e-123c-11ef-8c90-4585f0cfab08", 00:09:48.702 "is_configured": true, 00:09:48.702 "data_offset": 0, 00:09:48.702 "data_size": 65536 00:09:48.702 }, 00:09:48.702 { 00:09:48.702 "name": "BaseBdev2", 00:09:48.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.702 "is_configured": false, 00:09:48.702 "data_offset": 0, 00:09:48.702 "data_size": 0 00:09:48.702 } 00:09:48.702 ] 00:09:48.702 }' 00:09:48.702 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:48.702 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.960 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.219 [2024-05-14 21:51:49.636205] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.219 [2024-05-14 21:51:49.636235] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b097300 00:09:49.219 [2024-05-14 21:51:49.636240] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:49.219 [2024-05-14 21:51:49.636261] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b0f5ec0 00:09:49.219 [2024-05-14 21:51:49.636356] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b097300 00:09:49.219 [2024-05-14 21:51:49.636360] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b097300 00:09:49.219 [2024-05-14 21:51:49.636395] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.219 BaseBdev2 00:09:49.219 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:09:49.219 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:49.219 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:49.219 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:09:49.219 21:51:49 
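The second malloc bdev completes the member set: the debug lines above show BaseBdev2 being claimed, the raid io device being registered (0x82b097300) and the array being created under the name Existed_Raid, i.e. the transition from configuring to online. A sketch of the same step:

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # adding the last missing member brings the raid1 array online
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
  $rpc -s $sock bdev_wait_for_examine
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # online
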
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:49.219 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:49.219 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:49.477 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.738 [ 00:09:49.738 { 00:09:49.738 "name": "BaseBdev2", 00:09:49.738 "aliases": [ 00:09:49.738 "2ac05713-123c-11ef-8c90-4585f0cfab08" 00:09:49.738 ], 00:09:49.738 "product_name": "Malloc disk", 00:09:49.738 "block_size": 512, 00:09:49.738 "num_blocks": 65536, 00:09:49.738 "uuid": "2ac05713-123c-11ef-8c90-4585f0cfab08", 00:09:49.738 "assigned_rate_limits": { 00:09:49.738 "rw_ios_per_sec": 0, 00:09:49.738 "rw_mbytes_per_sec": 0, 00:09:49.738 "r_mbytes_per_sec": 0, 00:09:49.738 "w_mbytes_per_sec": 0 00:09:49.738 }, 00:09:49.738 "claimed": true, 00:09:49.738 "claim_type": "exclusive_write", 00:09:49.738 "zoned": false, 00:09:49.738 "supported_io_types": { 00:09:49.738 "read": true, 00:09:49.738 "write": true, 00:09:49.738 "unmap": true, 00:09:49.738 "write_zeroes": true, 00:09:49.738 "flush": true, 00:09:49.738 "reset": true, 00:09:49.738 "compare": false, 00:09:49.738 "compare_and_write": false, 00:09:49.738 "abort": true, 00:09:49.738 "nvme_admin": false, 00:09:49.738 "nvme_io": false 00:09:49.738 }, 00:09:49.738 "memory_domains": [ 00:09:49.738 { 00:09:49.738 "dma_device_id": "system", 00:09:49.738 "dma_device_type": 1 00:09:49.738 }, 00:09:49.738 { 00:09:49.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.738 "dma_device_type": 2 00:09:49.738 } 00:09:49.738 ], 00:09:49.738 "driver_specific": {} 00:09:49.738 } 00:09:49.738 ] 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.738 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.010 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:50.010 "name": "Existed_Raid", 00:09:50.010 "uuid": "2ac05dd8-123c-11ef-8c90-4585f0cfab08", 00:09:50.010 "strip_size_kb": 0, 00:09:50.010 "state": "online", 00:09:50.010 "raid_level": "raid1", 00:09:50.010 "superblock": false, 00:09:50.010 "num_base_bdevs": 2, 00:09:50.010 "num_base_bdevs_discovered": 2, 00:09:50.010 "num_base_bdevs_operational": 2, 00:09:50.010 "base_bdevs_list": [ 00:09:50.010 { 00:09:50.010 "name": "BaseBdev1", 00:09:50.010 "uuid": "293af18e-123c-11ef-8c90-4585f0cfab08", 00:09:50.010 "is_configured": true, 00:09:50.010 "data_offset": 0, 00:09:50.010 "data_size": 65536 00:09:50.010 }, 00:09:50.010 { 00:09:50.010 "name": "BaseBdev2", 00:09:50.010 "uuid": "2ac05713-123c-11ef-8c90-4585f0cfab08", 00:09:50.010 "is_configured": true, 00:09:50.010 "data_offset": 0, 00:09:50.010 "data_size": 65536 00:09:50.010 } 00:09:50.010 ] 00:09:50.010 }' 00:09:50.010 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:50.010 21:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:50.268 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:50.528 [2024-05-14 21:51:50.996088] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.528 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:50.528 "name": "Existed_Raid", 00:09:50.528 "aliases": [ 00:09:50.528 "2ac05dd8-123c-11ef-8c90-4585f0cfab08" 00:09:50.528 ], 00:09:50.528 "product_name": "Raid Volume", 00:09:50.528 "block_size": 512, 00:09:50.528 "num_blocks": 65536, 00:09:50.528 "uuid": "2ac05dd8-123c-11ef-8c90-4585f0cfab08", 00:09:50.528 "assigned_rate_limits": { 00:09:50.528 "rw_ios_per_sec": 0, 00:09:50.528 "rw_mbytes_per_sec": 0, 00:09:50.528 "r_mbytes_per_sec": 0, 00:09:50.528 "w_mbytes_per_sec": 0 00:09:50.528 }, 00:09:50.528 "claimed": false, 00:09:50.528 "zoned": false, 00:09:50.528 "supported_io_types": { 00:09:50.528 "read": true, 00:09:50.528 "write": true, 00:09:50.528 "unmap": false, 00:09:50.528 "write_zeroes": true, 00:09:50.528 "flush": false, 00:09:50.528 "reset": true, 00:09:50.528 "compare": false, 00:09:50.528 "compare_and_write": false, 00:09:50.528 "abort": false, 00:09:50.528 "nvme_admin": false, 00:09:50.528 "nvme_io": false 00:09:50.528 }, 00:09:50.528 "memory_domains": [ 00:09:50.528 { 
00:09:50.528 "dma_device_id": "system", 00:09:50.528 "dma_device_type": 1 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.528 "dma_device_type": 2 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "system", 00:09:50.528 "dma_device_type": 1 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.528 "dma_device_type": 2 00:09:50.528 } 00:09:50.528 ], 00:09:50.528 "driver_specific": { 00:09:50.528 "raid": { 00:09:50.528 "uuid": "2ac05dd8-123c-11ef-8c90-4585f0cfab08", 00:09:50.528 "strip_size_kb": 0, 00:09:50.528 "state": "online", 00:09:50.528 "raid_level": "raid1", 00:09:50.528 "superblock": false, 00:09:50.528 "num_base_bdevs": 2, 00:09:50.528 "num_base_bdevs_discovered": 2, 00:09:50.528 "num_base_bdevs_operational": 2, 00:09:50.528 "base_bdevs_list": [ 00:09:50.528 { 00:09:50.528 "name": "BaseBdev1", 00:09:50.528 "uuid": "293af18e-123c-11ef-8c90-4585f0cfab08", 00:09:50.528 "is_configured": true, 00:09:50.528 "data_offset": 0, 00:09:50.528 "data_size": 65536 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "name": "BaseBdev2", 00:09:50.528 "uuid": "2ac05713-123c-11ef-8c90-4585f0cfab08", 00:09:50.528 "is_configured": true, 00:09:50.528 "data_offset": 0, 00:09:50.528 "data_size": 65536 00:09:50.528 } 00:09:50.528 ] 00:09:50.528 } 00:09:50.528 } 00:09:50.528 }' 00:09:50.528 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.528 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:09:50.528 BaseBdev2' 00:09:50.528 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:50.528 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:50.528 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:50.787 "name": "BaseBdev1", 00:09:50.787 "aliases": [ 00:09:50.787 "293af18e-123c-11ef-8c90-4585f0cfab08" 00:09:50.787 ], 00:09:50.787 "product_name": "Malloc disk", 00:09:50.787 "block_size": 512, 00:09:50.787 "num_blocks": 65536, 00:09:50.787 "uuid": "293af18e-123c-11ef-8c90-4585f0cfab08", 00:09:50.787 "assigned_rate_limits": { 00:09:50.787 "rw_ios_per_sec": 0, 00:09:50.787 "rw_mbytes_per_sec": 0, 00:09:50.787 "r_mbytes_per_sec": 0, 00:09:50.787 "w_mbytes_per_sec": 0 00:09:50.787 }, 00:09:50.787 "claimed": true, 00:09:50.787 "claim_type": "exclusive_write", 00:09:50.787 "zoned": false, 00:09:50.787 "supported_io_types": { 00:09:50.787 "read": true, 00:09:50.787 "write": true, 00:09:50.787 "unmap": true, 00:09:50.787 "write_zeroes": true, 00:09:50.787 "flush": true, 00:09:50.787 "reset": true, 00:09:50.787 "compare": false, 00:09:50.787 "compare_and_write": false, 00:09:50.787 "abort": true, 00:09:50.787 "nvme_admin": false, 00:09:50.787 "nvme_io": false 00:09:50.787 }, 00:09:50.787 "memory_domains": [ 00:09:50.787 { 00:09:50.787 "dma_device_id": "system", 00:09:50.787 "dma_device_type": 1 00:09:50.787 }, 00:09:50.787 { 00:09:50.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.787 "dma_device_type": 2 00:09:50.787 } 00:09:50.787 ], 00:09:50.787 "driver_specific": {} 00:09:50.787 }' 00:09:50.787 21:51:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:50.787 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:51.045 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:51.045 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:51.045 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:51.045 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:51.304 "name": "BaseBdev2", 00:09:51.304 "aliases": [ 00:09:51.304 "2ac05713-123c-11ef-8c90-4585f0cfab08" 00:09:51.304 ], 00:09:51.304 "product_name": "Malloc disk", 00:09:51.304 "block_size": 512, 00:09:51.304 "num_blocks": 65536, 00:09:51.304 "uuid": "2ac05713-123c-11ef-8c90-4585f0cfab08", 00:09:51.304 "assigned_rate_limits": { 00:09:51.304 "rw_ios_per_sec": 0, 00:09:51.304 "rw_mbytes_per_sec": 0, 00:09:51.304 "r_mbytes_per_sec": 0, 00:09:51.304 "w_mbytes_per_sec": 0 00:09:51.304 }, 00:09:51.304 "claimed": true, 00:09:51.304 "claim_type": "exclusive_write", 00:09:51.304 "zoned": false, 00:09:51.304 "supported_io_types": { 00:09:51.304 "read": true, 00:09:51.304 "write": true, 00:09:51.304 "unmap": true, 00:09:51.304 "write_zeroes": true, 00:09:51.304 "flush": true, 00:09:51.304 "reset": true, 00:09:51.304 "compare": false, 00:09:51.304 "compare_and_write": false, 00:09:51.304 "abort": true, 00:09:51.304 "nvme_admin": false, 00:09:51.304 "nvme_io": false 00:09:51.304 }, 00:09:51.304 "memory_domains": [ 00:09:51.304 { 00:09:51.304 "dma_device_id": "system", 00:09:51.304 "dma_device_type": 1 00:09:51.304 }, 00:09:51.304 { 00:09:51.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.304 "dma_device_type": 2 00:09:51.304 } 00:09:51.304 ], 00:09:51.304 "driver_specific": {} 00:09:51.304 }' 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:51.304 
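The property checks for Existed_Raid repeat the pattern used earlier for raid_bdev1: the top-level record is a "Raid Volume", and each configured base bdev (Malloc disks here) is checked for a 512-byte block size and null metadata fields. One detail worth noting in the dump above is the supported_io_types section: this raid1 volume reports unmap and flush as false, whereas the concat volume in the previous test reported both as true. That difference can be read back directly, e.g.:

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # show which I/O types the assembled volume advertises (raid1 lacks unmap/flush here)
  $rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[].supported_io_types | {unmap, flush}'
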
21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:09:51.304 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:51.562 [2024-05-14 21:51:52.052059] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.563 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.821 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:51.821 "name": "Existed_Raid", 00:09:51.821 "uuid": "2ac05dd8-123c-11ef-8c90-4585f0cfab08", 00:09:51.821 "strip_size_kb": 0, 00:09:51.821 "state": "online", 00:09:51.821 "raid_level": "raid1", 00:09:51.821 "superblock": false, 00:09:51.821 "num_base_bdevs": 2, 00:09:51.821 "num_base_bdevs_discovered": 1, 00:09:51.821 "num_base_bdevs_operational": 1, 00:09:51.821 "base_bdevs_list": [ 00:09:51.821 { 00:09:51.821 "name": null, 00:09:51.821 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.821 "is_configured": false, 00:09:51.821 "data_offset": 0, 00:09:51.821 "data_size": 65536 00:09:51.821 }, 00:09:51.821 { 00:09:51.821 "name": "BaseBdev2", 00:09:51.821 "uuid": "2ac05713-123c-11ef-8c90-4585f0cfab08", 00:09:51.821 "is_configured": true, 00:09:51.821 "data_offset": 0, 00:09:51.821 "data_size": 65536 00:09:51.821 } 00:09:51.821 ] 00:09:51.821 }' 00:09:51.821 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:51.821 21:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.079 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:52.079 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.079 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.079 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:09:52.336 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:09:52.336 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:52.336 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:52.594 [2024-05-14 21:51:53.138085] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.594 [2024-05-14 21:51:53.138120] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.594 [2024-05-14 21:51:53.143810] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.594 [2024-05-14 21:51:53.143852] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.594 [2024-05-14 21:51:53.143857] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b097300 name Existed_Raid, state offline 00:09:52.594 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:52.594 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.594 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.594 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:09:52.852 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:09:52.852 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:09:52.852 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:09:52.852 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 50635 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 50635 ']' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 50635 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 
-- # '[' FreeBSD = Linux ']' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 50635 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:09:53.109 killing process with pid 50635 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50635' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 50635 00:09:53.109 [2024-05-14 21:51:53.452217] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.109 [2024-05-14 21:51:53.452249] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 50635 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:09:53.109 00:09:53.109 real 0m9.130s 00:09:53.109 user 0m15.957s 00:09:53.109 sys 0m1.528s 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.109 ************************************ 00:09:53.109 END TEST raid_state_function_test 00:09:53.109 ************************************ 00:09:53.109 21:51:53 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:53.109 21:51:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:53.109 21:51:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.109 21:51:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.109 ************************************ 00:09:53.109 START TEST raid_state_function_test_sb 00:09:53.109 ************************************ 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:09:53.109 21:51:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=50906 00:09:53.109 Process raid pid: 50906 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50906' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 50906 /var/tmp/spdk-raid.sock 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 50906 ']' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:53.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:53.109 21:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.109 [2024-05-14 21:51:53.694778] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:53.109 [2024-05-14 21:51:53.695019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:53.673 EAL: TSC is not safe to use in SMP mode 00:09:53.673 EAL: TSC is not invariant 00:09:53.673 [2024-05-14 21:51:54.223409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.930 [2024-05-14 21:51:54.311882] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:53.930 [2024-05-14 21:51:54.314135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.930 [2024-05-14 21:51:54.314902] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.930 [2024-05-14 21:51:54.314921] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.247 21:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:54.247 21:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:09:54.247 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:54.505 [2024-05-14 21:51:54.946826] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.505 [2024-05-14 21:51:54.946873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.505 [2024-05-14 21:51:54.946878] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.505 [2024-05-14 21:51:54.946887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.505 21:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.762 21:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:54.762 "name": "Existed_Raid", 00:09:54.762 "uuid": "2deab22e-123c-11ef-8c90-4585f0cfab08", 00:09:54.762 "strip_size_kb": 0, 00:09:54.763 "state": "configuring", 00:09:54.763 "raid_level": "raid1", 00:09:54.763 "superblock": true, 00:09:54.763 "num_base_bdevs": 2, 00:09:54.763 "num_base_bdevs_discovered": 0, 00:09:54.763 "num_base_bdevs_operational": 2, 00:09:54.763 "base_bdevs_list": [ 00:09:54.763 { 00:09:54.763 "name": "BaseBdev1", 00:09:54.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.763 "is_configured": false, 00:09:54.763 "data_offset": 0, 00:09:54.763 "data_size": 0 00:09:54.763 }, 
00:09:54.763 { 00:09:54.763 "name": "BaseBdev2", 00:09:54.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.763 "is_configured": false, 00:09:54.763 "data_offset": 0, 00:09:54.763 "data_size": 0 00:09:54.763 } 00:09:54.763 ] 00:09:54.763 }' 00:09:54.763 21:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:54.763 21:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.019 21:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:55.278 [2024-05-14 21:51:55.802804] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.278 [2024-05-14 21:51:55.802834] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ce3c300 name Existed_Raid, state configuring 00:09:55.278 21:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:55.536 [2024-05-14 21:51:56.094808] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.536 [2024-05-14 21:51:56.094856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.536 [2024-05-14 21:51:56.094861] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.536 [2024-05-14 21:51:56.094869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.536 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.794 [2024-05-14 21:51:56.327822] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.794 BaseBdev1 00:09:55.794 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:09:55.794 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:09:55.794 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:55.794 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:55.794 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:55.794 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:55.794 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:56.052 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.309 [ 00:09:56.309 { 00:09:56.309 "name": "BaseBdev1", 00:09:56.309 "aliases": [ 00:09:56.309 "2ebd4389-123c-11ef-8c90-4585f0cfab08" 00:09:56.309 ], 00:09:56.309 "product_name": "Malloc disk", 00:09:56.309 "block_size": 512, 00:09:56.309 "num_blocks": 65536, 00:09:56.309 "uuid": "2ebd4389-123c-11ef-8c90-4585f0cfab08", 00:09:56.309 "assigned_rate_limits": { 00:09:56.309 "rw_ios_per_sec": 0, 00:09:56.309 
"rw_mbytes_per_sec": 0, 00:09:56.309 "r_mbytes_per_sec": 0, 00:09:56.309 "w_mbytes_per_sec": 0 00:09:56.309 }, 00:09:56.309 "claimed": true, 00:09:56.309 "claim_type": "exclusive_write", 00:09:56.309 "zoned": false, 00:09:56.309 "supported_io_types": { 00:09:56.309 "read": true, 00:09:56.309 "write": true, 00:09:56.309 "unmap": true, 00:09:56.309 "write_zeroes": true, 00:09:56.309 "flush": true, 00:09:56.309 "reset": true, 00:09:56.309 "compare": false, 00:09:56.309 "compare_and_write": false, 00:09:56.309 "abort": true, 00:09:56.309 "nvme_admin": false, 00:09:56.309 "nvme_io": false 00:09:56.309 }, 00:09:56.309 "memory_domains": [ 00:09:56.309 { 00:09:56.309 "dma_device_id": "system", 00:09:56.309 "dma_device_type": 1 00:09:56.309 }, 00:09:56.309 { 00:09:56.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.309 "dma_device_type": 2 00:09:56.309 } 00:09:56.309 ], 00:09:56.309 "driver_specific": {} 00:09:56.309 } 00:09:56.309 ] 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.309 21:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.567 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:56.567 "name": "Existed_Raid", 00:09:56.567 "uuid": "2e99dd33-123c-11ef-8c90-4585f0cfab08", 00:09:56.567 "strip_size_kb": 0, 00:09:56.567 "state": "configuring", 00:09:56.567 "raid_level": "raid1", 00:09:56.567 "superblock": true, 00:09:56.567 "num_base_bdevs": 2, 00:09:56.567 "num_base_bdevs_discovered": 1, 00:09:56.567 "num_base_bdevs_operational": 2, 00:09:56.567 "base_bdevs_list": [ 00:09:56.567 { 00:09:56.567 "name": "BaseBdev1", 00:09:56.567 "uuid": "2ebd4389-123c-11ef-8c90-4585f0cfab08", 00:09:56.567 "is_configured": true, 00:09:56.567 "data_offset": 2048, 00:09:56.567 "data_size": 63488 00:09:56.567 }, 00:09:56.567 { 00:09:56.567 "name": "BaseBdev2", 00:09:56.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.567 "is_configured": false, 00:09:56.567 "data_offset": 0, 00:09:56.567 "data_size": 0 00:09:56.567 } 00:09:56.567 ] 00:09:56.567 }' 
00:09:56.567 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:56.567 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.132 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:57.132 [2024-05-14 21:51:57.698794] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.132 [2024-05-14 21:51:57.698826] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ce3c300 name Existed_Raid, state configuring 00:09:57.132 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:57.389 [2024-05-14 21:51:57.934808] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.389 [2024-05-14 21:51:57.935606] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.389 [2024-05-14 21:51:57.935651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.389 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.646 21:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:57.646 "name": "Existed_Raid", 00:09:57.646 "uuid": "2fb2a014-123c-11ef-8c90-4585f0cfab08", 00:09:57.646 "strip_size_kb": 0, 00:09:57.646 "state": "configuring", 00:09:57.646 "raid_level": "raid1", 00:09:57.646 "superblock": true, 00:09:57.646 "num_base_bdevs": 2, 00:09:57.646 "num_base_bdevs_discovered": 1, 00:09:57.646 "num_base_bdevs_operational": 2, 00:09:57.646 "base_bdevs_list": [ 00:09:57.646 { 
00:09:57.646 "name": "BaseBdev1", 00:09:57.646 "uuid": "2ebd4389-123c-11ef-8c90-4585f0cfab08", 00:09:57.646 "is_configured": true, 00:09:57.646 "data_offset": 2048, 00:09:57.646 "data_size": 63488 00:09:57.646 }, 00:09:57.646 { 00:09:57.646 "name": "BaseBdev2", 00:09:57.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.646 "is_configured": false, 00:09:57.646 "data_offset": 0, 00:09:57.646 "data_size": 0 00:09:57.646 } 00:09:57.646 ] 00:09:57.646 }' 00:09:57.646 21:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:57.646 21:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.211 21:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.468 [2024-05-14 21:51:58.830941] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.468 [2024-05-14 21:51:58.831006] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ce3c300 00:09:58.468 [2024-05-14 21:51:58.831012] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.468 [2024-05-14 21:51:58.831033] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ce9aec0 00:09:58.469 [2024-05-14 21:51:58.831080] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ce3c300 00:09:58.469 [2024-05-14 21:51:58.831084] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ce3c300 00:09:58.469 [2024-05-14 21:51:58.831105] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.469 BaseBdev2 00:09:58.469 21:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:09:58.469 21:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:09:58.469 21:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:58.469 21:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:09:58.469 21:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:58.469 21:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:58.469 21:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:58.726 21:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.726 [ 00:09:58.726 { 00:09:58.726 "name": "BaseBdev2", 00:09:58.726 "aliases": [ 00:09:58.726 "303b584d-123c-11ef-8c90-4585f0cfab08" 00:09:58.726 ], 00:09:58.726 "product_name": "Malloc disk", 00:09:58.726 "block_size": 512, 00:09:58.726 "num_blocks": 65536, 00:09:58.726 "uuid": "303b584d-123c-11ef-8c90-4585f0cfab08", 00:09:58.726 "assigned_rate_limits": { 00:09:58.726 "rw_ios_per_sec": 0, 00:09:58.726 "rw_mbytes_per_sec": 0, 00:09:58.726 "r_mbytes_per_sec": 0, 00:09:58.726 "w_mbytes_per_sec": 0 00:09:58.726 }, 00:09:58.726 "claimed": true, 00:09:58.726 "claim_type": "exclusive_write", 00:09:58.726 "zoned": false, 00:09:58.726 "supported_io_types": { 
00:09:58.726 "read": true, 00:09:58.726 "write": true, 00:09:58.726 "unmap": true, 00:09:58.726 "write_zeroes": true, 00:09:58.726 "flush": true, 00:09:58.726 "reset": true, 00:09:58.726 "compare": false, 00:09:58.726 "compare_and_write": false, 00:09:58.726 "abort": true, 00:09:58.726 "nvme_admin": false, 00:09:58.726 "nvme_io": false 00:09:58.726 }, 00:09:58.726 "memory_domains": [ 00:09:58.726 { 00:09:58.726 "dma_device_id": "system", 00:09:58.726 "dma_device_type": 1 00:09:58.726 }, 00:09:58.726 { 00:09:58.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.726 "dma_device_type": 2 00:09:58.726 } 00:09:58.726 ], 00:09:58.726 "driver_specific": {} 00:09:58.726 } 00:09:58.726 ] 00:09:58.726 21:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.727 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.984 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:58.984 "name": "Existed_Raid", 00:09:58.984 "uuid": "2fb2a014-123c-11ef-8c90-4585f0cfab08", 00:09:58.984 "strip_size_kb": 0, 00:09:58.984 "state": "online", 00:09:58.984 "raid_level": "raid1", 00:09:58.984 "superblock": true, 00:09:58.984 "num_base_bdevs": 2, 00:09:58.984 "num_base_bdevs_discovered": 2, 00:09:58.984 "num_base_bdevs_operational": 2, 00:09:58.984 "base_bdevs_list": [ 00:09:58.984 { 00:09:58.984 "name": "BaseBdev1", 00:09:58.984 "uuid": "2ebd4389-123c-11ef-8c90-4585f0cfab08", 00:09:58.984 "is_configured": true, 00:09:58.984 "data_offset": 2048, 00:09:58.984 "data_size": 63488 00:09:58.984 }, 00:09:58.984 { 00:09:58.985 "name": "BaseBdev2", 00:09:58.985 "uuid": "303b584d-123c-11ef-8c90-4585f0cfab08", 00:09:58.985 "is_configured": true, 00:09:58.985 "data_offset": 2048, 00:09:58.985 "data_size": 63488 00:09:58.985 } 00:09:58.985 ] 00:09:58.985 }' 00:09:58.985 21:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:58.985 21:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:59.576 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:09:59.576 [2024-05-14 21:52:00.110810] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.576 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:09:59.576 "name": "Existed_Raid", 00:09:59.576 "aliases": [ 00:09:59.576 "2fb2a014-123c-11ef-8c90-4585f0cfab08" 00:09:59.576 ], 00:09:59.576 "product_name": "Raid Volume", 00:09:59.576 "block_size": 512, 00:09:59.576 "num_blocks": 63488, 00:09:59.576 "uuid": "2fb2a014-123c-11ef-8c90-4585f0cfab08", 00:09:59.576 "assigned_rate_limits": { 00:09:59.576 "rw_ios_per_sec": 0, 00:09:59.576 "rw_mbytes_per_sec": 0, 00:09:59.576 "r_mbytes_per_sec": 0, 00:09:59.576 "w_mbytes_per_sec": 0 00:09:59.576 }, 00:09:59.576 "claimed": false, 00:09:59.576 "zoned": false, 00:09:59.576 "supported_io_types": { 00:09:59.576 "read": true, 00:09:59.576 "write": true, 00:09:59.576 "unmap": false, 00:09:59.576 "write_zeroes": true, 00:09:59.576 "flush": false, 00:09:59.576 "reset": true, 00:09:59.576 "compare": false, 00:09:59.576 "compare_and_write": false, 00:09:59.576 "abort": false, 00:09:59.576 "nvme_admin": false, 00:09:59.576 "nvme_io": false 00:09:59.576 }, 00:09:59.576 "memory_domains": [ 00:09:59.576 { 00:09:59.576 "dma_device_id": "system", 00:09:59.576 "dma_device_type": 1 00:09:59.576 }, 00:09:59.576 { 00:09:59.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.576 "dma_device_type": 2 00:09:59.576 }, 00:09:59.576 { 00:09:59.576 "dma_device_id": "system", 00:09:59.576 "dma_device_type": 1 00:09:59.576 }, 00:09:59.576 { 00:09:59.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.576 "dma_device_type": 2 00:09:59.576 } 00:09:59.576 ], 00:09:59.576 "driver_specific": { 00:09:59.576 "raid": { 00:09:59.576 "uuid": "2fb2a014-123c-11ef-8c90-4585f0cfab08", 00:09:59.576 "strip_size_kb": 0, 00:09:59.576 "state": "online", 00:09:59.576 "raid_level": "raid1", 00:09:59.576 "superblock": true, 00:09:59.576 "num_base_bdevs": 2, 00:09:59.576 "num_base_bdevs_discovered": 2, 00:09:59.576 "num_base_bdevs_operational": 2, 00:09:59.576 "base_bdevs_list": [ 00:09:59.576 { 00:09:59.576 "name": "BaseBdev1", 00:09:59.576 "uuid": "2ebd4389-123c-11ef-8c90-4585f0cfab08", 00:09:59.576 "is_configured": true, 00:09:59.576 "data_offset": 2048, 00:09:59.576 "data_size": 63488 00:09:59.576 }, 00:09:59.576 { 00:09:59.576 "name": "BaseBdev2", 00:09:59.576 
"uuid": "303b584d-123c-11ef-8c90-4585f0cfab08", 00:09:59.576 "is_configured": true, 00:09:59.576 "data_offset": 2048, 00:09:59.576 "data_size": 63488 00:09:59.576 } 00:09:59.576 ] 00:09:59.576 } 00:09:59.576 } 00:09:59.576 }' 00:09:59.576 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.576 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:09:59.576 BaseBdev2' 00:09:59.576 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:09:59.576 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:59.576 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:09:59.834 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:09:59.834 "name": "BaseBdev1", 00:09:59.834 "aliases": [ 00:09:59.834 "2ebd4389-123c-11ef-8c90-4585f0cfab08" 00:09:59.834 ], 00:09:59.834 "product_name": "Malloc disk", 00:09:59.834 "block_size": 512, 00:09:59.834 "num_blocks": 65536, 00:09:59.834 "uuid": "2ebd4389-123c-11ef-8c90-4585f0cfab08", 00:09:59.834 "assigned_rate_limits": { 00:09:59.834 "rw_ios_per_sec": 0, 00:09:59.834 "rw_mbytes_per_sec": 0, 00:09:59.834 "r_mbytes_per_sec": 0, 00:09:59.834 "w_mbytes_per_sec": 0 00:09:59.834 }, 00:09:59.834 "claimed": true, 00:09:59.834 "claim_type": "exclusive_write", 00:09:59.834 "zoned": false, 00:09:59.834 "supported_io_types": { 00:09:59.834 "read": true, 00:09:59.834 "write": true, 00:09:59.834 "unmap": true, 00:09:59.834 "write_zeroes": true, 00:09:59.834 "flush": true, 00:09:59.834 "reset": true, 00:09:59.834 "compare": false, 00:09:59.834 "compare_and_write": false, 00:09:59.834 "abort": true, 00:09:59.834 "nvme_admin": false, 00:09:59.834 "nvme_io": false 00:09:59.834 }, 00:09:59.834 "memory_domains": [ 00:09:59.834 { 00:09:59.834 "dma_device_id": "system", 00:09:59.834 "dma_device_type": 1 00:09:59.834 }, 00:09:59.834 { 00:09:59.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.834 "dma_device_type": 2 00:09:59.834 } 00:09:59.834 ], 00:09:59.834 "driver_specific": {} 00:09:59.834 }' 00:09:59.834 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:59.834 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:09:59.834 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:09:59.834 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:09:59.834 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:00.092 
21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:00.092 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:00.350 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:00.350 "name": "BaseBdev2", 00:10:00.350 "aliases": [ 00:10:00.350 "303b584d-123c-11ef-8c90-4585f0cfab08" 00:10:00.350 ], 00:10:00.350 "product_name": "Malloc disk", 00:10:00.350 "block_size": 512, 00:10:00.350 "num_blocks": 65536, 00:10:00.350 "uuid": "303b584d-123c-11ef-8c90-4585f0cfab08", 00:10:00.350 "assigned_rate_limits": { 00:10:00.350 "rw_ios_per_sec": 0, 00:10:00.350 "rw_mbytes_per_sec": 0, 00:10:00.350 "r_mbytes_per_sec": 0, 00:10:00.350 "w_mbytes_per_sec": 0 00:10:00.350 }, 00:10:00.350 "claimed": true, 00:10:00.350 "claim_type": "exclusive_write", 00:10:00.350 "zoned": false, 00:10:00.350 "supported_io_types": { 00:10:00.350 "read": true, 00:10:00.350 "write": true, 00:10:00.350 "unmap": true, 00:10:00.350 "write_zeroes": true, 00:10:00.350 "flush": true, 00:10:00.350 "reset": true, 00:10:00.350 "compare": false, 00:10:00.350 "compare_and_write": false, 00:10:00.350 "abort": true, 00:10:00.350 "nvme_admin": false, 00:10:00.350 "nvme_io": false 00:10:00.350 }, 00:10:00.350 "memory_domains": [ 00:10:00.350 { 00:10:00.350 "dma_device_id": "system", 00:10:00.350 "dma_device_type": 1 00:10:00.350 }, 00:10:00.350 { 00:10:00.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.350 "dma_device_type": 2 00:10:00.350 } 00:10:00.350 ], 00:10:00.350 "driver_specific": {} 00:10:00.350 }' 00:10:00.350 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:00.350 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:00.350 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:00.350 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:00.351 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:00.608 [2024-05-14 21:52:00.970780] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.608 21:52:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.608 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.866 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:00.866 "name": "Existed_Raid", 00:10:00.866 "uuid": "2fb2a014-123c-11ef-8c90-4585f0cfab08", 00:10:00.866 "strip_size_kb": 0, 00:10:00.866 "state": "online", 00:10:00.866 "raid_level": "raid1", 00:10:00.866 "superblock": true, 00:10:00.866 "num_base_bdevs": 2, 00:10:00.866 "num_base_bdevs_discovered": 1, 00:10:00.866 "num_base_bdevs_operational": 1, 00:10:00.866 "base_bdevs_list": [ 00:10:00.866 { 00:10:00.866 "name": null, 00:10:00.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.866 "is_configured": false, 00:10:00.866 "data_offset": 2048, 00:10:00.866 "data_size": 63488 00:10:00.866 }, 00:10:00.866 { 00:10:00.866 "name": "BaseBdev2", 00:10:00.866 "uuid": "303b584d-123c-11ef-8c90-4585f0cfab08", 00:10:00.866 "is_configured": true, 00:10:00.866 "data_offset": 2048, 00:10:00.866 "data_size": 63488 00:10:00.866 } 00:10:00.866 ] 00:10:00.866 }' 00:10:00.866 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:00.866 21:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.123 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:01.123 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.123 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.123 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:10:01.380 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:01.380 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.380 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:01.637 [2024-05-14 21:52:02.024500] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.637 [2024-05-14 21:52:02.024538] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.637 [2024-05-14 21:52:02.030594] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.637 [2024-05-14 21:52:02.030637] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.637 [2024-05-14 21:52:02.030642] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ce3c300 name Existed_Raid, state offline 00:10:01.638 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.638 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.638 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.638 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 50906 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 50906 ']' 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 50906 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 50906 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:10:01.896 killing process with pid 50906 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50906' 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 50906 00:10:01.896 [2024-05-14 21:52:02.326532] bdev_raid.c:1358:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:10:01.896 [2024-05-14 21:52:02.326566] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.896 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 50906 00:10:02.154 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:10:02.154 00:10:02.154 real 0m8.828s 00:10:02.154 user 0m15.282s 00:10:02.154 sys 0m1.628s 00:10:02.154 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:02.154 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 ************************************ 00:10:02.154 END TEST raid_state_function_test_sb 00:10:02.154 ************************************ 00:10:02.154 21:52:02 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:02.154 21:52:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:02.154 21:52:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:02.154 21:52:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 ************************************ 00:10:02.154 START TEST raid_superblock_test 00:10:02.154 ************************************ 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=51180 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 51180 /var/tmp/spdk-raid.sock 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 51180 ']' 00:10:02.154 21:52:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:02.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:02.154 21:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 [2024-05-14 21:52:02.569889] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:10:02.154 [2024-05-14 21:52:02.570037] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:02.720 EAL: TSC is not safe to use in SMP mode 00:10:02.720 EAL: TSC is not invariant 00:10:02.720 [2024-05-14 21:52:03.106291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.720 [2024-05-14 21:52:03.197379] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:02.720 [2024-05-14 21:52:03.199750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.720 [2024-05-14 21:52:03.200570] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.720 [2024-05-14 21:52:03.200588] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.285 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:03.551 malloc1 00:10:03.552 21:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:03.826 [2024-05-14 21:52:04.205037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:03.826 [2024-05-14 21:52:04.205103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.826 [2024-05-14 21:52:04.205730] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b58c780 00:10:03.826 [2024-05-14 21:52:04.205777] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.826 [2024-05-14 21:52:04.206732] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.826 [2024-05-14 21:52:04.206765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:03.826 pt1 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.826 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:04.083 malloc2 00:10:04.083 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.341 [2024-05-14 21:52:04.745051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.341 [2024-05-14 21:52:04.745116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.341 [2024-05-14 21:52:04.745146] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b58cc80 00:10:04.341 [2024-05-14 21:52:04.745155] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.341 [2024-05-14 21:52:04.745826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.341 [2024-05-14 21:52:04.745862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.341 pt2 00:10:04.341 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.341 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.341 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:10:04.599 [2024-05-14 21:52:04.981046] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:04.599 [2024-05-14 21:52:04.981632] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.600 [2024-05-14 21:52:04.981692] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b591300 00:10:04.600 [2024-05-14 21:52:04.981699] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.600 [2024-05-14 21:52:04.981736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82b5efe20 00:10:04.600 [2024-05-14 21:52:04.981807] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b591300 00:10:04.600 [2024-05-14 21:52:04.981812] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b591300 00:10:04.600 [2024-05-14 21:52:04.981839] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.600 21:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.857 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:04.857 "name": "raid_bdev1", 00:10:04.857 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:04.857 "strip_size_kb": 0, 00:10:04.857 "state": "online", 00:10:04.857 "raid_level": "raid1", 00:10:04.857 "superblock": true, 00:10:04.857 "num_base_bdevs": 2, 00:10:04.857 "num_base_bdevs_discovered": 2, 00:10:04.857 "num_base_bdevs_operational": 2, 00:10:04.857 "base_bdevs_list": [ 00:10:04.857 { 00:10:04.857 "name": "pt1", 00:10:04.857 "uuid": "9bf30449-18dc-dc59-9331-822b2b730952", 00:10:04.857 "is_configured": true, 00:10:04.857 "data_offset": 2048, 00:10:04.857 "data_size": 63488 00:10:04.857 }, 00:10:04.857 { 00:10:04.857 "name": "pt2", 00:10:04.857 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:04.857 "is_configured": true, 00:10:04.857 "data_offset": 2048, 00:10:04.857 "data_size": 63488 00:10:04.857 } 00:10:04.857 ] 00:10:04.857 }' 00:10:04.857 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:04.857 21:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.115 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:05.116 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:10:05.116 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:05.116 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:05.116 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:05.116 21:52:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:05.116 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:05.116 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:05.374 [2024-05-14 21:52:05.805069] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.374 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:05.374 "name": "raid_bdev1", 00:10:05.374 "aliases": [ 00:10:05.374 "33e5cbcf-123c-11ef-8c90-4585f0cfab08" 00:10:05.374 ], 00:10:05.374 "product_name": "Raid Volume", 00:10:05.374 "block_size": 512, 00:10:05.374 "num_blocks": 63488, 00:10:05.374 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:05.374 "assigned_rate_limits": { 00:10:05.374 "rw_ios_per_sec": 0, 00:10:05.374 "rw_mbytes_per_sec": 0, 00:10:05.374 "r_mbytes_per_sec": 0, 00:10:05.374 "w_mbytes_per_sec": 0 00:10:05.374 }, 00:10:05.374 "claimed": false, 00:10:05.374 "zoned": false, 00:10:05.374 "supported_io_types": { 00:10:05.374 "read": true, 00:10:05.374 "write": true, 00:10:05.374 "unmap": false, 00:10:05.374 "write_zeroes": true, 00:10:05.374 "flush": false, 00:10:05.374 "reset": true, 00:10:05.374 "compare": false, 00:10:05.374 "compare_and_write": false, 00:10:05.374 "abort": false, 00:10:05.374 "nvme_admin": false, 00:10:05.374 "nvme_io": false 00:10:05.374 }, 00:10:05.374 "memory_domains": [ 00:10:05.374 { 00:10:05.374 "dma_device_id": "system", 00:10:05.374 "dma_device_type": 1 00:10:05.374 }, 00:10:05.374 { 00:10:05.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.374 "dma_device_type": 2 00:10:05.374 }, 00:10:05.374 { 00:10:05.374 "dma_device_id": "system", 00:10:05.374 "dma_device_type": 1 00:10:05.374 }, 00:10:05.374 { 00:10:05.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.374 "dma_device_type": 2 00:10:05.374 } 00:10:05.374 ], 00:10:05.374 "driver_specific": { 00:10:05.374 "raid": { 00:10:05.374 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:05.374 "strip_size_kb": 0, 00:10:05.374 "state": "online", 00:10:05.374 "raid_level": "raid1", 00:10:05.374 "superblock": true, 00:10:05.374 "num_base_bdevs": 2, 00:10:05.374 "num_base_bdevs_discovered": 2, 00:10:05.374 "num_base_bdevs_operational": 2, 00:10:05.374 "base_bdevs_list": [ 00:10:05.374 { 00:10:05.374 "name": "pt1", 00:10:05.374 "uuid": "9bf30449-18dc-dc59-9331-822b2b730952", 00:10:05.374 "is_configured": true, 00:10:05.374 "data_offset": 2048, 00:10:05.374 "data_size": 63488 00:10:05.374 }, 00:10:05.374 { 00:10:05.374 "name": "pt2", 00:10:05.374 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:05.374 "is_configured": true, 00:10:05.374 "data_offset": 2048, 00:10:05.374 "data_size": 63488 00:10:05.374 } 00:10:05.374 ] 00:10:05.374 } 00:10:05.374 } 00:10:05.374 }' 00:10:05.374 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.374 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:10:05.374 pt2' 00:10:05.374 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:05.374 21:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:05.374 21:52:05 
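The per-bdev property checks recorded around this point can be reproduced by hand. A minimal bash sketch against the same RPC socket, assuming pt1 already exists; the $RPC shorthand and the herestring reuse of the captured JSON are additions for brevity, not part of the test script:

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Dump the pt1 passthru bdev and check the same fields the verify steps use:
    # a 512-byte block size and no metadata, interleave, or DIF configured.
    info=$($RPC bdev_get_bdevs -b pt1 | jq '.[]')
    jq .block_size    <<< "$info"   # expected: 512
    jq .md_size       <<< "$info"   # expected: null
    jq .md_interleave <<< "$info"   # expected: null
    jq .dif_type      <<< "$info"   # expected: null
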
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:05.632 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:05.632 "name": "pt1", 00:10:05.632 "aliases": [ 00:10:05.632 "9bf30449-18dc-dc59-9331-822b2b730952" 00:10:05.632 ], 00:10:05.632 "product_name": "passthru", 00:10:05.632 "block_size": 512, 00:10:05.632 "num_blocks": 65536, 00:10:05.632 "uuid": "9bf30449-18dc-dc59-9331-822b2b730952", 00:10:05.632 "assigned_rate_limits": { 00:10:05.632 "rw_ios_per_sec": 0, 00:10:05.632 "rw_mbytes_per_sec": 0, 00:10:05.632 "r_mbytes_per_sec": 0, 00:10:05.632 "w_mbytes_per_sec": 0 00:10:05.632 }, 00:10:05.632 "claimed": true, 00:10:05.632 "claim_type": "exclusive_write", 00:10:05.632 "zoned": false, 00:10:05.632 "supported_io_types": { 00:10:05.632 "read": true, 00:10:05.632 "write": true, 00:10:05.632 "unmap": true, 00:10:05.632 "write_zeroes": true, 00:10:05.632 "flush": true, 00:10:05.632 "reset": true, 00:10:05.632 "compare": false, 00:10:05.632 "compare_and_write": false, 00:10:05.632 "abort": true, 00:10:05.632 "nvme_admin": false, 00:10:05.632 "nvme_io": false 00:10:05.632 }, 00:10:05.632 "memory_domains": [ 00:10:05.632 { 00:10:05.632 "dma_device_id": "system", 00:10:05.632 "dma_device_type": 1 00:10:05.632 }, 00:10:05.632 { 00:10:05.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.632 "dma_device_type": 2 00:10:05.632 } 00:10:05.632 ], 00:10:05.632 "driver_specific": { 00:10:05.632 "passthru": { 00:10:05.632 "name": "pt1", 00:10:05.632 "base_bdev_name": "malloc1" 00:10:05.632 } 00:10:05.632 } 00:10:05.632 }' 00:10:05.632 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:05.632 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:05.633 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:05.890 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:05.891 "name": "pt2", 00:10:05.891 "aliases": [ 00:10:05.891 "7868e347-4151-3a5e-95f4-d6125017c88e" 00:10:05.891 ], 00:10:05.891 "product_name": "passthru", 00:10:05.891 "block_size": 512, 00:10:05.891 "num_blocks": 65536, 00:10:05.891 "uuid": 
"7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:05.891 "assigned_rate_limits": { 00:10:05.891 "rw_ios_per_sec": 0, 00:10:05.891 "rw_mbytes_per_sec": 0, 00:10:05.891 "r_mbytes_per_sec": 0, 00:10:05.891 "w_mbytes_per_sec": 0 00:10:05.891 }, 00:10:05.891 "claimed": true, 00:10:05.891 "claim_type": "exclusive_write", 00:10:05.891 "zoned": false, 00:10:05.891 "supported_io_types": { 00:10:05.891 "read": true, 00:10:05.891 "write": true, 00:10:05.891 "unmap": true, 00:10:05.891 "write_zeroes": true, 00:10:05.891 "flush": true, 00:10:05.891 "reset": true, 00:10:05.891 "compare": false, 00:10:05.891 "compare_and_write": false, 00:10:05.891 "abort": true, 00:10:05.891 "nvme_admin": false, 00:10:05.891 "nvme_io": false 00:10:05.891 }, 00:10:05.891 "memory_domains": [ 00:10:05.891 { 00:10:05.891 "dma_device_id": "system", 00:10:05.891 "dma_device_type": 1 00:10:05.891 }, 00:10:05.891 { 00:10:05.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.891 "dma_device_type": 2 00:10:05.891 } 00:10:05.891 ], 00:10:05.891 "driver_specific": { 00:10:05.891 "passthru": { 00:10:05.891 "name": "pt2", 00:10:05.891 "base_bdev_name": "malloc2" 00:10:05.891 } 00:10:05.891 } 00:10:05.891 }' 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:05.891 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:06.149 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:06.150 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:06.150 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:06.150 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:06.408 [2024-05-14 21:52:06.757062] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.408 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=33e5cbcf-123c-11ef-8c90-4585f0cfab08 00:10:06.408 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 33e5cbcf-123c-11ef-8c90-4585f0cfab08 ']' 00:10:06.408 21:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:06.666 [2024-05-14 21:52:07.029022] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.666 [2024-05-14 21:52:07.029046] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.666 [2024-05-14 21:52:07.029067] bdev_raid.c: 
448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.666 [2024-05-14 21:52:07.029081] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.666 [2024-05-14 21:52:07.029086] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b591300 name raid_bdev1, state offline 00:10:06.666 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.666 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:06.925 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:06.925 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:06.925 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.925 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:07.183 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.183 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:07.442 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:07.442 21:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:07.442 21:52:08 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:10:07.700 [2024-05-14 21:52:08.281034] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:07.700 [2024-05-14 21:52:08.281620] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:07.700 [2024-05-14 21:52:08.281638] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:07.700 [2024-05-14 21:52:08.281679] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:07.700 [2024-05-14 21:52:08.281691] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.700 [2024-05-14 21:52:08.281695] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b591300 name raid_bdev1, state configuring 00:10:07.700 request: 00:10:07.700 { 00:10:07.700 "name": "raid_bdev1", 00:10:07.700 "raid_level": "raid1", 00:10:07.700 "base_bdevs": [ 00:10:07.700 "malloc1", 00:10:07.700 "malloc2" 00:10:07.700 ], 00:10:07.700 "superblock": false, 00:10:07.700 "method": "bdev_raid_create", 00:10:07.700 "req_id": 1 00:10:07.700 } 00:10:07.700 Got JSON-RPC error response 00:10:07.700 response: 00:10:07.700 { 00:10:07.700 "code": -17, 00:10:07.700 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:07.700 } 00:10:07.967 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:07.967 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:07.967 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:07.967 21:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:07.967 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.967 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:08.238 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:08.238 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:08.238 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:08.238 [2024-05-14 21:52:08.773021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:08.238 [2024-05-14 21:52:08.773078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.238 [2024-05-14 21:52:08.773106] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b58cc80 00:10:08.238 [2024-05-14 21:52:08.773115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.238 [2024-05-14 21:52:08.773775] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.238 [2024-05-14 21:52:08.773804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:08.238 [2024-05-14 21:52:08.773829] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:08.238 [2024-05-14 21:52:08.773844] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:08.238 pt1 00:10:08.238 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.239 21:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.497 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:08.497 "name": "raid_bdev1", 00:10:08.497 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:08.497 "strip_size_kb": 0, 00:10:08.497 "state": "configuring", 00:10:08.497 "raid_level": "raid1", 00:10:08.497 "superblock": true, 00:10:08.497 "num_base_bdevs": 2, 00:10:08.497 "num_base_bdevs_discovered": 1, 00:10:08.497 "num_base_bdevs_operational": 2, 00:10:08.497 "base_bdevs_list": [ 00:10:08.497 { 00:10:08.498 "name": "pt1", 00:10:08.498 "uuid": "9bf30449-18dc-dc59-9331-822b2b730952", 00:10:08.498 "is_configured": true, 00:10:08.498 "data_offset": 2048, 00:10:08.498 "data_size": 63488 00:10:08.498 }, 00:10:08.498 { 00:10:08.498 "name": null, 00:10:08.498 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:08.498 "is_configured": false, 00:10:08.498 "data_offset": 2048, 00:10:08.498 "data_size": 63488 00:10:08.498 } 00:10:08.498 ] 00:10:08.498 }' 00:10:08.498 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:08.498 21:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.756 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:08.756 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:08.756 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.756 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.015 [2024-05-14 21:52:09.513018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.015 [2024-05-14 21:52:09.513073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.015 [2024-05-14 21:52:09.513101] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x82b58cf00 00:10:09.015 [2024-05-14 21:52:09.513110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.015 [2024-05-14 21:52:09.513226] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.015 [2024-05-14 21:52:09.513246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.015 [2024-05-14 21:52:09.513271] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:09.015 [2024-05-14 21:52:09.513280] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.015 [2024-05-14 21:52:09.513308] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b591300 00:10:09.015 [2024-05-14 21:52:09.513312] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.015 [2024-05-14 21:52:09.513347] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b5efe20 00:10:09.015 [2024-05-14 21:52:09.513402] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b591300 00:10:09.015 [2024-05-14 21:52:09.513407] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b591300 00:10:09.015 [2024-05-14 21:52:09.513429] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.015 pt2 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.015 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.274 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:09.274 "name": "raid_bdev1", 00:10:09.274 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:09.274 "strip_size_kb": 0, 00:10:09.274 "state": "online", 00:10:09.274 "raid_level": "raid1", 00:10:09.274 "superblock": true, 00:10:09.274 "num_base_bdevs": 2, 00:10:09.274 "num_base_bdevs_discovered": 2, 00:10:09.274 "num_base_bdevs_operational": 2, 00:10:09.274 "base_bdevs_list": [ 00:10:09.274 { 00:10:09.274 
"name": "pt1", 00:10:09.274 "uuid": "9bf30449-18dc-dc59-9331-822b2b730952", 00:10:09.274 "is_configured": true, 00:10:09.274 "data_offset": 2048, 00:10:09.274 "data_size": 63488 00:10:09.274 }, 00:10:09.274 { 00:10:09.274 "name": "pt2", 00:10:09.274 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:09.274 "is_configured": true, 00:10:09.274 "data_offset": 2048, 00:10:09.274 "data_size": 63488 00:10:09.274 } 00:10:09.274 ] 00:10:09.274 }' 00:10:09.274 21:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:09.274 21:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:09.533 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:09.792 [2024-05-14 21:52:10.349049] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.792 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:09.792 "name": "raid_bdev1", 00:10:09.792 "aliases": [ 00:10:09.792 "33e5cbcf-123c-11ef-8c90-4585f0cfab08" 00:10:09.792 ], 00:10:09.792 "product_name": "Raid Volume", 00:10:09.792 "block_size": 512, 00:10:09.792 "num_blocks": 63488, 00:10:09.792 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:09.792 "assigned_rate_limits": { 00:10:09.792 "rw_ios_per_sec": 0, 00:10:09.792 "rw_mbytes_per_sec": 0, 00:10:09.792 "r_mbytes_per_sec": 0, 00:10:09.792 "w_mbytes_per_sec": 0 00:10:09.792 }, 00:10:09.792 "claimed": false, 00:10:09.792 "zoned": false, 00:10:09.792 "supported_io_types": { 00:10:09.792 "read": true, 00:10:09.792 "write": true, 00:10:09.792 "unmap": false, 00:10:09.792 "write_zeroes": true, 00:10:09.792 "flush": false, 00:10:09.792 "reset": true, 00:10:09.792 "compare": false, 00:10:09.792 "compare_and_write": false, 00:10:09.792 "abort": false, 00:10:09.792 "nvme_admin": false, 00:10:09.792 "nvme_io": false 00:10:09.792 }, 00:10:09.792 "memory_domains": [ 00:10:09.792 { 00:10:09.792 "dma_device_id": "system", 00:10:09.792 "dma_device_type": 1 00:10:09.792 }, 00:10:09.792 { 00:10:09.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.792 "dma_device_type": 2 00:10:09.792 }, 00:10:09.792 { 00:10:09.792 "dma_device_id": "system", 00:10:09.792 "dma_device_type": 1 00:10:09.792 }, 00:10:09.792 { 00:10:09.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.792 "dma_device_type": 2 00:10:09.792 } 00:10:09.792 ], 00:10:09.792 "driver_specific": { 00:10:09.792 "raid": { 00:10:09.792 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:09.792 "strip_size_kb": 0, 00:10:09.792 "state": "online", 00:10:09.792 "raid_level": "raid1", 00:10:09.792 "superblock": true, 00:10:09.792 "num_base_bdevs": 2, 00:10:09.792 
"num_base_bdevs_discovered": 2, 00:10:09.793 "num_base_bdevs_operational": 2, 00:10:09.793 "base_bdevs_list": [ 00:10:09.793 { 00:10:09.793 "name": "pt1", 00:10:09.793 "uuid": "9bf30449-18dc-dc59-9331-822b2b730952", 00:10:09.793 "is_configured": true, 00:10:09.793 "data_offset": 2048, 00:10:09.793 "data_size": 63488 00:10:09.793 }, 00:10:09.793 { 00:10:09.793 "name": "pt2", 00:10:09.793 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:09.793 "is_configured": true, 00:10:09.793 "data_offset": 2048, 00:10:09.793 "data_size": 63488 00:10:09.793 } 00:10:09.793 ] 00:10:09.793 } 00:10:09.793 } 00:10:09.793 }' 00:10:09.793 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.793 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:10:09.793 pt2' 00:10:09.793 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:09.793 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:09.793 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:10.361 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:10.361 "name": "pt1", 00:10:10.361 "aliases": [ 00:10:10.361 "9bf30449-18dc-dc59-9331-822b2b730952" 00:10:10.361 ], 00:10:10.361 "product_name": "passthru", 00:10:10.361 "block_size": 512, 00:10:10.361 "num_blocks": 65536, 00:10:10.361 "uuid": "9bf30449-18dc-dc59-9331-822b2b730952", 00:10:10.361 "assigned_rate_limits": { 00:10:10.361 "rw_ios_per_sec": 0, 00:10:10.361 "rw_mbytes_per_sec": 0, 00:10:10.361 "r_mbytes_per_sec": 0, 00:10:10.361 "w_mbytes_per_sec": 0 00:10:10.361 }, 00:10:10.361 "claimed": true, 00:10:10.361 "claim_type": "exclusive_write", 00:10:10.361 "zoned": false, 00:10:10.361 "supported_io_types": { 00:10:10.361 "read": true, 00:10:10.361 "write": true, 00:10:10.361 "unmap": true, 00:10:10.361 "write_zeroes": true, 00:10:10.361 "flush": true, 00:10:10.361 "reset": true, 00:10:10.361 "compare": false, 00:10:10.361 "compare_and_write": false, 00:10:10.361 "abort": true, 00:10:10.361 "nvme_admin": false, 00:10:10.361 "nvme_io": false 00:10:10.362 }, 00:10:10.362 "memory_domains": [ 00:10:10.362 { 00:10:10.362 "dma_device_id": "system", 00:10:10.362 "dma_device_type": 1 00:10:10.362 }, 00:10:10.362 { 00:10:10.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.362 "dma_device_type": 2 00:10:10.362 } 00:10:10.362 ], 00:10:10.362 "driver_specific": { 00:10:10.362 "passthru": { 00:10:10.362 "name": "pt1", 00:10:10.362 "base_bdev_name": "malloc1" 00:10:10.362 } 00:10:10.362 } 00:10:10.362 }' 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.362 21:52:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:10.362 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:10.621 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:10.621 "name": "pt2", 00:10:10.621 "aliases": [ 00:10:10.621 "7868e347-4151-3a5e-95f4-d6125017c88e" 00:10:10.621 ], 00:10:10.621 "product_name": "passthru", 00:10:10.621 "block_size": 512, 00:10:10.621 "num_blocks": 65536, 00:10:10.621 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:10.621 "assigned_rate_limits": { 00:10:10.621 "rw_ios_per_sec": 0, 00:10:10.621 "rw_mbytes_per_sec": 0, 00:10:10.621 "r_mbytes_per_sec": 0, 00:10:10.621 "w_mbytes_per_sec": 0 00:10:10.621 }, 00:10:10.621 "claimed": true, 00:10:10.621 "claim_type": "exclusive_write", 00:10:10.621 "zoned": false, 00:10:10.621 "supported_io_types": { 00:10:10.621 "read": true, 00:10:10.621 "write": true, 00:10:10.621 "unmap": true, 00:10:10.621 "write_zeroes": true, 00:10:10.621 "flush": true, 00:10:10.621 "reset": true, 00:10:10.621 "compare": false, 00:10:10.621 "compare_and_write": false, 00:10:10.621 "abort": true, 00:10:10.621 "nvme_admin": false, 00:10:10.621 "nvme_io": false 00:10:10.621 }, 00:10:10.621 "memory_domains": [ 00:10:10.621 { 00:10:10.621 "dma_device_id": "system", 00:10:10.621 "dma_device_type": 1 00:10:10.621 }, 00:10:10.621 { 00:10:10.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.621 "dma_device_type": 2 00:10:10.621 } 00:10:10.621 ], 00:10:10.621 "driver_specific": { 00:10:10.621 "passthru": { 00:10:10.621 "name": "pt2", 00:10:10.621 "base_bdev_name": "malloc2" 00:10:10.621 } 00:10:10.621 } 00:10:10.621 }' 00:10:10.621 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:10.621 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:10.621 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:10.621 21:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # [[ null == null ]] 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:10.621 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:10.880 [2024-05-14 21:52:11.301047] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.880 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 33e5cbcf-123c-11ef-8c90-4585f0cfab08 '!=' 33e5cbcf-123c-11ef-8c90-4585f0cfab08 ']' 00:10:10.880 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:10.880 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:10.880 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:10:10.880 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:11.139 [2024-05-14 21:52:11.573021] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.140 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.398 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:11.398 "name": "raid_bdev1", 00:10:11.398 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:11.398 "strip_size_kb": 0, 00:10:11.398 "state": "online", 00:10:11.398 "raid_level": "raid1", 00:10:11.398 "superblock": true, 00:10:11.399 "num_base_bdevs": 2, 00:10:11.399 "num_base_bdevs_discovered": 1, 00:10:11.399 "num_base_bdevs_operational": 1, 00:10:11.399 "base_bdevs_list": [ 00:10:11.399 { 00:10:11.399 "name": null, 00:10:11.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.399 "is_configured": false, 00:10:11.399 "data_offset": 2048, 00:10:11.399 "data_size": 63488 00:10:11.399 }, 00:10:11.399 { 00:10:11.399 "name": "pt2", 00:10:11.399 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:11.399 "is_configured": true, 00:10:11.399 "data_offset": 2048, 00:10:11.399 "data_size": 63488 00:10:11.399 } 00:10:11.399 ] 00:10:11.399 }' 
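The step the log just recorded, deleting pt1 out of an online raid1 volume and confirming the volume survives, can be replayed with the same RPCs. A minimal sketch; only the $RPC shorthand and the final jq selection differ from the calls shown above:

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # raid1 tolerates losing one member: removing the pt1 passthru bdev drops a
    # base bdev from raid_bdev1, but the volume is expected to stay "online".
    $RPC bdev_passthru_delete pt1

    # The remaining state should match the JSON dumped above: state "online",
    # num_base_bdevs_discovered 1, and a null name in the vacated slot.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
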
00:10:11.399 21:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:11.399 21:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.657 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:11.915 [2024-05-14 21:52:12.353008] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.915 [2024-05-14 21:52:12.353033] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.915 [2024-05-14 21:52:12.353056] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.915 [2024-05-14 21:52:12.353067] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.915 [2024-05-14 21:52:12.353072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b591300 name raid_bdev1, state offline 00:10:11.915 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.915 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:12.173 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:12.173 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:12.173 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:12.173 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:12.173 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:12.431 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:12.431 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:12.431 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:12.431 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:12.431 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:10:12.431 21:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.720 [2024-05-14 21:52:13.109009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.720 [2024-05-14 21:52:13.109065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.720 [2024-05-14 21:52:13.109092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b58cf00 00:10:12.720 [2024-05-14 21:52:13.109101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.720 [2024-05-14 21:52:13.109772] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.720 [2024-05-14 21:52:13.109815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.720 [2024-05-14 21:52:13.109842] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:12.720 [2024-05-14 21:52:13.109854] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.720 [2024-05-14 21:52:13.109879] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b591300 00:10:12.720 [2024-05-14 21:52:13.109883] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.720 [2024-05-14 21:52:13.109919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b5efe20 00:10:12.720 [2024-05-14 21:52:13.109979] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b591300 00:10:12.720 [2024-05-14 21:52:13.109989] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b591300 00:10:12.720 [2024-05-14 21:52:13.110012] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.720 pt2 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.720 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.005 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:13.005 "name": "raid_bdev1", 00:10:13.005 "uuid": "33e5cbcf-123c-11ef-8c90-4585f0cfab08", 00:10:13.005 "strip_size_kb": 0, 00:10:13.005 "state": "online", 00:10:13.005 "raid_level": "raid1", 00:10:13.005 "superblock": true, 00:10:13.005 "num_base_bdevs": 2, 00:10:13.005 "num_base_bdevs_discovered": 1, 00:10:13.005 "num_base_bdevs_operational": 1, 00:10:13.005 "base_bdevs_list": [ 00:10:13.005 { 00:10:13.005 "name": null, 00:10:13.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.005 "is_configured": false, 00:10:13.005 "data_offset": 2048, 00:10:13.005 "data_size": 63488 00:10:13.005 }, 00:10:13.005 { 00:10:13.005 "name": "pt2", 00:10:13.005 "uuid": "7868e347-4151-3a5e-95f4-d6125017c88e", 00:10:13.005 "is_configured": true, 00:10:13.005 "data_offset": 2048, 00:10:13.005 "data_size": 63488 00:10:13.005 } 00:10:13.005 ] 00:10:13.005 }' 00:10:13.005 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:13.005 21:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.264 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:10:13.264 21:52:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:13.264 21:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:10:13.523 [2024-05-14 21:52:14.029041] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 33e5cbcf-123c-11ef-8c90-4585f0cfab08 '!=' 33e5cbcf-123c-11ef-8c90-4585f0cfab08 ']' 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 51180 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 51180 ']' 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 51180 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 51180 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:10:13.523 killing process with pid 51180 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51180' 00:10:13.523 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 51180 00:10:13.523 [2024-05-14 21:52:14.058103] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.523 [2024-05-14 21:52:14.058123] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.524 [2024-05-14 21:52:14.058134] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.524 [2024-05-14 21:52:14.058139] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b591300 name raid_bdev1, state offline 00:10:13.524 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 51180 00:10:13.524 [2024-05-14 21:52:14.070003] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.782 21:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:10:13.782 00:10:13.782 real 0m11.689s 00:10:13.782 user 0m20.879s 00:10:13.782 sys 0m1.721s 00:10:13.782 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.782 21:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.782 ************************************ 00:10:13.782 END TEST raid_superblock_test 00:10:13.782 ************************************ 00:10:13.782 21:52:14 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:10:13.782 21:52:14 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:10:13.782 21:52:14 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:13.782 21:52:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:13.782 21:52:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 
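With raid_superblock_test finished, the flow it exercised condenses to a short RPC sequence. A sketch under the same socket, sizes, and UUIDs shown above; the $RPC variable is an added shorthand and the sequence is a summary of the test, not a verbatim excerpt:

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two 32 MiB, 512-byte-block malloc bdevs, each wrapped in a passthru bdev
    # with a fixed UUID so the raid superblock can recognize it on reassembly.
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 512 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # -s writes an on-disk superblock to the members; the data_offset of 2048 in
    # the dumps above is the space it reserves (65536 raw blocks, 63488 usable).
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

    # Tear down the volume and its passthru wrappers, then create the wrappers
    # again: examine finds the superblock on each member and re-assembles
    # raid_bdev1 automatically. (Trying bdev_raid_create over the raw malloc
    # bdevs instead fails with the "File exists" error shown above.)
    $RPC bdev_raid_delete raid_bdev1
    $RPC bdev_passthru_delete pt1
    $RPC bdev_passthru_delete pt2
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
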
00:10:13.782 21:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.782 ************************************ 00:10:13.782 START TEST raid_state_function_test 00:10:13.782 ************************************ 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=51525 00:10:13.782 Process raid pid: 51525 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 51525' 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 51525 /var/tmp/spdk-raid.sock 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 51525 ']' 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:13.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:13.782 21:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.782 [2024-05-14 21:52:14.309927] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:10:13.782 [2024-05-14 21:52:14.310143] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:14.348 EAL: TSC is not safe to use in SMP mode 00:10:14.348 EAL: TSC is not invariant 00:10:14.348 [2024-05-14 21:52:14.869589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.610 [2024-05-14 21:52:14.975774] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:14.610 [2024-05-14 21:52:14.979167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.610 [2024-05-14 21:52:14.980129] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.610 [2024-05-14 21:52:14.980146] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.878 21:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:14.878 21:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:10:14.878 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:15.135 [2024-05-14 21:52:15.598404] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.135 [2024-05-14 21:52:15.598473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.135 [2024-05-14 21:52:15.598479] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.135 [2024-05-14 21:52:15.598488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.135 [2024-05-14 21:52:15.598492] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.135 [2024-05-14 21:52:15.598500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
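The raid_state_function_test starting here first builds a raid0 volume whose members do not yet exist. A small sketch of that opening step, reusing the RPCs shown in this log; the trailing jq filter on .state is an illustrative addition:

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Creating the raid0 volume (64 KiB strip size) before any member exists is
    # allowed: the RPC records the three names and leaves Existed_Raid in the
    # "configuring" state, as the verify step below expects.
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Register the first member: it gets claimed immediately, but the volume
    # stays "configuring" until all three base bdevs have been discovered.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
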
00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.135 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.396 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:15.396 "name": "Existed_Raid", 00:10:15.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.396 "strip_size_kb": 64, 00:10:15.396 "state": "configuring", 00:10:15.396 "raid_level": "raid0", 00:10:15.396 "superblock": false, 00:10:15.396 "num_base_bdevs": 3, 00:10:15.396 "num_base_bdevs_discovered": 0, 00:10:15.396 "num_base_bdevs_operational": 3, 00:10:15.396 "base_bdevs_list": [ 00:10:15.396 { 00:10:15.396 "name": "BaseBdev1", 00:10:15.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.396 "is_configured": false, 00:10:15.396 "data_offset": 0, 00:10:15.396 "data_size": 0 00:10:15.396 }, 00:10:15.396 { 00:10:15.396 "name": "BaseBdev2", 00:10:15.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.396 "is_configured": false, 00:10:15.396 "data_offset": 0, 00:10:15.396 "data_size": 0 00:10:15.396 }, 00:10:15.396 { 00:10:15.396 "name": "BaseBdev3", 00:10:15.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.396 "is_configured": false, 00:10:15.396 "data_offset": 0, 00:10:15.396 "data_size": 0 00:10:15.396 } 00:10:15.396 ] 00:10:15.396 }' 00:10:15.396 21:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:15.396 21:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.654 21:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:15.913 [2024-05-14 21:52:16.454394] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.913 [2024-05-14 21:52:16.454425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ade4300 name Existed_Raid, state configuring 00:10:15.913 21:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:16.171 [2024-05-14 21:52:16.722399] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.171 [2024-05-14 21:52:16.722458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:10:16.171 [2024-05-14 21:52:16.722464] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.171 [2024-05-14 21:52:16.722473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.171 [2024-05-14 21:52:16.722477] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.171 [2024-05-14 21:52:16.722484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.171 21:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.429 [2024-05-14 21:52:16.991781] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.429 BaseBdev1 00:10:16.429 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:10:16.429 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:16.429 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:16.429 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:16.429 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:16.429 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:16.429 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:16.687 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.944 [ 00:10:16.944 { 00:10:16.944 "name": "BaseBdev1", 00:10:16.944 "aliases": [ 00:10:16.944 "3b0e48ca-123c-11ef-8c90-4585f0cfab08" 00:10:16.944 ], 00:10:16.944 "product_name": "Malloc disk", 00:10:16.944 "block_size": 512, 00:10:16.944 "num_blocks": 65536, 00:10:16.944 "uuid": "3b0e48ca-123c-11ef-8c90-4585f0cfab08", 00:10:16.944 "assigned_rate_limits": { 00:10:16.944 "rw_ios_per_sec": 0, 00:10:16.944 "rw_mbytes_per_sec": 0, 00:10:16.944 "r_mbytes_per_sec": 0, 00:10:16.944 "w_mbytes_per_sec": 0 00:10:16.944 }, 00:10:16.944 "claimed": true, 00:10:16.944 "claim_type": "exclusive_write", 00:10:16.944 "zoned": false, 00:10:16.944 "supported_io_types": { 00:10:16.944 "read": true, 00:10:16.944 "write": true, 00:10:16.944 "unmap": true, 00:10:16.944 "write_zeroes": true, 00:10:16.944 "flush": true, 00:10:16.944 "reset": true, 00:10:16.944 "compare": false, 00:10:16.944 "compare_and_write": false, 00:10:16.944 "abort": true, 00:10:16.944 "nvme_admin": false, 00:10:16.944 "nvme_io": false 00:10:16.944 }, 00:10:16.944 "memory_domains": [ 00:10:16.944 { 00:10:16.944 "dma_device_id": "system", 00:10:16.944 "dma_device_type": 1 00:10:16.944 }, 00:10:16.944 { 00:10:16.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.944 "dma_device_type": 2 00:10:16.944 } 00:10:16.944 ], 00:10:16.944 "driver_specific": {} 00:10:16.944 } 00:10:16.944 ] 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid 
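
waitforbdev, as traced here, is just bdev_wait_for_examine followed by a bdev_get_bdevs lookup with a timeout; the base bdev itself comes from bdev_malloc_create with the sizes reported in the descriptor (512-byte blocks, 65536 blocks, i.e. a 32 MiB disk). A rough equivalent of the BaseBdev1 step, on the same socket, with an illustrative jq check appended:

    rpc_py="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MiB malloc bdev with 512-byte blocks -> the 65536 num_blocks seen in the dump
    $rpc_py bdev_malloc_create 32 512 -b BaseBdev1
    # Let examine callbacks finish, then look the bdev up (the helper passes -t 2000)
    $rpc_py bdev_wait_for_examine
    $rpc_py bdev_get_bdevs -b BaseBdev1 -t 2000 | jq -e '.[0].num_blocks == 65536'
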
configuring raid0 64 3 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.202 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.460 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:17.460 "name": "Existed_Raid", 00:10:17.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.460 "strip_size_kb": 64, 00:10:17.460 "state": "configuring", 00:10:17.460 "raid_level": "raid0", 00:10:17.460 "superblock": false, 00:10:17.460 "num_base_bdevs": 3, 00:10:17.460 "num_base_bdevs_discovered": 1, 00:10:17.460 "num_base_bdevs_operational": 3, 00:10:17.460 "base_bdevs_list": [ 00:10:17.460 { 00:10:17.460 "name": "BaseBdev1", 00:10:17.460 "uuid": "3b0e48ca-123c-11ef-8c90-4585f0cfab08", 00:10:17.460 "is_configured": true, 00:10:17.460 "data_offset": 0, 00:10:17.460 "data_size": 65536 00:10:17.460 }, 00:10:17.460 { 00:10:17.460 "name": "BaseBdev2", 00:10:17.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.460 "is_configured": false, 00:10:17.460 "data_offset": 0, 00:10:17.460 "data_size": 0 00:10:17.460 }, 00:10:17.460 { 00:10:17.460 "name": "BaseBdev3", 00:10:17.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.460 "is_configured": false, 00:10:17.460 "data_offset": 0, 00:10:17.460 "data_size": 0 00:10:17.460 } 00:10:17.460 ] 00:10:17.460 }' 00:10:17.460 21:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:17.460 21:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.718 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:17.976 [2024-05-14 21:52:18.382411] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.976 [2024-05-14 21:52:18.382452] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ade4300 name Existed_Raid, state configuring 00:10:17.976 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:18.288 [2024-05-14 21:52:18.718410] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.288 [2024-05-14 21:52:18.719235] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.288 [2024-05-14 21:52:18.719278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.288 [2024-05-14 21:52:18.719284] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.288 [2024-05-14 21:52:18.719292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.288 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.547 21:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:18.547 "name": "Existed_Raid", 00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.547 "strip_size_kb": 64, 00:10:18.547 "state": "configuring", 00:10:18.547 "raid_level": "raid0", 00:10:18.547 "superblock": false, 00:10:18.547 "num_base_bdevs": 3, 00:10:18.547 "num_base_bdevs_discovered": 1, 00:10:18.547 "num_base_bdevs_operational": 3, 00:10:18.547 "base_bdevs_list": [ 00:10:18.547 { 00:10:18.547 "name": "BaseBdev1", 00:10:18.547 "uuid": "3b0e48ca-123c-11ef-8c90-4585f0cfab08", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 0, 00:10:18.547 "data_size": 65536 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": "BaseBdev2", 00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.547 "is_configured": false, 00:10:18.547 "data_offset": 0, 00:10:18.547 "data_size": 0 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": "BaseBdev3", 00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.547 "is_configured": false, 00:10:18.547 "data_offset": 0, 00:10:18.547 "data_size": 0 00:10:18.547 } 00:10:18.547 ] 00:10:18.547 }' 00:10:18.547 21:52:18 bdev_raid.raid_state_function_test 
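
The (( i = 1 )) / (( i < num_base_bdevs )) lines above are the test's assembly loop: the raid is declared before most of its members exist, then malloc bdevs are created one at a time and the array is expected to stay in "configuring" until the last member is claimed. A condensed sketch of that pattern (same RPCs as in the trace; the for loop and discovered-count probe are illustrative):

    rpc_py="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc_py bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc_py bdev_malloc_create 32 512 -b "$b"
        $rpc_py bdev_wait_for_examine
        # num_base_bdevs_discovered should grow by one per iteration
        $rpc_py bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid").num_base_bdevs_discovered'
    done
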
-- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:18.547 21:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.805 21:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.063 [2024-05-14 21:52:19.570543] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.063 BaseBdev2 00:10:19.063 21:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:10:19.063 21:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:19.063 21:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:19.063 21:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:19.063 21:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:19.063 21:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:19.063 21:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:19.628 21:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.629 [ 00:10:19.629 { 00:10:19.629 "name": "BaseBdev2", 00:10:19.629 "aliases": [ 00:10:19.629 "3c97f51e-123c-11ef-8c90-4585f0cfab08" 00:10:19.629 ], 00:10:19.629 "product_name": "Malloc disk", 00:10:19.629 "block_size": 512, 00:10:19.629 "num_blocks": 65536, 00:10:19.629 "uuid": "3c97f51e-123c-11ef-8c90-4585f0cfab08", 00:10:19.629 "assigned_rate_limits": { 00:10:19.629 "rw_ios_per_sec": 0, 00:10:19.629 "rw_mbytes_per_sec": 0, 00:10:19.629 "r_mbytes_per_sec": 0, 00:10:19.629 "w_mbytes_per_sec": 0 00:10:19.629 }, 00:10:19.629 "claimed": true, 00:10:19.629 "claim_type": "exclusive_write", 00:10:19.629 "zoned": false, 00:10:19.629 "supported_io_types": { 00:10:19.629 "read": true, 00:10:19.629 "write": true, 00:10:19.629 "unmap": true, 00:10:19.629 "write_zeroes": true, 00:10:19.629 "flush": true, 00:10:19.629 "reset": true, 00:10:19.629 "compare": false, 00:10:19.629 "compare_and_write": false, 00:10:19.629 "abort": true, 00:10:19.629 "nvme_admin": false, 00:10:19.629 "nvme_io": false 00:10:19.629 }, 00:10:19.629 "memory_domains": [ 00:10:19.629 { 00:10:19.629 "dma_device_id": "system", 00:10:19.629 "dma_device_type": 1 00:10:19.629 }, 00:10:19.629 { 00:10:19.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.629 "dma_device_type": 2 00:10:19.629 } 00:10:19.629 ], 00:10:19.629 "driver_specific": {} 00:10:19.629 } 00:10:19.629 ] 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:19.629 21:52:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.629 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.889 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:19.889 "name": "Existed_Raid", 00:10:19.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.889 "strip_size_kb": 64, 00:10:19.889 "state": "configuring", 00:10:19.889 "raid_level": "raid0", 00:10:19.889 "superblock": false, 00:10:19.889 "num_base_bdevs": 3, 00:10:19.889 "num_base_bdevs_discovered": 2, 00:10:19.889 "num_base_bdevs_operational": 3, 00:10:19.889 "base_bdevs_list": [ 00:10:19.889 { 00:10:19.889 "name": "BaseBdev1", 00:10:19.889 "uuid": "3b0e48ca-123c-11ef-8c90-4585f0cfab08", 00:10:19.889 "is_configured": true, 00:10:19.889 "data_offset": 0, 00:10:19.889 "data_size": 65536 00:10:19.889 }, 00:10:19.889 { 00:10:19.889 "name": "BaseBdev2", 00:10:19.889 "uuid": "3c97f51e-123c-11ef-8c90-4585f0cfab08", 00:10:19.889 "is_configured": true, 00:10:19.889 "data_offset": 0, 00:10:19.889 "data_size": 65536 00:10:19.889 }, 00:10:19.889 { 00:10:19.889 "name": "BaseBdev3", 00:10:19.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.889 "is_configured": false, 00:10:19.889 "data_offset": 0, 00:10:19.889 "data_size": 0 00:10:19.889 } 00:10:19.889 ] 00:10:19.889 }' 00:10:19.889 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:19.889 21:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.458 21:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:20.458 [2024-05-14 21:52:21.030573] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.458 [2024-05-14 21:52:21.030633] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ade4300 00:10:20.458 [2024-05-14 21:52:21.030639] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:20.458 [2024-05-14 21:52:21.030664] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ae42ec0 00:10:20.458 [2024-05-14 21:52:21.030767] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ade4300 00:10:20.458 [2024-05-14 21:52:21.030772] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x82ade4300 00:10:20.458 [2024-05-14 21:52:21.030822] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.458 BaseBdev3 00:10:20.716 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:10:20.716 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:20.716 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:20.716 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:20.716 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:20.716 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:20.716 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.975 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:21.234 [ 00:10:21.234 { 00:10:21.234 "name": "BaseBdev3", 00:10:21.234 "aliases": [ 00:10:21.234 "3d76bdff-123c-11ef-8c90-4585f0cfab08" 00:10:21.234 ], 00:10:21.234 "product_name": "Malloc disk", 00:10:21.234 "block_size": 512, 00:10:21.234 "num_blocks": 65536, 00:10:21.234 "uuid": "3d76bdff-123c-11ef-8c90-4585f0cfab08", 00:10:21.234 "assigned_rate_limits": { 00:10:21.234 "rw_ios_per_sec": 0, 00:10:21.234 "rw_mbytes_per_sec": 0, 00:10:21.234 "r_mbytes_per_sec": 0, 00:10:21.234 "w_mbytes_per_sec": 0 00:10:21.234 }, 00:10:21.234 "claimed": true, 00:10:21.234 "claim_type": "exclusive_write", 00:10:21.234 "zoned": false, 00:10:21.234 "supported_io_types": { 00:10:21.234 "read": true, 00:10:21.234 "write": true, 00:10:21.234 "unmap": true, 00:10:21.234 "write_zeroes": true, 00:10:21.234 "flush": true, 00:10:21.234 "reset": true, 00:10:21.234 "compare": false, 00:10:21.234 "compare_and_write": false, 00:10:21.234 "abort": true, 00:10:21.234 "nvme_admin": false, 00:10:21.234 "nvme_io": false 00:10:21.234 }, 00:10:21.234 "memory_domains": [ 00:10:21.234 { 00:10:21.234 "dma_device_id": "system", 00:10:21.234 "dma_device_type": 1 00:10:21.234 }, 00:10:21.234 { 00:10:21.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.234 "dma_device_type": 2 00:10:21.234 } 00:10:21.234 ], 00:10:21.234 "driver_specific": {} 00:10:21.234 } 00:10:21.234 ] 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:21.234 21:52:21 
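
With the third member claimed, the raid_bdev_configure_cont messages above show the array being registered with blockcnt 196608, i.e. three 65536-block members striped together, and the expected state flips from "configuring" to "online". A quick check of that transition (same query the helper uses, reduced to the state string and block count):

    rpc_py="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # After the last base bdev is added the array should report "online"
    $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
    # The assembled raid0 exposes 3 * 65536 = 196608 blocks of 512 bytes
    $rpc_py bdev_get_bdevs -b Existed_Raid | jq '.[0].num_blocks'
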
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.234 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.492 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:21.492 "name": "Existed_Raid", 00:10:21.492 "uuid": "3d76c598-123c-11ef-8c90-4585f0cfab08", 00:10:21.492 "strip_size_kb": 64, 00:10:21.492 "state": "online", 00:10:21.492 "raid_level": "raid0", 00:10:21.492 "superblock": false, 00:10:21.492 "num_base_bdevs": 3, 00:10:21.492 "num_base_bdevs_discovered": 3, 00:10:21.492 "num_base_bdevs_operational": 3, 00:10:21.492 "base_bdevs_list": [ 00:10:21.492 { 00:10:21.492 "name": "BaseBdev1", 00:10:21.492 "uuid": "3b0e48ca-123c-11ef-8c90-4585f0cfab08", 00:10:21.492 "is_configured": true, 00:10:21.492 "data_offset": 0, 00:10:21.492 "data_size": 65536 00:10:21.492 }, 00:10:21.492 { 00:10:21.492 "name": "BaseBdev2", 00:10:21.492 "uuid": "3c97f51e-123c-11ef-8c90-4585f0cfab08", 00:10:21.492 "is_configured": true, 00:10:21.492 "data_offset": 0, 00:10:21.492 "data_size": 65536 00:10:21.492 }, 00:10:21.492 { 00:10:21.492 "name": "BaseBdev3", 00:10:21.492 "uuid": "3d76bdff-123c-11ef-8c90-4585f0cfab08", 00:10:21.492 "is_configured": true, 00:10:21.492 "data_offset": 0, 00:10:21.492 "data_size": 65536 00:10:21.492 } 00:10:21.492 ] 00:10:21.492 }' 00:10:21.492 21:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:21.492 21:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:21.750 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:22.007 [2024-05-14 21:52:22.410503] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.007 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:22.007 "name": "Existed_Raid", 00:10:22.007 "aliases": [ 00:10:22.007 
"3d76c598-123c-11ef-8c90-4585f0cfab08" 00:10:22.007 ], 00:10:22.007 "product_name": "Raid Volume", 00:10:22.007 "block_size": 512, 00:10:22.007 "num_blocks": 196608, 00:10:22.007 "uuid": "3d76c598-123c-11ef-8c90-4585f0cfab08", 00:10:22.007 "assigned_rate_limits": { 00:10:22.007 "rw_ios_per_sec": 0, 00:10:22.007 "rw_mbytes_per_sec": 0, 00:10:22.007 "r_mbytes_per_sec": 0, 00:10:22.007 "w_mbytes_per_sec": 0 00:10:22.007 }, 00:10:22.007 "claimed": false, 00:10:22.007 "zoned": false, 00:10:22.007 "supported_io_types": { 00:10:22.007 "read": true, 00:10:22.007 "write": true, 00:10:22.007 "unmap": true, 00:10:22.007 "write_zeroes": true, 00:10:22.007 "flush": true, 00:10:22.007 "reset": true, 00:10:22.007 "compare": false, 00:10:22.007 "compare_and_write": false, 00:10:22.007 "abort": false, 00:10:22.007 "nvme_admin": false, 00:10:22.007 "nvme_io": false 00:10:22.007 }, 00:10:22.008 "memory_domains": [ 00:10:22.008 { 00:10:22.008 "dma_device_id": "system", 00:10:22.008 "dma_device_type": 1 00:10:22.008 }, 00:10:22.008 { 00:10:22.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.008 "dma_device_type": 2 00:10:22.008 }, 00:10:22.008 { 00:10:22.008 "dma_device_id": "system", 00:10:22.008 "dma_device_type": 1 00:10:22.008 }, 00:10:22.008 { 00:10:22.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.008 "dma_device_type": 2 00:10:22.008 }, 00:10:22.008 { 00:10:22.008 "dma_device_id": "system", 00:10:22.008 "dma_device_type": 1 00:10:22.008 }, 00:10:22.008 { 00:10:22.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.008 "dma_device_type": 2 00:10:22.008 } 00:10:22.008 ], 00:10:22.008 "driver_specific": { 00:10:22.008 "raid": { 00:10:22.008 "uuid": "3d76c598-123c-11ef-8c90-4585f0cfab08", 00:10:22.008 "strip_size_kb": 64, 00:10:22.008 "state": "online", 00:10:22.008 "raid_level": "raid0", 00:10:22.008 "superblock": false, 00:10:22.008 "num_base_bdevs": 3, 00:10:22.008 "num_base_bdevs_discovered": 3, 00:10:22.008 "num_base_bdevs_operational": 3, 00:10:22.008 "base_bdevs_list": [ 00:10:22.008 { 00:10:22.008 "name": "BaseBdev1", 00:10:22.008 "uuid": "3b0e48ca-123c-11ef-8c90-4585f0cfab08", 00:10:22.008 "is_configured": true, 00:10:22.008 "data_offset": 0, 00:10:22.008 "data_size": 65536 00:10:22.008 }, 00:10:22.008 { 00:10:22.008 "name": "BaseBdev2", 00:10:22.008 "uuid": "3c97f51e-123c-11ef-8c90-4585f0cfab08", 00:10:22.008 "is_configured": true, 00:10:22.008 "data_offset": 0, 00:10:22.008 "data_size": 65536 00:10:22.008 }, 00:10:22.008 { 00:10:22.008 "name": "BaseBdev3", 00:10:22.008 "uuid": "3d76bdff-123c-11ef-8c90-4585f0cfab08", 00:10:22.008 "is_configured": true, 00:10:22.008 "data_offset": 0, 00:10:22.008 "data_size": 65536 00:10:22.008 } 00:10:22.008 ] 00:10:22.008 } 00:10:22.008 } 00:10:22.008 }' 00:10:22.008 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.008 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:10:22.008 BaseBdev2 00:10:22.008 BaseBdev3' 00:10:22.008 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:22.008 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:22.008 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:22.266 "name": "BaseBdev1", 00:10:22.266 "aliases": [ 00:10:22.266 "3b0e48ca-123c-11ef-8c90-4585f0cfab08" 00:10:22.266 ], 00:10:22.266 "product_name": "Malloc disk", 00:10:22.266 "block_size": 512, 00:10:22.266 "num_blocks": 65536, 00:10:22.266 "uuid": "3b0e48ca-123c-11ef-8c90-4585f0cfab08", 00:10:22.266 "assigned_rate_limits": { 00:10:22.266 "rw_ios_per_sec": 0, 00:10:22.266 "rw_mbytes_per_sec": 0, 00:10:22.266 "r_mbytes_per_sec": 0, 00:10:22.266 "w_mbytes_per_sec": 0 00:10:22.266 }, 00:10:22.266 "claimed": true, 00:10:22.266 "claim_type": "exclusive_write", 00:10:22.266 "zoned": false, 00:10:22.266 "supported_io_types": { 00:10:22.266 "read": true, 00:10:22.266 "write": true, 00:10:22.266 "unmap": true, 00:10:22.266 "write_zeroes": true, 00:10:22.266 "flush": true, 00:10:22.266 "reset": true, 00:10:22.266 "compare": false, 00:10:22.266 "compare_and_write": false, 00:10:22.266 "abort": true, 00:10:22.266 "nvme_admin": false, 00:10:22.266 "nvme_io": false 00:10:22.266 }, 00:10:22.266 "memory_domains": [ 00:10:22.266 { 00:10:22.266 "dma_device_id": "system", 00:10:22.266 "dma_device_type": 1 00:10:22.266 }, 00:10:22.266 { 00:10:22.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.266 "dma_device_type": 2 00:10:22.266 } 00:10:22.266 ], 00:10:22.266 "driver_specific": {} 00:10:22.266 }' 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:22.266 21:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:22.525 "name": "BaseBdev2", 00:10:22.525 "aliases": [ 00:10:22.525 "3c97f51e-123c-11ef-8c90-4585f0cfab08" 00:10:22.525 ], 00:10:22.525 "product_name": "Malloc disk", 00:10:22.525 "block_size": 512, 00:10:22.525 "num_blocks": 65536, 00:10:22.525 "uuid": "3c97f51e-123c-11ef-8c90-4585f0cfab08", 00:10:22.525 "assigned_rate_limits": { 00:10:22.525 "rw_ios_per_sec": 0, 00:10:22.525 "rw_mbytes_per_sec": 0, 00:10:22.525 "r_mbytes_per_sec": 0, 
00:10:22.525 "w_mbytes_per_sec": 0 00:10:22.525 }, 00:10:22.525 "claimed": true, 00:10:22.525 "claim_type": "exclusive_write", 00:10:22.525 "zoned": false, 00:10:22.525 "supported_io_types": { 00:10:22.525 "read": true, 00:10:22.525 "write": true, 00:10:22.525 "unmap": true, 00:10:22.525 "write_zeroes": true, 00:10:22.525 "flush": true, 00:10:22.525 "reset": true, 00:10:22.525 "compare": false, 00:10:22.525 "compare_and_write": false, 00:10:22.525 "abort": true, 00:10:22.525 "nvme_admin": false, 00:10:22.525 "nvme_io": false 00:10:22.525 }, 00:10:22.525 "memory_domains": [ 00:10:22.525 { 00:10:22.525 "dma_device_id": "system", 00:10:22.525 "dma_device_type": 1 00:10:22.525 }, 00:10:22.525 { 00:10:22.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.525 "dma_device_type": 2 00:10:22.525 } 00:10:22.525 ], 00:10:22.525 "driver_specific": {} 00:10:22.525 }' 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:22.525 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:22.784 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:22.784 "name": "BaseBdev3", 00:10:22.784 "aliases": [ 00:10:22.784 "3d76bdff-123c-11ef-8c90-4585f0cfab08" 00:10:22.784 ], 00:10:22.784 "product_name": "Malloc disk", 00:10:22.784 "block_size": 512, 00:10:22.784 "num_blocks": 65536, 00:10:22.784 "uuid": "3d76bdff-123c-11ef-8c90-4585f0cfab08", 00:10:22.784 "assigned_rate_limits": { 00:10:22.784 "rw_ios_per_sec": 0, 00:10:22.784 "rw_mbytes_per_sec": 0, 00:10:22.784 "r_mbytes_per_sec": 0, 00:10:22.784 "w_mbytes_per_sec": 0 00:10:22.784 }, 00:10:22.784 "claimed": true, 00:10:22.784 "claim_type": "exclusive_write", 00:10:22.784 "zoned": false, 00:10:22.784 "supported_io_types": { 00:10:22.784 "read": true, 00:10:22.784 "write": true, 00:10:22.784 "unmap": true, 00:10:22.784 "write_zeroes": true, 00:10:22.784 "flush": true, 00:10:22.784 "reset": true, 00:10:22.784 "compare": false, 00:10:22.784 "compare_and_write": false, 00:10:22.784 "abort": true, 00:10:22.784 "nvme_admin": 
false, 00:10:22.784 "nvme_io": false 00:10:22.784 }, 00:10:22.784 "memory_domains": [ 00:10:22.784 { 00:10:22.784 "dma_device_id": "system", 00:10:22.784 "dma_device_type": 1 00:10:22.784 }, 00:10:22.784 { 00:10:22.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.784 "dma_device_type": 2 00:10:22.784 } 00:10:22.784 ], 00:10:22.784 "driver_specific": {} 00:10:22.784 }' 00:10:22.784 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:22.784 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:23.042 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:23.042 [2024-05-14 21:52:23.630477] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.042 [2024-05-14 21:52:23.630505] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.042 [2024-05-14 21:52:23.630536] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:23.300 21:52:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.300 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.559 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:23.559 "name": "Existed_Raid", 00:10:23.559 "uuid": "3d76c598-123c-11ef-8c90-4585f0cfab08", 00:10:23.559 "strip_size_kb": 64, 00:10:23.559 "state": "offline", 00:10:23.559 "raid_level": "raid0", 00:10:23.559 "superblock": false, 00:10:23.559 "num_base_bdevs": 3, 00:10:23.559 "num_base_bdevs_discovered": 2, 00:10:23.559 "num_base_bdevs_operational": 2, 00:10:23.559 "base_bdevs_list": [ 00:10:23.559 { 00:10:23.559 "name": null, 00:10:23.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.559 "is_configured": false, 00:10:23.559 "data_offset": 0, 00:10:23.559 "data_size": 65536 00:10:23.559 }, 00:10:23.559 { 00:10:23.559 "name": "BaseBdev2", 00:10:23.559 "uuid": "3c97f51e-123c-11ef-8c90-4585f0cfab08", 00:10:23.559 "is_configured": true, 00:10:23.559 "data_offset": 0, 00:10:23.559 "data_size": 65536 00:10:23.559 }, 00:10:23.559 { 00:10:23.559 "name": "BaseBdev3", 00:10:23.559 "uuid": "3d76bdff-123c-11ef-8c90-4585f0cfab08", 00:10:23.559 "is_configured": true, 00:10:23.559 "data_offset": 0, 00:10:23.559 "data_size": 65536 00:10:23.559 } 00:10:23.559 ] 00:10:23.559 }' 00:10:23.559 21:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:23.559 21:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.817 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:23.817 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:23.817 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.817 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:10:24.075 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:24.075 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.075 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:24.333 [2024-05-14 21:52:24.844691] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.333 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.333 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.333 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.333 21:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r 
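
raid0 carries no redundancy, which is why has_redundancy returns 1 above and the expected state becomes "offline": deleting a single member with bdev_malloc_delete drops the whole array out of service, leaving two operational members and an empty slot in base_bdevs_list. The corresponding check, in the same style as the earlier sketches:

    rpc_py="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Removing one member of a raid0 array is fatal for the whole array
    $rpc_py bdev_malloc_delete BaseBdev1
    info=$($rpc_py bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r .state <<< "$info")" = offline ]
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 2 ]
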
'.[0]["name"]' 00:10:24.592 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:24.592 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.592 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:24.850 [2024-05-14 21:52:25.358635] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.850 [2024-05-14 21:52:25.358669] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ade4300 name Existed_Raid, state offline 00:10:24.850 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.850 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.850 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.850 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:10:25.108 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:10:25.108 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:10:25.108 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:10:25.108 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:10:25.108 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:10:25.108 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.366 BaseBdev2 00:10:25.366 21:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:10:25.366 21:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:25.366 21:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:25.366 21:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:25.366 21:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:25.366 21:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:25.366 21:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:25.624 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.882 [ 00:10:25.882 { 00:10:25.882 "name": "BaseBdev2", 00:10:25.882 "aliases": [ 00:10:25.882 "406053cd-123c-11ef-8c90-4585f0cfab08" 00:10:25.882 ], 00:10:25.882 "product_name": "Malloc disk", 00:10:25.882 "block_size": 512, 00:10:25.882 "num_blocks": 65536, 00:10:25.882 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:25.882 "assigned_rate_limits": { 00:10:25.882 "rw_ios_per_sec": 0, 00:10:25.882 "rw_mbytes_per_sec": 0, 00:10:25.882 "r_mbytes_per_sec": 0, 00:10:25.882 "w_mbytes_per_sec": 0 00:10:25.882 }, 
00:10:25.882 "claimed": false, 00:10:25.882 "zoned": false, 00:10:25.882 "supported_io_types": { 00:10:25.882 "read": true, 00:10:25.882 "write": true, 00:10:25.882 "unmap": true, 00:10:25.882 "write_zeroes": true, 00:10:25.882 "flush": true, 00:10:25.882 "reset": true, 00:10:25.882 "compare": false, 00:10:25.882 "compare_and_write": false, 00:10:25.882 "abort": true, 00:10:25.882 "nvme_admin": false, 00:10:25.882 "nvme_io": false 00:10:25.882 }, 00:10:25.882 "memory_domains": [ 00:10:25.882 { 00:10:25.882 "dma_device_id": "system", 00:10:25.882 "dma_device_type": 1 00:10:25.882 }, 00:10:25.882 { 00:10:25.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.882 "dma_device_type": 2 00:10:25.882 } 00:10:25.882 ], 00:10:25.882 "driver_specific": {} 00:10:25.882 } 00:10:25.882 ] 00:10:25.882 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:25.882 21:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:10:25.882 21:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:10:25.882 21:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.163 BaseBdev3 00:10:26.164 21:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:10:26.164 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:26.164 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:26.164 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:26.164 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:26.164 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:26.164 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:26.421 21:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.679 [ 00:10:26.679 { 00:10:26.679 "name": "BaseBdev3", 00:10:26.679 "aliases": [ 00:10:26.679 "40d62016-123c-11ef-8c90-4585f0cfab08" 00:10:26.679 ], 00:10:26.679 "product_name": "Malloc disk", 00:10:26.679 "block_size": 512, 00:10:26.679 "num_blocks": 65536, 00:10:26.679 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:26.679 "assigned_rate_limits": { 00:10:26.679 "rw_ios_per_sec": 0, 00:10:26.679 "rw_mbytes_per_sec": 0, 00:10:26.679 "r_mbytes_per_sec": 0, 00:10:26.679 "w_mbytes_per_sec": 0 00:10:26.679 }, 00:10:26.679 "claimed": false, 00:10:26.679 "zoned": false, 00:10:26.679 "supported_io_types": { 00:10:26.679 "read": true, 00:10:26.679 "write": true, 00:10:26.679 "unmap": true, 00:10:26.679 "write_zeroes": true, 00:10:26.679 "flush": true, 00:10:26.679 "reset": true, 00:10:26.679 "compare": false, 00:10:26.679 "compare_and_write": false, 00:10:26.679 "abort": true, 00:10:26.679 "nvme_admin": false, 00:10:26.679 "nvme_io": false 00:10:26.679 }, 00:10:26.679 "memory_domains": [ 00:10:26.679 { 00:10:26.679 "dma_device_id": "system", 00:10:26.679 "dma_device_type": 1 00:10:26.679 }, 00:10:26.679 { 00:10:26.679 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:26.679 "dma_device_type": 2 00:10:26.679 } 00:10:26.679 ], 00:10:26.679 "driver_specific": {} 00:10:26.679 } 00:10:26.679 ] 00:10:26.679 21:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:26.679 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:10:26.679 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:10:26.679 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:26.938 [2024-05-14 21:52:27.440732] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.938 [2024-05-14 21:52:27.440789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.938 [2024-05-14 21:52:27.440799] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.938 [2024-05-14 21:52:27.441377] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.938 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.196 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:27.196 "name": "Existed_Raid", 00:10:27.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.196 "strip_size_kb": 64, 00:10:27.196 "state": "configuring", 00:10:27.196 "raid_level": "raid0", 00:10:27.196 "superblock": false, 00:10:27.196 "num_base_bdevs": 3, 00:10:27.196 "num_base_bdevs_discovered": 2, 00:10:27.196 "num_base_bdevs_operational": 3, 00:10:27.196 "base_bdevs_list": [ 00:10:27.196 { 00:10:27.196 "name": "BaseBdev1", 00:10:27.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.196 "is_configured": false, 00:10:27.196 "data_offset": 0, 00:10:27.196 "data_size": 0 00:10:27.196 }, 00:10:27.196 { 00:10:27.196 "name": "BaseBdev2", 00:10:27.196 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:27.196 
"is_configured": true, 00:10:27.196 "data_offset": 0, 00:10:27.196 "data_size": 65536 00:10:27.196 }, 00:10:27.196 { 00:10:27.196 "name": "BaseBdev3", 00:10:27.196 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:27.196 "is_configured": true, 00:10:27.196 "data_offset": 0, 00:10:27.196 "data_size": 65536 00:10:27.196 } 00:10:27.196 ] 00:10:27.196 }' 00:10:27.196 21:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:27.196 21:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:27.762 [2024-05-14 21:52:28.300729] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:27.762 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:27.763 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:27.763 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:27.763 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:27.763 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.763 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.329 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:28.329 "name": "Existed_Raid", 00:10:28.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.329 "strip_size_kb": 64, 00:10:28.329 "state": "configuring", 00:10:28.329 "raid_level": "raid0", 00:10:28.329 "superblock": false, 00:10:28.329 "num_base_bdevs": 3, 00:10:28.329 "num_base_bdevs_discovered": 1, 00:10:28.329 "num_base_bdevs_operational": 3, 00:10:28.329 "base_bdevs_list": [ 00:10:28.329 { 00:10:28.329 "name": "BaseBdev1", 00:10:28.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.329 "is_configured": false, 00:10:28.329 "data_offset": 0, 00:10:28.329 "data_size": 0 00:10:28.329 }, 00:10:28.329 { 00:10:28.329 "name": null, 00:10:28.329 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:28.329 "is_configured": false, 00:10:28.329 "data_offset": 0, 00:10:28.329 "data_size": 65536 00:10:28.329 }, 00:10:28.329 { 00:10:28.329 "name": "BaseBdev3", 00:10:28.329 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:28.329 "is_configured": true, 00:10:28.329 "data_offset": 0, 00:10:28.329 "data_size": 65536 00:10:28.329 } 00:10:28.329 ] 00:10:28.329 }' 00:10:28.329 
21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:28.329 21:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.588 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.588 21:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.846 21:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:10:28.846 21:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.846 [2024-05-14 21:52:29.432877] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.105 BaseBdev1 00:10:29.105 21:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:10:29.105 21:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:29.105 21:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:29.105 21:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:29.105 21:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:29.105 21:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:29.105 21:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:29.376 21:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.667 [ 00:10:29.667 { 00:10:29.667 "name": "BaseBdev1", 00:10:29.667 "aliases": [ 00:10:29.667 "4278d502-123c-11ef-8c90-4585f0cfab08" 00:10:29.667 ], 00:10:29.667 "product_name": "Malloc disk", 00:10:29.667 "block_size": 512, 00:10:29.667 "num_blocks": 65536, 00:10:29.667 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:29.667 "assigned_rate_limits": { 00:10:29.667 "rw_ios_per_sec": 0, 00:10:29.667 "rw_mbytes_per_sec": 0, 00:10:29.667 "r_mbytes_per_sec": 0, 00:10:29.667 "w_mbytes_per_sec": 0 00:10:29.667 }, 00:10:29.667 "claimed": true, 00:10:29.667 "claim_type": "exclusive_write", 00:10:29.667 "zoned": false, 00:10:29.667 "supported_io_types": { 00:10:29.667 "read": true, 00:10:29.667 "write": true, 00:10:29.667 "unmap": true, 00:10:29.667 "write_zeroes": true, 00:10:29.667 "flush": true, 00:10:29.667 "reset": true, 00:10:29.668 "compare": false, 00:10:29.668 "compare_and_write": false, 00:10:29.668 "abort": true, 00:10:29.668 "nvme_admin": false, 00:10:29.668 "nvme_io": false 00:10:29.668 }, 00:10:29.668 "memory_domains": [ 00:10:29.668 { 00:10:29.668 "dma_device_id": "system", 00:10:29.668 "dma_device_type": 1 00:10:29.668 }, 00:10:29.668 { 00:10:29.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.668 "dma_device_type": 2 00:10:29.668 } 00:10:29.668 ], 00:10:29.668 "driver_specific": {} 00:10:29.668 } 00:10:29.668 ] 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.668 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.926 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:29.926 "name": "Existed_Raid", 00:10:29.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.926 "strip_size_kb": 64, 00:10:29.926 "state": "configuring", 00:10:29.926 "raid_level": "raid0", 00:10:29.926 "superblock": false, 00:10:29.926 "num_base_bdevs": 3, 00:10:29.926 "num_base_bdevs_discovered": 2, 00:10:29.926 "num_base_bdevs_operational": 3, 00:10:29.926 "base_bdevs_list": [ 00:10:29.926 { 00:10:29.926 "name": "BaseBdev1", 00:10:29.926 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:29.926 "is_configured": true, 00:10:29.926 "data_offset": 0, 00:10:29.926 "data_size": 65536 00:10:29.926 }, 00:10:29.926 { 00:10:29.926 "name": null, 00:10:29.926 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:29.926 "is_configured": false, 00:10:29.926 "data_offset": 0, 00:10:29.926 "data_size": 65536 00:10:29.926 }, 00:10:29.926 { 00:10:29.926 "name": "BaseBdev3", 00:10:29.926 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:29.926 "is_configured": true, 00:10:29.926 "data_offset": 0, 00:10:29.926 "data_size": 65536 00:10:29.926 } 00:10:29.926 ] 00:10:29.926 }' 00:10:29.926 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:29.926 21:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.184 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.184 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.442 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:30.442 21:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:30.701 [2024-05-14 21:52:31.204751] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.701 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.959 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:30.959 "name": "Existed_Raid", 00:10:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.959 "strip_size_kb": 64, 00:10:30.959 "state": "configuring", 00:10:30.959 "raid_level": "raid0", 00:10:30.959 "superblock": false, 00:10:30.959 "num_base_bdevs": 3, 00:10:30.959 "num_base_bdevs_discovered": 1, 00:10:30.959 "num_base_bdevs_operational": 3, 00:10:30.959 "base_bdevs_list": [ 00:10:30.959 { 00:10:30.959 "name": "BaseBdev1", 00:10:30.959 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:30.959 "is_configured": true, 00:10:30.959 "data_offset": 0, 00:10:30.959 "data_size": 65536 00:10:30.959 }, 00:10:30.959 { 00:10:30.959 "name": null, 00:10:30.959 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:30.959 "is_configured": false, 00:10:30.959 "data_offset": 0, 00:10:30.959 "data_size": 65536 00:10:30.959 }, 00:10:30.959 { 00:10:30.959 "name": null, 00:10:30.959 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:30.959 "is_configured": false, 00:10:30.959 "data_offset": 0, 00:10:30.959 "data_size": 65536 00:10:30.959 } 00:10:30.959 ] 00:10:30.959 }' 00:10:30.959 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:30.959 21:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.527 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.527 21:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.527 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:10:31.527 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.786 [2024-05-14 
21:52:32.336767] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.786 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.046 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:32.046 "name": "Existed_Raid", 00:10:32.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.046 "strip_size_kb": 64, 00:10:32.046 "state": "configuring", 00:10:32.046 "raid_level": "raid0", 00:10:32.046 "superblock": false, 00:10:32.046 "num_base_bdevs": 3, 00:10:32.046 "num_base_bdevs_discovered": 2, 00:10:32.046 "num_base_bdevs_operational": 3, 00:10:32.046 "base_bdevs_list": [ 00:10:32.046 { 00:10:32.046 "name": "BaseBdev1", 00:10:32.046 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:32.046 "is_configured": true, 00:10:32.046 "data_offset": 0, 00:10:32.046 "data_size": 65536 00:10:32.046 }, 00:10:32.046 { 00:10:32.046 "name": null, 00:10:32.046 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:32.046 "is_configured": false, 00:10:32.046 "data_offset": 0, 00:10:32.046 "data_size": 65536 00:10:32.046 }, 00:10:32.046 { 00:10:32.046 "name": "BaseBdev3", 00:10:32.046 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:32.046 "is_configured": true, 00:10:32.046 "data_offset": 0, 00:10:32.046 "data_size": 65536 00:10:32.046 } 00:10:32.046 ] 00:10:32.046 }' 00:10:32.046 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:32.046 21:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.305 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.305 21:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.910 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:10:32.910 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:33.168 [2024-05-14 21:52:33.504801] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.168 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.426 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:33.426 "name": "Existed_Raid", 00:10:33.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.426 "strip_size_kb": 64, 00:10:33.426 "state": "configuring", 00:10:33.426 "raid_level": "raid0", 00:10:33.426 "superblock": false, 00:10:33.426 "num_base_bdevs": 3, 00:10:33.426 "num_base_bdevs_discovered": 1, 00:10:33.426 "num_base_bdevs_operational": 3, 00:10:33.426 "base_bdevs_list": [ 00:10:33.426 { 00:10:33.426 "name": null, 00:10:33.426 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:33.426 "is_configured": false, 00:10:33.426 "data_offset": 0, 00:10:33.426 "data_size": 65536 00:10:33.426 }, 00:10:33.426 { 00:10:33.426 "name": null, 00:10:33.426 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:33.426 "is_configured": false, 00:10:33.426 "data_offset": 0, 00:10:33.426 "data_size": 65536 00:10:33.426 }, 00:10:33.426 { 00:10:33.426 "name": "BaseBdev3", 00:10:33.426 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:33.426 "is_configured": true, 00:10:33.426 "data_offset": 0, 00:10:33.426 "data_size": 65536 00:10:33.426 } 00:10:33.426 ] 00:10:33.426 }' 00:10:33.426 21:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:33.426 21:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.684 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.684 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.942 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:10:33.942 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.201 [2024-05-14 21:52:34.586828] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.201 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.459 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:34.459 "name": "Existed_Raid", 00:10:34.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.459 "strip_size_kb": 64, 00:10:34.459 "state": "configuring", 00:10:34.459 "raid_level": "raid0", 00:10:34.459 "superblock": false, 00:10:34.459 "num_base_bdevs": 3, 00:10:34.459 "num_base_bdevs_discovered": 2, 00:10:34.459 "num_base_bdevs_operational": 3, 00:10:34.459 "base_bdevs_list": [ 00:10:34.459 { 00:10:34.460 "name": null, 00:10:34.460 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:34.460 "is_configured": false, 00:10:34.460 "data_offset": 0, 00:10:34.460 "data_size": 65536 00:10:34.460 }, 00:10:34.460 { 00:10:34.460 "name": "BaseBdev2", 00:10:34.460 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:34.460 "is_configured": true, 00:10:34.460 "data_offset": 0, 00:10:34.460 "data_size": 65536 00:10:34.460 }, 00:10:34.460 { 00:10:34.460 "name": "BaseBdev3", 00:10:34.460 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:34.460 "is_configured": true, 00:10:34.460 "data_offset": 0, 00:10:34.460 "data_size": 65536 00:10:34.460 } 00:10:34.460 ] 00:10:34.460 }' 00:10:34.460 21:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:34.460 21:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.718 21:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.718 21:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.977 21:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 
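Every state assertion in this trace follows the same pattern: dump the raid bdev over the test socket with rpc.py, pull one field with jq, and compare the literal with a bash [[ ]] test (bdev_raid.sh@311 through @332 above). Below is a minimal sketch of that pattern; the rpc.py path, socket, RPC name, and jq filter are copied from the trace, while the helper name is_base_bdev_configured is only an illustrative label and not part of the SPDK test suite.

#!/usr/bin/env bash
# Sketch of the is_configured check pattern seen in this log.
rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

is_base_bdev_configured() {
    local slot=$1 expected=$2
    local actual
    # Query all raid bdevs and pull the is_configured flag of one slot.
    actual=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq ".[0].base_bdevs_list[$slot].is_configured")
    # The trace compares the literal strings, e.g. [[ true == true ]].
    [[ "$actual" == "$expected" ]]
}

# Example: after bdev_raid_add_base_bdev Existed_Raid BaseBdev2 above,
# slot 1 is expected to report true.
is_base_bdev_configured 1 true && echo "slot 1 is configured"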
00:10:34.977 21:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:34.977 21:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.236 21:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4278d502-123c-11ef-8c90-4585f0cfab08 00:10:35.494 [2024-05-14 21:52:35.967044] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:35.494 [2024-05-14 21:52:35.967075] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ade4300 00:10:35.494 [2024-05-14 21:52:35.967080] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:35.494 [2024-05-14 21:52:35.967110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ae42e20 00:10:35.494 [2024-05-14 21:52:35.967193] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ade4300 00:10:35.495 [2024-05-14 21:52:35.967199] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ade4300 00:10:35.495 [2024-05-14 21:52:35.967235] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.495 NewBaseBdev 00:10:35.495 21:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:10:35.495 21:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:10:35.495 21:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:35.495 21:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:10:35.495 21:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:35.495 21:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:35.495 21:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.753 21:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:36.012 [ 00:10:36.012 { 00:10:36.012 "name": "NewBaseBdev", 00:10:36.012 "aliases": [ 00:10:36.012 "4278d502-123c-11ef-8c90-4585f0cfab08" 00:10:36.012 ], 00:10:36.012 "product_name": "Malloc disk", 00:10:36.012 "block_size": 512, 00:10:36.012 "num_blocks": 65536, 00:10:36.012 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:36.012 "assigned_rate_limits": { 00:10:36.012 "rw_ios_per_sec": 0, 00:10:36.012 "rw_mbytes_per_sec": 0, 00:10:36.012 "r_mbytes_per_sec": 0, 00:10:36.012 "w_mbytes_per_sec": 0 00:10:36.012 }, 00:10:36.012 "claimed": true, 00:10:36.012 "claim_type": "exclusive_write", 00:10:36.012 "zoned": false, 00:10:36.012 "supported_io_types": { 00:10:36.012 "read": true, 00:10:36.012 "write": true, 00:10:36.012 "unmap": true, 00:10:36.012 "write_zeroes": true, 00:10:36.012 "flush": true, 00:10:36.012 "reset": true, 00:10:36.012 "compare": false, 00:10:36.012 "compare_and_write": false, 00:10:36.012 "abort": true, 00:10:36.012 "nvme_admin": false, 00:10:36.012 "nvme_io": false 
00:10:36.012 }, 00:10:36.012 "memory_domains": [ 00:10:36.012 { 00:10:36.012 "dma_device_id": "system", 00:10:36.012 "dma_device_type": 1 00:10:36.012 }, 00:10:36.012 { 00:10:36.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.012 "dma_device_type": 2 00:10:36.012 } 00:10:36.012 ], 00:10:36.012 "driver_specific": {} 00:10:36.012 } 00:10:36.012 ] 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.012 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.270 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:36.270 "name": "Existed_Raid", 00:10:36.270 "uuid": "465de458-123c-11ef-8c90-4585f0cfab08", 00:10:36.270 "strip_size_kb": 64, 00:10:36.270 "state": "online", 00:10:36.270 "raid_level": "raid0", 00:10:36.270 "superblock": false, 00:10:36.270 "num_base_bdevs": 3, 00:10:36.270 "num_base_bdevs_discovered": 3, 00:10:36.270 "num_base_bdevs_operational": 3, 00:10:36.270 "base_bdevs_list": [ 00:10:36.270 { 00:10:36.270 "name": "NewBaseBdev", 00:10:36.270 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:36.270 "is_configured": true, 00:10:36.270 "data_offset": 0, 00:10:36.270 "data_size": 65536 00:10:36.270 }, 00:10:36.270 { 00:10:36.270 "name": "BaseBdev2", 00:10:36.270 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:36.270 "is_configured": true, 00:10:36.270 "data_offset": 0, 00:10:36.270 "data_size": 65536 00:10:36.270 }, 00:10:36.270 { 00:10:36.270 "name": "BaseBdev3", 00:10:36.270 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:36.270 "is_configured": true, 00:10:36.270 "data_offset": 0, 00:10:36.270 "data_size": 65536 00:10:36.270 } 00:10:36.270 ] 00:10:36.270 }' 00:10:36.270 21:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:36.270 21:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_name=Existed_Raid 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:36.529 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:36.788 [2024-05-14 21:52:37.322991] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.788 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:36.788 "name": "Existed_Raid", 00:10:36.788 "aliases": [ 00:10:36.788 "465de458-123c-11ef-8c90-4585f0cfab08" 00:10:36.788 ], 00:10:36.788 "product_name": "Raid Volume", 00:10:36.788 "block_size": 512, 00:10:36.788 "num_blocks": 196608, 00:10:36.788 "uuid": "465de458-123c-11ef-8c90-4585f0cfab08", 00:10:36.788 "assigned_rate_limits": { 00:10:36.788 "rw_ios_per_sec": 0, 00:10:36.788 "rw_mbytes_per_sec": 0, 00:10:36.788 "r_mbytes_per_sec": 0, 00:10:36.788 "w_mbytes_per_sec": 0 00:10:36.788 }, 00:10:36.788 "claimed": false, 00:10:36.788 "zoned": false, 00:10:36.788 "supported_io_types": { 00:10:36.788 "read": true, 00:10:36.788 "write": true, 00:10:36.788 "unmap": true, 00:10:36.788 "write_zeroes": true, 00:10:36.788 "flush": true, 00:10:36.788 "reset": true, 00:10:36.788 "compare": false, 00:10:36.788 "compare_and_write": false, 00:10:36.788 "abort": false, 00:10:36.788 "nvme_admin": false, 00:10:36.788 "nvme_io": false 00:10:36.788 }, 00:10:36.788 "memory_domains": [ 00:10:36.788 { 00:10:36.788 "dma_device_id": "system", 00:10:36.788 "dma_device_type": 1 00:10:36.788 }, 00:10:36.788 { 00:10:36.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.788 "dma_device_type": 2 00:10:36.788 }, 00:10:36.788 { 00:10:36.788 "dma_device_id": "system", 00:10:36.788 "dma_device_type": 1 00:10:36.788 }, 00:10:36.788 { 00:10:36.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.788 "dma_device_type": 2 00:10:36.788 }, 00:10:36.788 { 00:10:36.788 "dma_device_id": "system", 00:10:36.788 "dma_device_type": 1 00:10:36.788 }, 00:10:36.788 { 00:10:36.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.788 "dma_device_type": 2 00:10:36.788 } 00:10:36.788 ], 00:10:36.788 "driver_specific": { 00:10:36.788 "raid": { 00:10:36.788 "uuid": "465de458-123c-11ef-8c90-4585f0cfab08", 00:10:36.788 "strip_size_kb": 64, 00:10:36.788 "state": "online", 00:10:36.788 "raid_level": "raid0", 00:10:36.788 "superblock": false, 00:10:36.788 "num_base_bdevs": 3, 00:10:36.788 "num_base_bdevs_discovered": 3, 00:10:36.788 "num_base_bdevs_operational": 3, 00:10:36.788 "base_bdevs_list": [ 00:10:36.788 { 00:10:36.788 "name": "NewBaseBdev", 00:10:36.788 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:36.788 "is_configured": true, 00:10:36.788 "data_offset": 0, 00:10:36.788 "data_size": 65536 00:10:36.788 }, 00:10:36.788 { 00:10:36.788 "name": "BaseBdev2", 00:10:36.788 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:36.788 "is_configured": true, 00:10:36.788 "data_offset": 0, 00:10:36.788 "data_size": 65536 00:10:36.788 }, 00:10:36.788 { 00:10:36.788 "name": "BaseBdev3", 
00:10:36.788 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:36.788 "is_configured": true, 00:10:36.788 "data_offset": 0, 00:10:36.788 "data_size": 65536 00:10:36.788 } 00:10:36.788 ] 00:10:36.788 } 00:10:36.788 } 00:10:36.788 }' 00:10:36.788 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.788 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:10:36.788 BaseBdev2 00:10:36.788 BaseBdev3' 00:10:36.788 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:36.788 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:36.788 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:37.047 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:37.047 "name": "NewBaseBdev", 00:10:37.047 "aliases": [ 00:10:37.047 "4278d502-123c-11ef-8c90-4585f0cfab08" 00:10:37.047 ], 00:10:37.047 "product_name": "Malloc disk", 00:10:37.047 "block_size": 512, 00:10:37.047 "num_blocks": 65536, 00:10:37.047 "uuid": "4278d502-123c-11ef-8c90-4585f0cfab08", 00:10:37.047 "assigned_rate_limits": { 00:10:37.047 "rw_ios_per_sec": 0, 00:10:37.047 "rw_mbytes_per_sec": 0, 00:10:37.047 "r_mbytes_per_sec": 0, 00:10:37.047 "w_mbytes_per_sec": 0 00:10:37.047 }, 00:10:37.047 "claimed": true, 00:10:37.047 "claim_type": "exclusive_write", 00:10:37.047 "zoned": false, 00:10:37.047 "supported_io_types": { 00:10:37.047 "read": true, 00:10:37.047 "write": true, 00:10:37.047 "unmap": true, 00:10:37.047 "write_zeroes": true, 00:10:37.047 "flush": true, 00:10:37.047 "reset": true, 00:10:37.047 "compare": false, 00:10:37.047 "compare_and_write": false, 00:10:37.047 "abort": true, 00:10:37.047 "nvme_admin": false, 00:10:37.047 "nvme_io": false 00:10:37.047 }, 00:10:37.047 "memory_domains": [ 00:10:37.047 { 00:10:37.047 "dma_device_id": "system", 00:10:37.047 "dma_device_type": 1 00:10:37.047 }, 00:10:37.047 { 00:10:37.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.047 "dma_device_type": 2 00:10:37.047 } 00:10:37.047 ], 00:10:37.047 "driver_specific": {} 00:10:37.047 }' 00:10:37.047 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:37.047 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:37.047 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:37.305 21:52:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:37.305 21:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:37.564 "name": "BaseBdev2", 00:10:37.564 "aliases": [ 00:10:37.564 "406053cd-123c-11ef-8c90-4585f0cfab08" 00:10:37.564 ], 00:10:37.564 "product_name": "Malloc disk", 00:10:37.564 "block_size": 512, 00:10:37.564 "num_blocks": 65536, 00:10:37.564 "uuid": "406053cd-123c-11ef-8c90-4585f0cfab08", 00:10:37.564 "assigned_rate_limits": { 00:10:37.564 "rw_ios_per_sec": 0, 00:10:37.564 "rw_mbytes_per_sec": 0, 00:10:37.564 "r_mbytes_per_sec": 0, 00:10:37.564 "w_mbytes_per_sec": 0 00:10:37.564 }, 00:10:37.564 "claimed": true, 00:10:37.564 "claim_type": "exclusive_write", 00:10:37.564 "zoned": false, 00:10:37.564 "supported_io_types": { 00:10:37.564 "read": true, 00:10:37.564 "write": true, 00:10:37.564 "unmap": true, 00:10:37.564 "write_zeroes": true, 00:10:37.564 "flush": true, 00:10:37.564 "reset": true, 00:10:37.564 "compare": false, 00:10:37.564 "compare_and_write": false, 00:10:37.564 "abort": true, 00:10:37.564 "nvme_admin": false, 00:10:37.564 "nvme_io": false 00:10:37.564 }, 00:10:37.564 "memory_domains": [ 00:10:37.564 { 00:10:37.564 "dma_device_id": "system", 00:10:37.564 "dma_device_type": 1 00:10:37.564 }, 00:10:37.564 { 00:10:37.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.564 "dma_device_type": 2 00:10:37.564 } 00:10:37.564 ], 00:10:37.564 "driver_specific": {} 00:10:37.564 }' 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:37.564 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 
00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:37.822 "name": "BaseBdev3", 00:10:37.822 "aliases": [ 00:10:37.822 "40d62016-123c-11ef-8c90-4585f0cfab08" 00:10:37.822 ], 00:10:37.822 "product_name": "Malloc disk", 00:10:37.822 "block_size": 512, 00:10:37.822 "num_blocks": 65536, 00:10:37.822 "uuid": "40d62016-123c-11ef-8c90-4585f0cfab08", 00:10:37.822 "assigned_rate_limits": { 00:10:37.822 "rw_ios_per_sec": 0, 00:10:37.822 "rw_mbytes_per_sec": 0, 00:10:37.822 "r_mbytes_per_sec": 0, 00:10:37.822 "w_mbytes_per_sec": 0 00:10:37.822 }, 00:10:37.822 "claimed": true, 00:10:37.822 "claim_type": "exclusive_write", 00:10:37.822 "zoned": false, 00:10:37.822 "supported_io_types": { 00:10:37.822 "read": true, 00:10:37.822 "write": true, 00:10:37.822 "unmap": true, 00:10:37.822 "write_zeroes": true, 00:10:37.822 "flush": true, 00:10:37.822 "reset": true, 00:10:37.822 "compare": false, 00:10:37.822 "compare_and_write": false, 00:10:37.822 "abort": true, 00:10:37.822 "nvme_admin": false, 00:10:37.822 "nvme_io": false 00:10:37.822 }, 00:10:37.822 "memory_domains": [ 00:10:37.822 { 00:10:37.822 "dma_device_id": "system", 00:10:37.822 "dma_device_type": 1 00:10:37.822 }, 00:10:37.822 { 00:10:37.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.822 "dma_device_type": 2 00:10:37.822 } 00:10:37.822 ], 00:10:37.822 "driver_specific": {} 00:10:37.822 }' 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:37.822 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:38.389 [2024-05-14 21:52:38.670998] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.389 [2024-05-14 21:52:38.671042] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.389 [2024-05-14 21:52:38.671115] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.389 [2024-05-14 21:52:38.671138] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.389 [2024-05-14 21:52:38.671146] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ade4300 name Existed_Raid, state offline 00:10:38.389 21:52:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 51525 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 51525 ']' 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 51525 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 51525 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:10:38.389 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:10:38.390 killing process with pid 51525 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51525' 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 51525 00:10:38.390 [2024-05-14 21:52:38.700610] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 51525 00:10:38.390 [2024-05-14 21:52:38.719442] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:10:38.390 00:10:38.390 real 0m24.617s 00:10:38.390 user 0m45.156s 00:10:38.390 sys 0m3.218s 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.390 ************************************ 00:10:38.390 END TEST raid_state_function_test 00:10:38.390 ************************************ 00:10:38.390 21:52:38 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:38.390 21:52:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:38.390 21:52:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.390 21:52:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.390 ************************************ 00:10:38.390 START TEST raid_state_function_test_sb 00:10:38.390 ************************************ 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- 
# echo BaseBdev1 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=52254 00:10:38.390 Process raid pid: 52254 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 52254' 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 52254 /var/tmp/spdk-raid.sock 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 52254 ']' 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:38.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
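The _sb variant repeats the same state machine with on-disk superblocks enabled (superblock_create_arg=-s above, passed to bdev_raid_create). A simplified sketch of the setup it drives next is shown here, assuming the rpc.py path and socket from this trace; the loop is illustrative and omits the delete/re-create steps the real test performs between checks.

#!/usr/bin/env bash
rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Register the raid bdev first; its base bdevs do not exist yet, so the
# array stays in the "configuring" state with 0 base bdevs discovered.
"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Create the backing malloc disks: 32 MiB each, 512-byte blocks
# (65536 blocks, matching the bdev dumps in this trace).
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
done

"$rpc" -s "$sock" bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "Existed_Raid")'

With -s, each base bdev reserves room for the superblock, which is why configured slots later in this trace report data_offset 2048 and data_size 63488 instead of the 0 and 65536 used by the non-superblock test above.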
00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:38.390 21:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.390 [2024-05-14 21:52:38.976691] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:10:38.390 [2024-05-14 21:52:38.977068] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:38.956 EAL: TSC is not safe to use in SMP mode 00:10:38.956 EAL: TSC is not invariant 00:10:38.956 [2024-05-14 21:52:39.531334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.214 [2024-05-14 21:52:39.618388] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:39.214 [2024-05-14 21:52:39.620680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.214 [2024-05-14 21:52:39.621512] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.214 [2024-05-14 21:52:39.621529] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.473 21:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:39.473 21:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:10:39.473 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:39.732 [2024-05-14 21:52:40.262112] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.732 [2024-05-14 21:52:40.262184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.732 [2024-05-14 21:52:40.262190] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.732 [2024-05-14 21:52:40.262206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.732 [2024-05-14 21:52:40.262210] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.732 [2024-05-14 21:52:40.262217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:39.732 21:52:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.732 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.990 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:39.990 "name": "Existed_Raid", 00:10:39.990 "uuid": "48ed4282-123c-11ef-8c90-4585f0cfab08", 00:10:39.990 "strip_size_kb": 64, 00:10:39.990 "state": "configuring", 00:10:39.990 "raid_level": "raid0", 00:10:39.990 "superblock": true, 00:10:39.990 "num_base_bdevs": 3, 00:10:39.990 "num_base_bdevs_discovered": 0, 00:10:39.990 "num_base_bdevs_operational": 3, 00:10:39.990 "base_bdevs_list": [ 00:10:39.990 { 00:10:39.990 "name": "BaseBdev1", 00:10:39.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.990 "is_configured": false, 00:10:39.990 "data_offset": 0, 00:10:39.990 "data_size": 0 00:10:39.990 }, 00:10:39.990 { 00:10:39.990 "name": "BaseBdev2", 00:10:39.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.990 "is_configured": false, 00:10:39.990 "data_offset": 0, 00:10:39.990 "data_size": 0 00:10:39.990 }, 00:10:39.990 { 00:10:39.990 "name": "BaseBdev3", 00:10:39.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.990 "is_configured": false, 00:10:39.990 "data_offset": 0, 00:10:39.990 "data_size": 0 00:10:39.990 } 00:10:39.990 ] 00:10:39.990 }' 00:10:39.990 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:39.990 21:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 21:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:40.557 [2024-05-14 21:52:41.058097] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.557 [2024-05-14 21:52:41.058130] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c468300 name Existed_Raid, state configuring 00:10:40.557 21:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:40.816 [2024-05-14 21:52:41.334123] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.816 [2024-05-14 21:52:41.334186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.816 [2024-05-14 21:52:41.334191] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.816 [2024-05-14 21:52:41.334200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.816 [2024-05-14 21:52:41.334203] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.816 [2024-05-14 21:52:41.334211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.816 21:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.075 [2024-05-14 
21:52:41.559189] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.075 BaseBdev1 00:10:41.075 21:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:10:41.075 21:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:41.075 21:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:41.075 21:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:41.075 21:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:41.075 21:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:41.075 21:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:41.334 21:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.593 [ 00:10:41.593 { 00:10:41.593 "name": "BaseBdev1", 00:10:41.593 "aliases": [ 00:10:41.593 "49b30475-123c-11ef-8c90-4585f0cfab08" 00:10:41.593 ], 00:10:41.593 "product_name": "Malloc disk", 00:10:41.593 "block_size": 512, 00:10:41.593 "num_blocks": 65536, 00:10:41.593 "uuid": "49b30475-123c-11ef-8c90-4585f0cfab08", 00:10:41.593 "assigned_rate_limits": { 00:10:41.593 "rw_ios_per_sec": 0, 00:10:41.593 "rw_mbytes_per_sec": 0, 00:10:41.593 "r_mbytes_per_sec": 0, 00:10:41.593 "w_mbytes_per_sec": 0 00:10:41.593 }, 00:10:41.593 "claimed": true, 00:10:41.593 "claim_type": "exclusive_write", 00:10:41.593 "zoned": false, 00:10:41.593 "supported_io_types": { 00:10:41.593 "read": true, 00:10:41.593 "write": true, 00:10:41.593 "unmap": true, 00:10:41.593 "write_zeroes": true, 00:10:41.593 "flush": true, 00:10:41.593 "reset": true, 00:10:41.593 "compare": false, 00:10:41.593 "compare_and_write": false, 00:10:41.594 "abort": true, 00:10:41.594 "nvme_admin": false, 00:10:41.594 "nvme_io": false 00:10:41.594 }, 00:10:41.594 "memory_domains": [ 00:10:41.594 { 00:10:41.594 "dma_device_id": "system", 00:10:41.594 "dma_device_type": 1 00:10:41.594 }, 00:10:41.594 { 00:10:41.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.594 "dma_device_type": 2 00:10:41.594 } 00:10:41.594 ], 00:10:41.594 "driver_specific": {} 00:10:41.594 } 00:10:41.594 ] 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:41.594 
21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.594 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.853 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:41.853 "name": "Existed_Raid", 00:10:41.853 "uuid": "4990d5e9-123c-11ef-8c90-4585f0cfab08", 00:10:41.853 "strip_size_kb": 64, 00:10:41.853 "state": "configuring", 00:10:41.853 "raid_level": "raid0", 00:10:41.853 "superblock": true, 00:10:41.853 "num_base_bdevs": 3, 00:10:41.853 "num_base_bdevs_discovered": 1, 00:10:41.853 "num_base_bdevs_operational": 3, 00:10:41.853 "base_bdevs_list": [ 00:10:41.853 { 00:10:41.853 "name": "BaseBdev1", 00:10:41.853 "uuid": "49b30475-123c-11ef-8c90-4585f0cfab08", 00:10:41.853 "is_configured": true, 00:10:41.853 "data_offset": 2048, 00:10:41.853 "data_size": 63488 00:10:41.853 }, 00:10:41.853 { 00:10:41.853 "name": "BaseBdev2", 00:10:41.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.853 "is_configured": false, 00:10:41.853 "data_offset": 0, 00:10:41.853 "data_size": 0 00:10:41.853 }, 00:10:41.853 { 00:10:41.853 "name": "BaseBdev3", 00:10:41.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.853 "is_configured": false, 00:10:41.853 "data_offset": 0, 00:10:41.853 "data_size": 0 00:10:41.853 } 00:10:41.853 ] 00:10:41.853 }' 00:10:41.853 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:41.853 21:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.420 21:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:42.678 [2024-05-14 21:52:43.054177] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.678 [2024-05-14 21:52:43.054215] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c468300 name Existed_Raid, state configuring 00:10:42.678 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:42.937 [2024-05-14 21:52:43.290218] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.937 [2024-05-14 21:52:43.291037] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.937 [2024-05-14 21:52:43.291079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.937 [2024-05-14 21:52:43.291086] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.937 [2024-05-14 21:52:43.291101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:10:42.937 21:52:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.937 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.195 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:43.195 "name": "Existed_Raid", 00:10:43.195 "uuid": "4abb4fbe-123c-11ef-8c90-4585f0cfab08", 00:10:43.195 "strip_size_kb": 64, 00:10:43.195 "state": "configuring", 00:10:43.195 "raid_level": "raid0", 00:10:43.195 "superblock": true, 00:10:43.195 "num_base_bdevs": 3, 00:10:43.195 "num_base_bdevs_discovered": 1, 00:10:43.195 "num_base_bdevs_operational": 3, 00:10:43.195 "base_bdevs_list": [ 00:10:43.195 { 00:10:43.195 "name": "BaseBdev1", 00:10:43.196 "uuid": "49b30475-123c-11ef-8c90-4585f0cfab08", 00:10:43.196 "is_configured": true, 00:10:43.196 "data_offset": 2048, 00:10:43.196 "data_size": 63488 00:10:43.196 }, 00:10:43.196 { 00:10:43.196 "name": "BaseBdev2", 00:10:43.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.196 "is_configured": false, 00:10:43.196 "data_offset": 0, 00:10:43.196 "data_size": 0 00:10:43.196 }, 00:10:43.196 { 00:10:43.196 "name": "BaseBdev3", 00:10:43.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.196 "is_configured": false, 00:10:43.196 "data_offset": 0, 00:10:43.196 "data_size": 0 00:10:43.196 } 00:10:43.196 ] 00:10:43.196 }' 00:10:43.196 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:43.196 21:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.454 21:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.712 [2024-05-14 21:52:44.186379] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.712 BaseBdev2 00:10:43.712 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:10:43.712 21:52:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:43.712 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:43.712 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:43.712 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:43.712 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:43.712 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:43.970 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.228 [ 00:10:44.228 { 00:10:44.228 "name": "BaseBdev2", 00:10:44.228 "aliases": [ 00:10:44.228 "4b440951-123c-11ef-8c90-4585f0cfab08" 00:10:44.228 ], 00:10:44.228 "product_name": "Malloc disk", 00:10:44.228 "block_size": 512, 00:10:44.228 "num_blocks": 65536, 00:10:44.228 "uuid": "4b440951-123c-11ef-8c90-4585f0cfab08", 00:10:44.228 "assigned_rate_limits": { 00:10:44.228 "rw_ios_per_sec": 0, 00:10:44.228 "rw_mbytes_per_sec": 0, 00:10:44.228 "r_mbytes_per_sec": 0, 00:10:44.228 "w_mbytes_per_sec": 0 00:10:44.228 }, 00:10:44.228 "claimed": true, 00:10:44.228 "claim_type": "exclusive_write", 00:10:44.228 "zoned": false, 00:10:44.228 "supported_io_types": { 00:10:44.228 "read": true, 00:10:44.228 "write": true, 00:10:44.228 "unmap": true, 00:10:44.228 "write_zeroes": true, 00:10:44.228 "flush": true, 00:10:44.228 "reset": true, 00:10:44.228 "compare": false, 00:10:44.228 "compare_and_write": false, 00:10:44.228 "abort": true, 00:10:44.228 "nvme_admin": false, 00:10:44.228 "nvme_io": false 00:10:44.228 }, 00:10:44.228 "memory_domains": [ 00:10:44.228 { 00:10:44.228 "dma_device_id": "system", 00:10:44.228 "dma_device_type": 1 00:10:44.228 }, 00:10:44.228 { 00:10:44.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.228 "dma_device_type": 2 00:10:44.228 } 00:10:44.228 ], 00:10:44.228 "driver_specific": {} 00:10:44.228 } 00:10:44.228 ] 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:44.228 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:10:44.229 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:44.229 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:44.229 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.229 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.487 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:44.487 "name": "Existed_Raid", 00:10:44.487 "uuid": "4abb4fbe-123c-11ef-8c90-4585f0cfab08", 00:10:44.487 "strip_size_kb": 64, 00:10:44.487 "state": "configuring", 00:10:44.487 "raid_level": "raid0", 00:10:44.487 "superblock": true, 00:10:44.487 "num_base_bdevs": 3, 00:10:44.487 "num_base_bdevs_discovered": 2, 00:10:44.487 "num_base_bdevs_operational": 3, 00:10:44.487 "base_bdevs_list": [ 00:10:44.487 { 00:10:44.487 "name": "BaseBdev1", 00:10:44.487 "uuid": "49b30475-123c-11ef-8c90-4585f0cfab08", 00:10:44.487 "is_configured": true, 00:10:44.487 "data_offset": 2048, 00:10:44.487 "data_size": 63488 00:10:44.487 }, 00:10:44.487 { 00:10:44.487 "name": "BaseBdev2", 00:10:44.487 "uuid": "4b440951-123c-11ef-8c90-4585f0cfab08", 00:10:44.487 "is_configured": true, 00:10:44.487 "data_offset": 2048, 00:10:44.487 "data_size": 63488 00:10:44.487 }, 00:10:44.487 { 00:10:44.487 "name": "BaseBdev3", 00:10:44.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.487 "is_configured": false, 00:10:44.487 "data_offset": 0, 00:10:44.487 "data_size": 0 00:10:44.487 } 00:10:44.487 ] 00:10:44.487 }' 00:10:44.487 21:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:44.487 21:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.744 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.002 [2024-05-14 21:52:45.490432] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.002 [2024-05-14 21:52:45.490501] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c468300 00:10:45.002 [2024-05-14 21:52:45.490507] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:45.002 [2024-05-14 21:52:45.490529] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c4c6ec0 00:10:45.002 [2024-05-14 21:52:45.490585] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c468300 00:10:45.002 [2024-05-14 21:52:45.490589] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c468300 00:10:45.002 [2024-05-14 21:52:45.490611] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.002 BaseBdev3 00:10:45.002 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:10:45.002 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:45.002 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:45.002 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # 
local i 00:10:45.002 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:45.002 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:45.002 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:45.260 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.519 [ 00:10:45.519 { 00:10:45.519 "name": "BaseBdev3", 00:10:45.519 "aliases": [ 00:10:45.519 "4c0b0513-123c-11ef-8c90-4585f0cfab08" 00:10:45.519 ], 00:10:45.519 "product_name": "Malloc disk", 00:10:45.519 "block_size": 512, 00:10:45.519 "num_blocks": 65536, 00:10:45.519 "uuid": "4c0b0513-123c-11ef-8c90-4585f0cfab08", 00:10:45.519 "assigned_rate_limits": { 00:10:45.519 "rw_ios_per_sec": 0, 00:10:45.519 "rw_mbytes_per_sec": 0, 00:10:45.519 "r_mbytes_per_sec": 0, 00:10:45.519 "w_mbytes_per_sec": 0 00:10:45.519 }, 00:10:45.519 "claimed": true, 00:10:45.519 "claim_type": "exclusive_write", 00:10:45.519 "zoned": false, 00:10:45.519 "supported_io_types": { 00:10:45.519 "read": true, 00:10:45.519 "write": true, 00:10:45.519 "unmap": true, 00:10:45.519 "write_zeroes": true, 00:10:45.519 "flush": true, 00:10:45.519 "reset": true, 00:10:45.519 "compare": false, 00:10:45.519 "compare_and_write": false, 00:10:45.519 "abort": true, 00:10:45.519 "nvme_admin": false, 00:10:45.519 "nvme_io": false 00:10:45.519 }, 00:10:45.519 "memory_domains": [ 00:10:45.519 { 00:10:45.519 "dma_device_id": "system", 00:10:45.519 "dma_device_type": 1 00:10:45.519 }, 00:10:45.519 { 00:10:45.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.519 "dma_device_type": 2 00:10:45.519 } 00:10:45.519 ], 00:10:45.519 "driver_specific": {} 00:10:45.519 } 00:10:45.519 ] 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:45.519 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:45.520 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:45.520 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:45.520 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:45.520 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:45.520 21:52:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.520 21:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.778 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:45.778 "name": "Existed_Raid", 00:10:45.778 "uuid": "4abb4fbe-123c-11ef-8c90-4585f0cfab08", 00:10:45.778 "strip_size_kb": 64, 00:10:45.778 "state": "online", 00:10:45.778 "raid_level": "raid0", 00:10:45.778 "superblock": true, 00:10:45.778 "num_base_bdevs": 3, 00:10:45.778 "num_base_bdevs_discovered": 3, 00:10:45.778 "num_base_bdevs_operational": 3, 00:10:45.778 "base_bdevs_list": [ 00:10:45.778 { 00:10:45.778 "name": "BaseBdev1", 00:10:45.778 "uuid": "49b30475-123c-11ef-8c90-4585f0cfab08", 00:10:45.778 "is_configured": true, 00:10:45.778 "data_offset": 2048, 00:10:45.778 "data_size": 63488 00:10:45.778 }, 00:10:45.778 { 00:10:45.778 "name": "BaseBdev2", 00:10:45.778 "uuid": "4b440951-123c-11ef-8c90-4585f0cfab08", 00:10:45.778 "is_configured": true, 00:10:45.778 "data_offset": 2048, 00:10:45.778 "data_size": 63488 00:10:45.778 }, 00:10:45.778 { 00:10:45.778 "name": "BaseBdev3", 00:10:45.778 "uuid": "4c0b0513-123c-11ef-8c90-4585f0cfab08", 00:10:45.778 "is_configured": true, 00:10:45.778 "data_offset": 2048, 00:10:45.778 "data_size": 63488 00:10:45.778 } 00:10:45.778 ] 00:10:45.778 }' 00:10:45.778 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:45.778 21:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:46.037 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:10:46.296 [2024-05-14 21:52:46.830421] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.296 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:10:46.296 "name": "Existed_Raid", 00:10:46.296 "aliases": [ 00:10:46.296 "4abb4fbe-123c-11ef-8c90-4585f0cfab08" 00:10:46.296 ], 00:10:46.296 "product_name": "Raid Volume", 00:10:46.296 "block_size": 512, 00:10:46.296 "num_blocks": 190464, 00:10:46.296 "uuid": "4abb4fbe-123c-11ef-8c90-4585f0cfab08", 00:10:46.296 "assigned_rate_limits": { 00:10:46.296 "rw_ios_per_sec": 0, 00:10:46.296 "rw_mbytes_per_sec": 0, 00:10:46.296 "r_mbytes_per_sec": 0, 00:10:46.296 "w_mbytes_per_sec": 0 00:10:46.296 }, 00:10:46.296 "claimed": false, 00:10:46.296 "zoned": false, 00:10:46.296 "supported_io_types": { 00:10:46.296 
"read": true, 00:10:46.296 "write": true, 00:10:46.296 "unmap": true, 00:10:46.296 "write_zeroes": true, 00:10:46.296 "flush": true, 00:10:46.296 "reset": true, 00:10:46.296 "compare": false, 00:10:46.296 "compare_and_write": false, 00:10:46.296 "abort": false, 00:10:46.296 "nvme_admin": false, 00:10:46.296 "nvme_io": false 00:10:46.296 }, 00:10:46.296 "memory_domains": [ 00:10:46.296 { 00:10:46.296 "dma_device_id": "system", 00:10:46.296 "dma_device_type": 1 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.296 "dma_device_type": 2 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "dma_device_id": "system", 00:10:46.296 "dma_device_type": 1 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.296 "dma_device_type": 2 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "dma_device_id": "system", 00:10:46.296 "dma_device_type": 1 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.296 "dma_device_type": 2 00:10:46.296 } 00:10:46.296 ], 00:10:46.296 "driver_specific": { 00:10:46.296 "raid": { 00:10:46.296 "uuid": "4abb4fbe-123c-11ef-8c90-4585f0cfab08", 00:10:46.296 "strip_size_kb": 64, 00:10:46.296 "state": "online", 00:10:46.296 "raid_level": "raid0", 00:10:46.296 "superblock": true, 00:10:46.296 "num_base_bdevs": 3, 00:10:46.296 "num_base_bdevs_discovered": 3, 00:10:46.296 "num_base_bdevs_operational": 3, 00:10:46.296 "base_bdevs_list": [ 00:10:46.296 { 00:10:46.296 "name": "BaseBdev1", 00:10:46.296 "uuid": "49b30475-123c-11ef-8c90-4585f0cfab08", 00:10:46.296 "is_configured": true, 00:10:46.296 "data_offset": 2048, 00:10:46.296 "data_size": 63488 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "name": "BaseBdev2", 00:10:46.296 "uuid": "4b440951-123c-11ef-8c90-4585f0cfab08", 00:10:46.296 "is_configured": true, 00:10:46.296 "data_offset": 2048, 00:10:46.296 "data_size": 63488 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "name": "BaseBdev3", 00:10:46.296 "uuid": "4c0b0513-123c-11ef-8c90-4585f0cfab08", 00:10:46.296 "is_configured": true, 00:10:46.296 "data_offset": 2048, 00:10:46.296 "data_size": 63488 00:10:46.296 } 00:10:46.296 ] 00:10:46.296 } 00:10:46.296 } 00:10:46.296 }' 00:10:46.296 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.296 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:10:46.296 BaseBdev2 00:10:46.296 BaseBdev3' 00:10:46.296 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:46.296 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:46.296 21:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:46.555 "name": "BaseBdev1", 00:10:46.555 "aliases": [ 00:10:46.555 "49b30475-123c-11ef-8c90-4585f0cfab08" 00:10:46.555 ], 00:10:46.555 "product_name": "Malloc disk", 00:10:46.555 "block_size": 512, 00:10:46.555 "num_blocks": 65536, 00:10:46.555 "uuid": "49b30475-123c-11ef-8c90-4585f0cfab08", 00:10:46.555 "assigned_rate_limits": { 00:10:46.555 "rw_ios_per_sec": 0, 00:10:46.555 "rw_mbytes_per_sec": 0, 00:10:46.555 "r_mbytes_per_sec": 0, 00:10:46.555 
"w_mbytes_per_sec": 0 00:10:46.555 }, 00:10:46.555 "claimed": true, 00:10:46.555 "claim_type": "exclusive_write", 00:10:46.555 "zoned": false, 00:10:46.555 "supported_io_types": { 00:10:46.555 "read": true, 00:10:46.555 "write": true, 00:10:46.555 "unmap": true, 00:10:46.555 "write_zeroes": true, 00:10:46.555 "flush": true, 00:10:46.555 "reset": true, 00:10:46.555 "compare": false, 00:10:46.555 "compare_and_write": false, 00:10:46.555 "abort": true, 00:10:46.555 "nvme_admin": false, 00:10:46.555 "nvme_io": false 00:10:46.555 }, 00:10:46.555 "memory_domains": [ 00:10:46.555 { 00:10:46.555 "dma_device_id": "system", 00:10:46.555 "dma_device_type": 1 00:10:46.555 }, 00:10:46.555 { 00:10:46.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.555 "dma_device_type": 2 00:10:46.555 } 00:10:46.555 ], 00:10:46.555 "driver_specific": {} 00:10:46.555 }' 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:46.555 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:46.814 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:46.814 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:46.814 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:46.814 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:47.073 "name": "BaseBdev2", 00:10:47.073 "aliases": [ 00:10:47.073 "4b440951-123c-11ef-8c90-4585f0cfab08" 00:10:47.073 ], 00:10:47.073 "product_name": "Malloc disk", 00:10:47.073 "block_size": 512, 00:10:47.073 "num_blocks": 65536, 00:10:47.073 "uuid": "4b440951-123c-11ef-8c90-4585f0cfab08", 00:10:47.073 "assigned_rate_limits": { 00:10:47.073 "rw_ios_per_sec": 0, 00:10:47.073 "rw_mbytes_per_sec": 0, 00:10:47.073 "r_mbytes_per_sec": 0, 00:10:47.073 "w_mbytes_per_sec": 0 00:10:47.073 }, 00:10:47.073 "claimed": true, 00:10:47.073 "claim_type": "exclusive_write", 00:10:47.073 "zoned": false, 00:10:47.073 "supported_io_types": { 00:10:47.073 "read": true, 00:10:47.073 "write": true, 00:10:47.073 "unmap": true, 00:10:47.073 "write_zeroes": true, 00:10:47.073 "flush": true, 00:10:47.073 "reset": true, 00:10:47.073 "compare": false, 00:10:47.073 "compare_and_write": false, 00:10:47.073 "abort": 
true, 00:10:47.073 "nvme_admin": false, 00:10:47.073 "nvme_io": false 00:10:47.073 }, 00:10:47.073 "memory_domains": [ 00:10:47.073 { 00:10:47.073 "dma_device_id": "system", 00:10:47.073 "dma_device_type": 1 00:10:47.073 }, 00:10:47.073 { 00:10:47.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.073 "dma_device_type": 2 00:10:47.073 } 00:10:47.073 ], 00:10:47.073 "driver_specific": {} 00:10:47.073 }' 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:47.073 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:10:47.331 "name": "BaseBdev3", 00:10:47.331 "aliases": [ 00:10:47.331 "4c0b0513-123c-11ef-8c90-4585f0cfab08" 00:10:47.331 ], 00:10:47.331 "product_name": "Malloc disk", 00:10:47.331 "block_size": 512, 00:10:47.331 "num_blocks": 65536, 00:10:47.331 "uuid": "4c0b0513-123c-11ef-8c90-4585f0cfab08", 00:10:47.331 "assigned_rate_limits": { 00:10:47.331 "rw_ios_per_sec": 0, 00:10:47.331 "rw_mbytes_per_sec": 0, 00:10:47.331 "r_mbytes_per_sec": 0, 00:10:47.331 "w_mbytes_per_sec": 0 00:10:47.331 }, 00:10:47.331 "claimed": true, 00:10:47.331 "claim_type": "exclusive_write", 00:10:47.331 "zoned": false, 00:10:47.331 "supported_io_types": { 00:10:47.331 "read": true, 00:10:47.331 "write": true, 00:10:47.331 "unmap": true, 00:10:47.331 "write_zeroes": true, 00:10:47.331 "flush": true, 00:10:47.331 "reset": true, 00:10:47.331 "compare": false, 00:10:47.331 "compare_and_write": false, 00:10:47.331 "abort": true, 00:10:47.331 "nvme_admin": false, 00:10:47.331 "nvme_io": false 00:10:47.331 }, 00:10:47.331 "memory_domains": [ 00:10:47.331 { 00:10:47.331 "dma_device_id": "system", 00:10:47.331 "dma_device_type": 1 00:10:47.331 }, 00:10:47.331 { 00:10:47.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.331 "dma_device_type": 2 00:10:47.331 } 00:10:47.331 ], 00:10:47.331 "driver_specific": {} 00:10:47.331 }' 00:10:47.331 21:52:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:10:47.331 21:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:47.590 [2024-05-14 21:52:48.154457] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.590 [2024-05-14 21:52:48.154479] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.590 [2024-05-14 21:52:48.154493] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:47.590 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:47.849 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:47.849 21:52:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.849 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.108 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:48.108 "name": "Existed_Raid", 00:10:48.108 "uuid": "4abb4fbe-123c-11ef-8c90-4585f0cfab08", 00:10:48.108 "strip_size_kb": 64, 00:10:48.108 "state": "offline", 00:10:48.108 "raid_level": "raid0", 00:10:48.108 "superblock": true, 00:10:48.108 "num_base_bdevs": 3, 00:10:48.108 "num_base_bdevs_discovered": 2, 00:10:48.108 "num_base_bdevs_operational": 2, 00:10:48.108 "base_bdevs_list": [ 00:10:48.108 { 00:10:48.108 "name": null, 00:10:48.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.108 "is_configured": false, 00:10:48.108 "data_offset": 2048, 00:10:48.108 "data_size": 63488 00:10:48.108 }, 00:10:48.108 { 00:10:48.108 "name": "BaseBdev2", 00:10:48.108 "uuid": "4b440951-123c-11ef-8c90-4585f0cfab08", 00:10:48.108 "is_configured": true, 00:10:48.108 "data_offset": 2048, 00:10:48.108 "data_size": 63488 00:10:48.108 }, 00:10:48.108 { 00:10:48.108 "name": "BaseBdev3", 00:10:48.108 "uuid": "4c0b0513-123c-11ef-8c90-4585f0cfab08", 00:10:48.108 "is_configured": true, 00:10:48.108 "data_offset": 2048, 00:10:48.108 "data_size": 63488 00:10:48.108 } 00:10:48.108 ] 00:10:48.108 }' 00:10:48.108 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:48.108 21:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.366 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:48.366 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:48.366 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.366 21:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:10:48.624 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:48.624 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.624 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:48.882 [2024-05-14 21:52:49.312497] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.882 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:48.882 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:48.882 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.882 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:10:49.142 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:10:49.142 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.142 
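Because raid0 offers no redundancy, the test's has_redundancy check returns non-zero and the expected state after a member disappears is offline rather than a degraded online array: the bdev_malloc_delete BaseBdev1 call above triggers _raid_bdev_remove_base_bdev and the raid drops from online to offline, with the first slot in base_bdevs_list left as a null, unconfigured entry. A sketch of that step, reusing commands from the trace (the trailing .state extraction is an illustrative addition, not part of the test script):

    RPC='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'   # shorthand for this sketch only
    $RPC bdev_malloc_delete BaseBdev1    # removing a raid0 member takes the whole array offline
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect "offline"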
21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:49.400 [2024-05-14 21:52:49.882344] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.400 [2024-05-14 21:52:49.882384] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c468300 name Existed_Raid, state offline 00:10:49.400 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.400 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.400 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.400 21:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:10:49.658 21:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:10:49.658 21:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:10:49.658 21:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:10:49.658 21:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:10:49.658 21:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:10:49.658 21:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.916 BaseBdev2 00:10:49.916 21:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:10:49.916 21:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:10:49.916 21:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:50.173 21:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:50.173 21:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:50.174 21:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:50.174 21:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:50.432 21:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.690 [ 00:10:50.690 { 00:10:50.690 "name": "BaseBdev2", 00:10:50.690 "aliases": [ 00:10:50.690 "4f0506ed-123c-11ef-8c90-4585f0cfab08" 00:10:50.690 ], 00:10:50.690 "product_name": "Malloc disk", 00:10:50.690 "block_size": 512, 00:10:50.690 "num_blocks": 65536, 00:10:50.690 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:50.690 "assigned_rate_limits": { 00:10:50.690 "rw_ios_per_sec": 0, 00:10:50.690 "rw_mbytes_per_sec": 0, 00:10:50.690 "r_mbytes_per_sec": 0, 00:10:50.690 "w_mbytes_per_sec": 0 00:10:50.690 }, 00:10:50.690 "claimed": false, 00:10:50.690 "zoned": false, 00:10:50.690 "supported_io_types": { 00:10:50.690 "read": true, 00:10:50.690 "write": true, 00:10:50.690 "unmap": true, 00:10:50.690 "write_zeroes": true, 
00:10:50.690 "flush": true, 00:10:50.690 "reset": true, 00:10:50.690 "compare": false, 00:10:50.690 "compare_and_write": false, 00:10:50.690 "abort": true, 00:10:50.690 "nvme_admin": false, 00:10:50.690 "nvme_io": false 00:10:50.690 }, 00:10:50.690 "memory_domains": [ 00:10:50.690 { 00:10:50.690 "dma_device_id": "system", 00:10:50.690 "dma_device_type": 1 00:10:50.690 }, 00:10:50.690 { 00:10:50.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.690 "dma_device_type": 2 00:10:50.690 } 00:10:50.690 ], 00:10:50.690 "driver_specific": {} 00:10:50.690 } 00:10:50.690 ] 00:10:50.690 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:50.690 21:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:10:50.690 21:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:10:50.690 21:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.949 BaseBdev3 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:50.949 21:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.207 [ 00:10:51.207 { 00:10:51.207 "name": "BaseBdev3", 00:10:51.207 "aliases": [ 00:10:51.207 "4f80efe0-123c-11ef-8c90-4585f0cfab08" 00:10:51.207 ], 00:10:51.207 "product_name": "Malloc disk", 00:10:51.207 "block_size": 512, 00:10:51.207 "num_blocks": 65536, 00:10:51.207 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:51.207 "assigned_rate_limits": { 00:10:51.207 "rw_ios_per_sec": 0, 00:10:51.207 "rw_mbytes_per_sec": 0, 00:10:51.207 "r_mbytes_per_sec": 0, 00:10:51.207 "w_mbytes_per_sec": 0 00:10:51.207 }, 00:10:51.207 "claimed": false, 00:10:51.207 "zoned": false, 00:10:51.207 "supported_io_types": { 00:10:51.207 "read": true, 00:10:51.207 "write": true, 00:10:51.207 "unmap": true, 00:10:51.207 "write_zeroes": true, 00:10:51.207 "flush": true, 00:10:51.207 "reset": true, 00:10:51.207 "compare": false, 00:10:51.207 "compare_and_write": false, 00:10:51.207 "abort": true, 00:10:51.207 "nvme_admin": false, 00:10:51.207 "nvme_io": false 00:10:51.207 }, 00:10:51.207 "memory_domains": [ 00:10:51.207 { 00:10:51.207 "dma_device_id": "system", 00:10:51.207 "dma_device_type": 1 00:10:51.207 }, 00:10:51.207 { 00:10:51.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.207 "dma_device_type": 2 00:10:51.207 } 00:10:51.207 ], 00:10:51.207 "driver_specific": {} 00:10:51.207 } 00:10:51.207 ] 00:10:51.465 21:52:51 
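Here the test rebuilds its base devices from scratch: BaseBdev2 and BaseBdev3 are created as standalone malloc disks, and since no raid claims them yet both report "claimed": false. Judging by the JSON (block_size 512, num_blocks 65536), the two positional arguments to bdev_malloc_create are the size in MiB and the block size. A sketch of this preparation step as it appears in the trace:

    RPC='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'   # shorthand for this sketch only
    $RPC bdev_malloc_create 32 512 -b BaseBdev2    # 32 MiB malloc disk with 512-byte blocks
    $RPC bdev_malloc_create 32 512 -b BaseBdev3
    $RPC bdev_wait_for_examine                     # same waitforbdev pattern as earlier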
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:51.465 21:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:10:51.465 21:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:10:51.465 21:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:51.465 [2024-05-14 21:52:52.052289] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.465 [2024-05-14 21:52:52.052344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.465 [2024-05-14 21:52:52.052354] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.465 [2024-05-14 21:52:52.052908] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.722 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:51.722 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:51.722 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:51.722 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:51.722 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:51.723 "name": "Existed_Raid", 00:10:51.723 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:10:51.723 "strip_size_kb": 64, 00:10:51.723 "state": "configuring", 00:10:51.723 "raid_level": "raid0", 00:10:51.723 "superblock": true, 00:10:51.723 "num_base_bdevs": 3, 00:10:51.723 "num_base_bdevs_discovered": 2, 00:10:51.723 "num_base_bdevs_operational": 3, 00:10:51.723 "base_bdevs_list": [ 00:10:51.723 { 00:10:51.723 "name": "BaseBdev1", 00:10:51.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.723 "is_configured": false, 00:10:51.723 "data_offset": 0, 00:10:51.723 "data_size": 0 00:10:51.723 }, 00:10:51.723 { 00:10:51.723 "name": "BaseBdev2", 00:10:51.723 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:51.723 "is_configured": true, 00:10:51.723 "data_offset": 2048, 00:10:51.723 "data_size": 63488 00:10:51.723 }, 00:10:51.723 { 
00:10:51.723 "name": "BaseBdev3", 00:10:51.723 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:51.723 "is_configured": true, 00:10:51.723 "data_offset": 2048, 00:10:51.723 "data_size": 63488 00:10:51.723 } 00:10:51.723 ] 00:10:51.723 }' 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:51.723 21:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:52.288 [2024-05-14 21:52:52.816303] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.288 21:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.546 21:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:52.546 "name": "Existed_Raid", 00:10:52.546 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:10:52.546 "strip_size_kb": 64, 00:10:52.546 "state": "configuring", 00:10:52.546 "raid_level": "raid0", 00:10:52.546 "superblock": true, 00:10:52.546 "num_base_bdevs": 3, 00:10:52.546 "num_base_bdevs_discovered": 1, 00:10:52.546 "num_base_bdevs_operational": 3, 00:10:52.546 "base_bdevs_list": [ 00:10:52.546 { 00:10:52.546 "name": "BaseBdev1", 00:10:52.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.546 "is_configured": false, 00:10:52.546 "data_offset": 0, 00:10:52.546 "data_size": 0 00:10:52.546 }, 00:10:52.546 { 00:10:52.546 "name": null, 00:10:52.546 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:52.546 "is_configured": false, 00:10:52.546 "data_offset": 2048, 00:10:52.546 "data_size": 63488 00:10:52.546 }, 00:10:52.546 { 00:10:52.546 "name": "BaseBdev3", 00:10:52.546 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:52.546 "is_configured": true, 00:10:52.546 "data_offset": 2048, 00:10:52.546 "data_size": 63488 00:10:52.546 } 00:10:52.546 ] 00:10:52.546 }' 00:10:52.546 21:52:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:52.546 21:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.127 21:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.127 21:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.385 21:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:10:53.385 21:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.643 [2024-05-14 21:52:54.052507] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.643 BaseBdev1 00:10:53.643 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:10:53.643 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:10:53.643 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:53.643 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:53.643 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:53.643 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:53.643 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:53.901 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.160 [ 00:10:54.160 { 00:10:54.160 "name": "BaseBdev1", 00:10:54.160 "aliases": [ 00:10:54.160 "51257cc3-123c-11ef-8c90-4585f0cfab08" 00:10:54.160 ], 00:10:54.160 "product_name": "Malloc disk", 00:10:54.160 "block_size": 512, 00:10:54.160 "num_blocks": 65536, 00:10:54.160 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:10:54.160 "assigned_rate_limits": { 00:10:54.160 "rw_ios_per_sec": 0, 00:10:54.160 "rw_mbytes_per_sec": 0, 00:10:54.160 "r_mbytes_per_sec": 0, 00:10:54.160 "w_mbytes_per_sec": 0 00:10:54.160 }, 00:10:54.160 "claimed": true, 00:10:54.160 "claim_type": "exclusive_write", 00:10:54.160 "zoned": false, 00:10:54.160 "supported_io_types": { 00:10:54.160 "read": true, 00:10:54.160 "write": true, 00:10:54.160 "unmap": true, 00:10:54.160 "write_zeroes": true, 00:10:54.160 "flush": true, 00:10:54.160 "reset": true, 00:10:54.160 "compare": false, 00:10:54.160 "compare_and_write": false, 00:10:54.160 "abort": true, 00:10:54.160 "nvme_admin": false, 00:10:54.160 "nvme_io": false 00:10:54.160 }, 00:10:54.160 "memory_domains": [ 00:10:54.160 { 00:10:54.160 "dma_device_id": "system", 00:10:54.160 "dma_device_type": 1 00:10:54.160 }, 00:10:54.160 { 00:10:54.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.160 "dma_device_type": 2 00:10:54.160 } 00:10:54.160 ], 00:10:54.160 "driver_specific": {} 00:10:54.160 } 00:10:54.160 ] 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.160 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.419 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:54.419 "name": "Existed_Raid", 00:10:54.419 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:10:54.419 "strip_size_kb": 64, 00:10:54.419 "state": "configuring", 00:10:54.419 "raid_level": "raid0", 00:10:54.419 "superblock": true, 00:10:54.419 "num_base_bdevs": 3, 00:10:54.419 "num_base_bdevs_discovered": 2, 00:10:54.419 "num_base_bdevs_operational": 3, 00:10:54.419 "base_bdevs_list": [ 00:10:54.419 { 00:10:54.419 "name": "BaseBdev1", 00:10:54.419 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:10:54.419 "is_configured": true, 00:10:54.419 "data_offset": 2048, 00:10:54.419 "data_size": 63488 00:10:54.419 }, 00:10:54.419 { 00:10:54.419 "name": null, 00:10:54.419 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:54.419 "is_configured": false, 00:10:54.419 "data_offset": 2048, 00:10:54.419 "data_size": 63488 00:10:54.419 }, 00:10:54.419 { 00:10:54.419 "name": "BaseBdev3", 00:10:54.419 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:54.419 "is_configured": true, 00:10:54.419 "data_offset": 2048, 00:10:54.419 "data_size": 63488 00:10:54.419 } 00:10:54.419 ] 00:10:54.419 }' 00:10:54.419 21:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:54.419 21:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.678 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.935 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.935 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:55.193 [2024-05-14 21:52:55.592437] 
bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.193 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.452 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:55.452 "name": "Existed_Raid", 00:10:55.452 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:10:55.452 "strip_size_kb": 64, 00:10:55.452 "state": "configuring", 00:10:55.452 "raid_level": "raid0", 00:10:55.452 "superblock": true, 00:10:55.452 "num_base_bdevs": 3, 00:10:55.452 "num_base_bdevs_discovered": 1, 00:10:55.452 "num_base_bdevs_operational": 3, 00:10:55.452 "base_bdevs_list": [ 00:10:55.452 { 00:10:55.452 "name": "BaseBdev1", 00:10:55.452 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:10:55.452 "is_configured": true, 00:10:55.452 "data_offset": 2048, 00:10:55.452 "data_size": 63488 00:10:55.452 }, 00:10:55.452 { 00:10:55.452 "name": null, 00:10:55.452 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:55.452 "is_configured": false, 00:10:55.452 "data_offset": 2048, 00:10:55.452 "data_size": 63488 00:10:55.452 }, 00:10:55.452 { 00:10:55.452 "name": null, 00:10:55.452 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:55.452 "is_configured": false, 00:10:55.452 "data_offset": 2048, 00:10:55.452 "data_size": 63488 00:10:55.452 } 00:10:55.452 ] 00:10:55.452 }' 00:10:55.452 21:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:55.452 21:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.711 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.711 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.968 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:10:55.969 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:56.227 [2024-05-14 21:52:56.768508] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.227 21:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.485 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:56.485 "name": "Existed_Raid", 00:10:56.485 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:10:56.485 "strip_size_kb": 64, 00:10:56.485 "state": "configuring", 00:10:56.485 "raid_level": "raid0", 00:10:56.485 "superblock": true, 00:10:56.485 "num_base_bdevs": 3, 00:10:56.485 "num_base_bdevs_discovered": 2, 00:10:56.485 "num_base_bdevs_operational": 3, 00:10:56.485 "base_bdevs_list": [ 00:10:56.485 { 00:10:56.485 "name": "BaseBdev1", 00:10:56.485 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:10:56.485 "is_configured": true, 00:10:56.485 "data_offset": 2048, 00:10:56.485 "data_size": 63488 00:10:56.485 }, 00:10:56.485 { 00:10:56.485 "name": null, 00:10:56.485 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:56.485 "is_configured": false, 00:10:56.485 "data_offset": 2048, 00:10:56.485 "data_size": 63488 00:10:56.485 }, 00:10:56.485 { 00:10:56.485 "name": "BaseBdev3", 00:10:56.485 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:56.485 "is_configured": true, 00:10:56.485 "data_offset": 2048, 00:10:56.485 "data_size": 63488 00:10:56.485 } 00:10:56.485 ] 00:10:56.485 }' 00:10:56.485 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:56.485 21:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.051 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.051 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.309 21:52:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:10:57.309 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:57.309 [2024-05-14 21:52:57.892559] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.567 21:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.825 21:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:57.825 "name": "Existed_Raid", 00:10:57.825 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:10:57.825 "strip_size_kb": 64, 00:10:57.825 "state": "configuring", 00:10:57.825 "raid_level": "raid0", 00:10:57.825 "superblock": true, 00:10:57.825 "num_base_bdevs": 3, 00:10:57.825 "num_base_bdevs_discovered": 1, 00:10:57.825 "num_base_bdevs_operational": 3, 00:10:57.825 "base_bdevs_list": [ 00:10:57.825 { 00:10:57.825 "name": null, 00:10:57.825 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:10:57.825 "is_configured": false, 00:10:57.825 "data_offset": 2048, 00:10:57.825 "data_size": 63488 00:10:57.825 }, 00:10:57.825 { 00:10:57.825 "name": null, 00:10:57.825 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:57.825 "is_configured": false, 00:10:57.825 "data_offset": 2048, 00:10:57.825 "data_size": 63488 00:10:57.825 }, 00:10:57.825 { 00:10:57.825 "name": "BaseBdev3", 00:10:57.825 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:57.825 "is_configured": true, 00:10:57.825 "data_offset": 2048, 00:10:57.825 "data_size": 63488 00:10:57.825 } 00:10:57.825 ] 00:10:57.825 }' 00:10:57.825 21:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:57.825 21:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.128 21:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.128 21:52:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.386 21:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:10:58.386 21:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:58.644 [2024-05-14 21:52:59.014553] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.644 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.902 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:58.902 "name": "Existed_Raid", 00:10:58.902 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:10:58.902 "strip_size_kb": 64, 00:10:58.902 "state": "configuring", 00:10:58.902 "raid_level": "raid0", 00:10:58.902 "superblock": true, 00:10:58.902 "num_base_bdevs": 3, 00:10:58.902 "num_base_bdevs_discovered": 2, 00:10:58.902 "num_base_bdevs_operational": 3, 00:10:58.902 "base_bdevs_list": [ 00:10:58.902 { 00:10:58.902 "name": null, 00:10:58.902 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:10:58.902 "is_configured": false, 00:10:58.902 "data_offset": 2048, 00:10:58.902 "data_size": 63488 00:10:58.902 }, 00:10:58.902 { 00:10:58.902 "name": "BaseBdev2", 00:10:58.902 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:10:58.902 "is_configured": true, 00:10:58.902 "data_offset": 2048, 00:10:58.902 "data_size": 63488 00:10:58.902 }, 00:10:58.902 { 00:10:58.902 "name": "BaseBdev3", 00:10:58.902 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:10:58.902 "is_configured": true, 00:10:58.902 "data_offset": 2048, 00:10:58.902 "data_size": 63488 00:10:58.902 } 00:10:58.902 ] 00:10:58.902 }' 00:10:58.902 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:58.902 21:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 21:52:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.160 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.418 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:10:59.418 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:59.418 21:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.676 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 51257cc3-123c-11ef-8c90-4585f0cfab08 00:10:59.933 [2024-05-14 21:53:00.386724] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:59.933 [2024-05-14 21:53:00.386774] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c468300 00:10:59.933 [2024-05-14 21:53:00.386779] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.933 [2024-05-14 21:53:00.386809] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c4c6e20 00:10:59.933 [2024-05-14 21:53:00.386856] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c468300 00:10:59.933 [2024-05-14 21:53:00.386861] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c468300 00:10:59.933 [2024-05-14 21:53:00.386888] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.933 NewBaseBdev 00:10:59.933 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:10:59.933 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:10:59.933 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:59.933 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:10:59.933 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:59.933 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:59.933 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:00.191 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.449 [ 00:11:00.449 { 00:11:00.449 "name": "NewBaseBdev", 00:11:00.449 "aliases": [ 00:11:00.449 "51257cc3-123c-11ef-8c90-4585f0cfab08" 00:11:00.449 ], 00:11:00.449 "product_name": "Malloc disk", 00:11:00.449 "block_size": 512, 00:11:00.449 "num_blocks": 65536, 00:11:00.449 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:11:00.449 "assigned_rate_limits": { 00:11:00.449 "rw_ios_per_sec": 0, 00:11:00.449 "rw_mbytes_per_sec": 0, 00:11:00.449 "r_mbytes_per_sec": 0, 00:11:00.449 "w_mbytes_per_sec": 0 00:11:00.449 }, 00:11:00.449 "claimed": true, 
00:11:00.449 "claim_type": "exclusive_write", 00:11:00.449 "zoned": false, 00:11:00.449 "supported_io_types": { 00:11:00.449 "read": true, 00:11:00.449 "write": true, 00:11:00.449 "unmap": true, 00:11:00.449 "write_zeroes": true, 00:11:00.449 "flush": true, 00:11:00.449 "reset": true, 00:11:00.449 "compare": false, 00:11:00.449 "compare_and_write": false, 00:11:00.449 "abort": true, 00:11:00.449 "nvme_admin": false, 00:11:00.449 "nvme_io": false 00:11:00.449 }, 00:11:00.449 "memory_domains": [ 00:11:00.449 { 00:11:00.449 "dma_device_id": "system", 00:11:00.449 "dma_device_type": 1 00:11:00.449 }, 00:11:00.449 { 00:11:00.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.449 "dma_device_type": 2 00:11:00.449 } 00:11:00.449 ], 00:11:00.449 "driver_specific": {} 00:11:00.449 } 00:11:00.449 ] 00:11:00.449 21:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:00.449 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.450 21:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.708 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:00.708 "name": "Existed_Raid", 00:11:00.708 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:11:00.708 "strip_size_kb": 64, 00:11:00.708 "state": "online", 00:11:00.708 "raid_level": "raid0", 00:11:00.708 "superblock": true, 00:11:00.708 "num_base_bdevs": 3, 00:11:00.708 "num_base_bdevs_discovered": 3, 00:11:00.708 "num_base_bdevs_operational": 3, 00:11:00.708 "base_bdevs_list": [ 00:11:00.708 { 00:11:00.708 "name": "NewBaseBdev", 00:11:00.708 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:11:00.708 "is_configured": true, 00:11:00.708 "data_offset": 2048, 00:11:00.708 "data_size": 63488 00:11:00.708 }, 00:11:00.708 { 00:11:00.708 "name": "BaseBdev2", 00:11:00.708 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:11:00.708 "is_configured": true, 00:11:00.708 "data_offset": 2048, 00:11:00.708 "data_size": 63488 00:11:00.708 }, 00:11:00.708 { 00:11:00.708 "name": "BaseBdev3", 00:11:00.708 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:11:00.708 "is_configured": true, 00:11:00.708 "data_offset": 2048, 
00:11:00.708 "data_size": 63488 00:11:00.708 } 00:11:00.708 ] 00:11:00.708 }' 00:11:00.708 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:00.708 21:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:00.965 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:01.223 [2024-05-14 21:53:01.722691] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.223 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:01.223 "name": "Existed_Raid", 00:11:01.223 "aliases": [ 00:11:01.223 "4ff44c56-123c-11ef-8c90-4585f0cfab08" 00:11:01.223 ], 00:11:01.223 "product_name": "Raid Volume", 00:11:01.223 "block_size": 512, 00:11:01.223 "num_blocks": 190464, 00:11:01.223 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:11:01.223 "assigned_rate_limits": { 00:11:01.223 "rw_ios_per_sec": 0, 00:11:01.223 "rw_mbytes_per_sec": 0, 00:11:01.223 "r_mbytes_per_sec": 0, 00:11:01.223 "w_mbytes_per_sec": 0 00:11:01.223 }, 00:11:01.223 "claimed": false, 00:11:01.223 "zoned": false, 00:11:01.223 "supported_io_types": { 00:11:01.223 "read": true, 00:11:01.223 "write": true, 00:11:01.223 "unmap": true, 00:11:01.223 "write_zeroes": true, 00:11:01.223 "flush": true, 00:11:01.223 "reset": true, 00:11:01.223 "compare": false, 00:11:01.223 "compare_and_write": false, 00:11:01.223 "abort": false, 00:11:01.223 "nvme_admin": false, 00:11:01.223 "nvme_io": false 00:11:01.223 }, 00:11:01.223 "memory_domains": [ 00:11:01.223 { 00:11:01.223 "dma_device_id": "system", 00:11:01.223 "dma_device_type": 1 00:11:01.223 }, 00:11:01.223 { 00:11:01.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.223 "dma_device_type": 2 00:11:01.223 }, 00:11:01.223 { 00:11:01.223 "dma_device_id": "system", 00:11:01.223 "dma_device_type": 1 00:11:01.223 }, 00:11:01.223 { 00:11:01.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.223 "dma_device_type": 2 00:11:01.223 }, 00:11:01.223 { 00:11:01.223 "dma_device_id": "system", 00:11:01.223 "dma_device_type": 1 00:11:01.223 }, 00:11:01.223 { 00:11:01.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.223 "dma_device_type": 2 00:11:01.223 } 00:11:01.223 ], 00:11:01.223 "driver_specific": { 00:11:01.223 "raid": { 00:11:01.223 "uuid": "4ff44c56-123c-11ef-8c90-4585f0cfab08", 00:11:01.223 "strip_size_kb": 64, 00:11:01.223 "state": "online", 00:11:01.223 "raid_level": "raid0", 00:11:01.223 "superblock": true, 00:11:01.223 "num_base_bdevs": 3, 00:11:01.223 "num_base_bdevs_discovered": 3, 00:11:01.223 "num_base_bdevs_operational": 3, 
00:11:01.223 "base_bdevs_list": [ 00:11:01.223 { 00:11:01.223 "name": "NewBaseBdev", 00:11:01.223 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:11:01.223 "is_configured": true, 00:11:01.223 "data_offset": 2048, 00:11:01.223 "data_size": 63488 00:11:01.223 }, 00:11:01.223 { 00:11:01.223 "name": "BaseBdev2", 00:11:01.223 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:11:01.223 "is_configured": true, 00:11:01.223 "data_offset": 2048, 00:11:01.223 "data_size": 63488 00:11:01.223 }, 00:11:01.223 { 00:11:01.223 "name": "BaseBdev3", 00:11:01.223 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:11:01.223 "is_configured": true, 00:11:01.223 "data_offset": 2048, 00:11:01.223 "data_size": 63488 00:11:01.223 } 00:11:01.223 ] 00:11:01.223 } 00:11:01.223 } 00:11:01.223 }' 00:11:01.223 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.223 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:11:01.223 BaseBdev2 00:11:01.223 BaseBdev3' 00:11:01.223 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:01.223 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:01.223 21:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:01.481 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:01.481 "name": "NewBaseBdev", 00:11:01.481 "aliases": [ 00:11:01.481 "51257cc3-123c-11ef-8c90-4585f0cfab08" 00:11:01.481 ], 00:11:01.481 "product_name": "Malloc disk", 00:11:01.481 "block_size": 512, 00:11:01.481 "num_blocks": 65536, 00:11:01.481 "uuid": "51257cc3-123c-11ef-8c90-4585f0cfab08", 00:11:01.481 "assigned_rate_limits": { 00:11:01.481 "rw_ios_per_sec": 0, 00:11:01.481 "rw_mbytes_per_sec": 0, 00:11:01.481 "r_mbytes_per_sec": 0, 00:11:01.481 "w_mbytes_per_sec": 0 00:11:01.481 }, 00:11:01.481 "claimed": true, 00:11:01.481 "claim_type": "exclusive_write", 00:11:01.481 "zoned": false, 00:11:01.481 "supported_io_types": { 00:11:01.481 "read": true, 00:11:01.481 "write": true, 00:11:01.481 "unmap": true, 00:11:01.481 "write_zeroes": true, 00:11:01.481 "flush": true, 00:11:01.481 "reset": true, 00:11:01.481 "compare": false, 00:11:01.481 "compare_and_write": false, 00:11:01.481 "abort": true, 00:11:01.481 "nvme_admin": false, 00:11:01.481 "nvme_io": false 00:11:01.481 }, 00:11:01.481 "memory_domains": [ 00:11:01.481 { 00:11:01.481 "dma_device_id": "system", 00:11:01.481 "dma_device_type": 1 00:11:01.481 }, 00:11:01.481 { 00:11:01.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.481 "dma_device_type": 2 00:11:01.481 } 00:11:01.481 ], 00:11:01.481 "driver_specific": {} 00:11:01.481 }' 00:11:01.481 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:01.481 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:01.481 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:01.481 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:01.481 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:01.745 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:02.004 "name": "BaseBdev2", 00:11:02.004 "aliases": [ 00:11:02.004 "4f0506ed-123c-11ef-8c90-4585f0cfab08" 00:11:02.004 ], 00:11:02.004 "product_name": "Malloc disk", 00:11:02.004 "block_size": 512, 00:11:02.004 "num_blocks": 65536, 00:11:02.004 "uuid": "4f0506ed-123c-11ef-8c90-4585f0cfab08", 00:11:02.004 "assigned_rate_limits": { 00:11:02.004 "rw_ios_per_sec": 0, 00:11:02.004 "rw_mbytes_per_sec": 0, 00:11:02.004 "r_mbytes_per_sec": 0, 00:11:02.004 "w_mbytes_per_sec": 0 00:11:02.004 }, 00:11:02.004 "claimed": true, 00:11:02.004 "claim_type": "exclusive_write", 00:11:02.004 "zoned": false, 00:11:02.004 "supported_io_types": { 00:11:02.004 "read": true, 00:11:02.004 "write": true, 00:11:02.004 "unmap": true, 00:11:02.004 "write_zeroes": true, 00:11:02.004 "flush": true, 00:11:02.004 "reset": true, 00:11:02.004 "compare": false, 00:11:02.004 "compare_and_write": false, 00:11:02.004 "abort": true, 00:11:02.004 "nvme_admin": false, 00:11:02.004 "nvme_io": false 00:11:02.004 }, 00:11:02.004 "memory_domains": [ 00:11:02.004 { 00:11:02.004 "dma_device_id": "system", 00:11:02.004 "dma_device_type": 1 00:11:02.004 }, 00:11:02.004 { 00:11:02.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.004 "dma_device_type": 2 00:11:02.004 } 00:11:02.004 ], 00:11:02.004 "driver_specific": {} 00:11:02.004 }' 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:02.004 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:02.262 "name": "BaseBdev3", 00:11:02.262 "aliases": [ 00:11:02.262 "4f80efe0-123c-11ef-8c90-4585f0cfab08" 00:11:02.262 ], 00:11:02.262 "product_name": "Malloc disk", 00:11:02.262 "block_size": 512, 00:11:02.262 "num_blocks": 65536, 00:11:02.262 "uuid": "4f80efe0-123c-11ef-8c90-4585f0cfab08", 00:11:02.262 "assigned_rate_limits": { 00:11:02.262 "rw_ios_per_sec": 0, 00:11:02.262 "rw_mbytes_per_sec": 0, 00:11:02.262 "r_mbytes_per_sec": 0, 00:11:02.262 "w_mbytes_per_sec": 0 00:11:02.262 }, 00:11:02.262 "claimed": true, 00:11:02.262 "claim_type": "exclusive_write", 00:11:02.262 "zoned": false, 00:11:02.262 "supported_io_types": { 00:11:02.262 "read": true, 00:11:02.262 "write": true, 00:11:02.262 "unmap": true, 00:11:02.262 "write_zeroes": true, 00:11:02.262 "flush": true, 00:11:02.262 "reset": true, 00:11:02.262 "compare": false, 00:11:02.262 "compare_and_write": false, 00:11:02.262 "abort": true, 00:11:02.262 "nvme_admin": false, 00:11:02.262 "nvme_io": false 00:11:02.262 }, 00:11:02.262 "memory_domains": [ 00:11:02.262 { 00:11:02.262 "dma_device_id": "system", 00:11:02.262 "dma_device_type": 1 00:11:02.262 }, 00:11:02.262 { 00:11:02.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.262 "dma_device_type": 2 00:11:02.262 } 00:11:02.262 ], 00:11:02.262 "driver_specific": {} 00:11:02.262 }' 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:02.262 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
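Every state check traced above goes through the bdev_raid.sh helper verify_raid_bdev_state (@117 through @129, invoked at @315, @319, @323, @327, @331 and @336): it asks the running bdev_svc for the raid bdev's JSON over the UNIX-domain RPC socket and filters it with jq. A minimal bash sketch of that pattern, reconstructed from the trace; the helper's actual assertion lines past @127 are not shown in this excerpt, so the comparisons below are illustrative only, not the script's exact code:

rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Arguments as passed by the callers above: name, expected state, raid level, strip size, operational count.
name=Existed_Raid expected_state=configuring raid_level=raid0 strip_size=64 num_operational=3
# Fetch the raid bdev's info, exactly as the @127 trace lines do.
info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
# Illustrative field checks against the JSON dumped in this log.
[[ $(jq -r .state <<<"$info") == "$expected_state" ]]
[[ $(jq -r .raid_level <<<"$info") == "$raid_level" ]]
[[ $(jq -r .strip_size_kb <<<"$info") -eq $strip_size ]]
[[ $(jq -r .num_base_bdevs_operational <<<"$info") -eq $num_operational ]]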
00:11:02.520 [2024-05-14 21:53:02.926679] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.520 [2024-05-14 21:53:02.926716] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.520 [2024-05-14 21:53:02.926737] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.520 [2024-05-14 21:53:02.926751] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.520 [2024-05-14 21:53:02.926755] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c468300 name Existed_Raid, state offline 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 52254 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 52254 ']' 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 52254 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 52254 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:11:02.520 killing process with pid 52254 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52254' 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 52254 00:11:02.520 [2024-05-14 21:53:02.954091] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.520 21:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 52254 00:11:02.520 [2024-05-14 21:53:02.971043] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.778 21:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:11:02.778 00:11:02.778 real 0m24.187s 00:11:02.778 user 0m44.162s 00:11:02.778 sys 0m3.403s 00:11:02.778 21:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:02.778 21:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.778 ************************************ 00:11:02.778 END TEST raid_state_function_test_sb 00:11:02.778 ************************************ 00:11:02.778 21:53:03 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:02.778 21:53:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:02.778 21:53:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.778 21:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.778 ************************************ 00:11:02.778 START TEST raid_superblock_test 00:11:02.778 ************************************ 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:11:02.779 21:53:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=52982 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 52982 /var/tmp/spdk-raid.sock 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 52982 ']' 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:02.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:02.779 21:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.779 [2024-05-14 21:53:03.207646] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:11:02.779 [2024-05-14 21:53:03.207871] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:03.344 EAL: TSC is not safe to use in SMP mode 00:11:03.344 EAL: TSC is not invariant 00:11:03.344 [2024-05-14 21:53:03.753772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.344 [2024-05-14 21:53:03.855593] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
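For orientation before the setup that follows: raid_superblock_test launches a dedicated bdev_svc app with -L bdev_raid (presumably the source of the bdev_raid.c *DEBUG* lines in this log), waits for the /var/tmp/spdk-raid.sock RPC socket, and then drives everything through rpc.py. The base devices are malloc bdevs wrapped in passthru bdevs (pt1 through pt3) with fixed UUIDs, which are then combined into a raid0 volume. A hedged sketch of that RPC sequence as it appears in the trace below; the waitforlisten helper, error handling, and the script's bookkeeping arrays are omitted, and reading -s as the superblock flag is an assumption based on the test name:

rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b malloc$i    # 32 MiB backing store, 512-byte blocks (65536 blocks)
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s    # -s: superblock (assumed)
# Verification then reuses the same get_bdevs + jq pattern shown earlier in the log.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'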
00:11:03.344 [2024-05-14 21:53:03.858300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.344 [2024-05-14 21:53:03.859298] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.344 [2024-05-14 21:53:03.859317] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.909 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:04.167 malloc1 00:11:04.167 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.425 [2024-05-14 21:53:04.801419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.425 [2024-05-14 21:53:04.801534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.425 [2024-05-14 21:53:04.802137] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b1780 00:11:04.425 [2024-05-14 21:53:04.802164] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.425 [2024-05-14 21:53:04.803053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.425 [2024-05-14 21:53:04.803080] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.425 pt1 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.425 21:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.425 21:53:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:04.683 malloc2 00:11:04.683 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.941 [2024-05-14 21:53:05.309442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.941 [2024-05-14 21:53:05.309532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.941 [2024-05-14 21:53:05.309560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b1c80 00:11:04.941 [2024-05-14 21:53:05.309569] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.941 [2024-05-14 21:53:05.310208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.941 [2024-05-14 21:53:05.310233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.941 pt2 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.941 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:05.198 malloc3 00:11:05.198 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.456 [2024-05-14 21:53:05.793450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.456 [2024-05-14 21:53:05.793540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.456 [2024-05-14 21:53:05.793566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b2180 00:11:05.456 [2024-05-14 21:53:05.793574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.456 [2024-05-14 21:53:05.794216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.456 [2024-05-14 21:53:05.794242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.456 pt3 00:11:05.456 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.456 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.456 21:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:05.456 [2024-05-14 21:53:06.029468] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:05.456 [2024-05-14 21:53:06.030063] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.456 [2024-05-14 21:53:06.030085] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.456 [2024-05-14 21:53:06.030132] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c4b6300 00:11:05.456 [2024-05-14 21:53:06.030138] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.456 [2024-05-14 21:53:06.030172] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c514e20 00:11:05.456 [2024-05-14 21:53:06.030244] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c4b6300 00:11:05.456 [2024-05-14 21:53:06.030249] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c4b6300 00:11:05.456 [2024-05-14 21:53:06.030275] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.715 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.972 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:05.972 "name": "raid_bdev1", 00:11:05.972 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:05.972 "strip_size_kb": 64, 00:11:05.972 "state": "online", 00:11:05.972 "raid_level": "raid0", 00:11:05.972 "superblock": true, 00:11:05.972 "num_base_bdevs": 3, 00:11:05.972 "num_base_bdevs_discovered": 3, 00:11:05.972 "num_base_bdevs_operational": 3, 00:11:05.972 "base_bdevs_list": [ 00:11:05.972 { 00:11:05.972 "name": "pt1", 00:11:05.972 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:05.972 "is_configured": true, 00:11:05.972 "data_offset": 2048, 00:11:05.972 "data_size": 63488 00:11:05.972 }, 00:11:05.972 { 00:11:05.972 "name": "pt2", 00:11:05.972 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:05.972 "is_configured": true, 00:11:05.972 
"data_offset": 2048, 00:11:05.972 "data_size": 63488 00:11:05.972 }, 00:11:05.972 { 00:11:05.972 "name": "pt3", 00:11:05.972 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:05.972 "is_configured": true, 00:11:05.972 "data_offset": 2048, 00:11:05.972 "data_size": 63488 00:11:05.972 } 00:11:05.972 ] 00:11:05.972 }' 00:11:05.972 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:05.972 21:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:06.230 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:06.489 [2024-05-14 21:53:06.861552] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.489 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:06.489 "name": "raid_bdev1", 00:11:06.489 "aliases": [ 00:11:06.489 "58490bcf-123c-11ef-8c90-4585f0cfab08" 00:11:06.489 ], 00:11:06.489 "product_name": "Raid Volume", 00:11:06.489 "block_size": 512, 00:11:06.489 "num_blocks": 190464, 00:11:06.489 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:06.489 "assigned_rate_limits": { 00:11:06.489 "rw_ios_per_sec": 0, 00:11:06.489 "rw_mbytes_per_sec": 0, 00:11:06.489 "r_mbytes_per_sec": 0, 00:11:06.489 "w_mbytes_per_sec": 0 00:11:06.489 }, 00:11:06.489 "claimed": false, 00:11:06.489 "zoned": false, 00:11:06.489 "supported_io_types": { 00:11:06.489 "read": true, 00:11:06.489 "write": true, 00:11:06.489 "unmap": true, 00:11:06.489 "write_zeroes": true, 00:11:06.489 "flush": true, 00:11:06.489 "reset": true, 00:11:06.489 "compare": false, 00:11:06.489 "compare_and_write": false, 00:11:06.489 "abort": false, 00:11:06.489 "nvme_admin": false, 00:11:06.489 "nvme_io": false 00:11:06.489 }, 00:11:06.489 "memory_domains": [ 00:11:06.489 { 00:11:06.489 "dma_device_id": "system", 00:11:06.489 "dma_device_type": 1 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.489 "dma_device_type": 2 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "dma_device_id": "system", 00:11:06.489 "dma_device_type": 1 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.489 "dma_device_type": 2 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "dma_device_id": "system", 00:11:06.489 "dma_device_type": 1 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.489 "dma_device_type": 2 00:11:06.489 } 00:11:06.489 ], 00:11:06.489 "driver_specific": { 00:11:06.489 "raid": { 00:11:06.489 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:06.489 "strip_size_kb": 64, 00:11:06.489 "state": "online", 00:11:06.489 "raid_level": "raid0", 
00:11:06.489 "superblock": true, 00:11:06.489 "num_base_bdevs": 3, 00:11:06.489 "num_base_bdevs_discovered": 3, 00:11:06.489 "num_base_bdevs_operational": 3, 00:11:06.489 "base_bdevs_list": [ 00:11:06.489 { 00:11:06.489 "name": "pt1", 00:11:06.489 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:06.489 "is_configured": true, 00:11:06.489 "data_offset": 2048, 00:11:06.489 "data_size": 63488 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "name": "pt2", 00:11:06.489 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:06.489 "is_configured": true, 00:11:06.489 "data_offset": 2048, 00:11:06.489 "data_size": 63488 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "name": "pt3", 00:11:06.489 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:06.489 "is_configured": true, 00:11:06.489 "data_offset": 2048, 00:11:06.489 "data_size": 63488 00:11:06.489 } 00:11:06.489 ] 00:11:06.489 } 00:11:06.489 } 00:11:06.489 }' 00:11:06.489 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.489 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:11:06.489 pt2 00:11:06.489 pt3' 00:11:06.489 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:06.489 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:06.489 21:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:06.808 "name": "pt1", 00:11:06.808 "aliases": [ 00:11:06.808 "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967" 00:11:06.808 ], 00:11:06.808 "product_name": "passthru", 00:11:06.808 "block_size": 512, 00:11:06.808 "num_blocks": 65536, 00:11:06.808 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:06.808 "assigned_rate_limits": { 00:11:06.808 "rw_ios_per_sec": 0, 00:11:06.808 "rw_mbytes_per_sec": 0, 00:11:06.808 "r_mbytes_per_sec": 0, 00:11:06.808 "w_mbytes_per_sec": 0 00:11:06.808 }, 00:11:06.808 "claimed": true, 00:11:06.808 "claim_type": "exclusive_write", 00:11:06.808 "zoned": false, 00:11:06.808 "supported_io_types": { 00:11:06.808 "read": true, 00:11:06.808 "write": true, 00:11:06.808 "unmap": true, 00:11:06.808 "write_zeroes": true, 00:11:06.808 "flush": true, 00:11:06.808 "reset": true, 00:11:06.808 "compare": false, 00:11:06.808 "compare_and_write": false, 00:11:06.808 "abort": true, 00:11:06.808 "nvme_admin": false, 00:11:06.808 "nvme_io": false 00:11:06.808 }, 00:11:06.808 "memory_domains": [ 00:11:06.808 { 00:11:06.808 "dma_device_id": "system", 00:11:06.808 "dma_device_type": 1 00:11:06.808 }, 00:11:06.808 { 00:11:06.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.808 "dma_device_type": 2 00:11:06.808 } 00:11:06.808 ], 00:11:06.808 "driver_specific": { 00:11:06.808 "passthru": { 00:11:06.808 "name": "pt1", 00:11:06.808 "base_bdev_name": "malloc1" 00:11:06.808 } 00:11:06.808 } 00:11:06.808 }' 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:06.808 
21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:06.808 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:07.066 "name": "pt2", 00:11:07.066 "aliases": [ 00:11:07.066 "64758879-a8c0-8b57-abda-37d9622cfa98" 00:11:07.066 ], 00:11:07.066 "product_name": "passthru", 00:11:07.066 "block_size": 512, 00:11:07.066 "num_blocks": 65536, 00:11:07.066 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:07.066 "assigned_rate_limits": { 00:11:07.066 "rw_ios_per_sec": 0, 00:11:07.066 "rw_mbytes_per_sec": 0, 00:11:07.066 "r_mbytes_per_sec": 0, 00:11:07.066 "w_mbytes_per_sec": 0 00:11:07.066 }, 00:11:07.066 "claimed": true, 00:11:07.066 "claim_type": "exclusive_write", 00:11:07.066 "zoned": false, 00:11:07.066 "supported_io_types": { 00:11:07.066 "read": true, 00:11:07.066 "write": true, 00:11:07.066 "unmap": true, 00:11:07.066 "write_zeroes": true, 00:11:07.066 "flush": true, 00:11:07.066 "reset": true, 00:11:07.066 "compare": false, 00:11:07.066 "compare_and_write": false, 00:11:07.066 "abort": true, 00:11:07.066 "nvme_admin": false, 00:11:07.066 "nvme_io": false 00:11:07.066 }, 00:11:07.066 "memory_domains": [ 00:11:07.066 { 00:11:07.066 "dma_device_id": "system", 00:11:07.066 "dma_device_type": 1 00:11:07.066 }, 00:11:07.066 { 00:11:07.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.066 "dma_device_type": 2 00:11:07.066 } 00:11:07.066 ], 00:11:07.066 "driver_specific": { 00:11:07.066 "passthru": { 00:11:07.066 "name": "pt2", 00:11:07.066 "base_bdev_name": "malloc2" 00:11:07.066 } 00:11:07.066 } 00:11:07.066 }' 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:07.066 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:07.325 "name": "pt3", 00:11:07.325 "aliases": [ 00:11:07.325 "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea" 00:11:07.325 ], 00:11:07.325 "product_name": "passthru", 00:11:07.325 "block_size": 512, 00:11:07.325 "num_blocks": 65536, 00:11:07.325 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:07.325 "assigned_rate_limits": { 00:11:07.325 "rw_ios_per_sec": 0, 00:11:07.325 "rw_mbytes_per_sec": 0, 00:11:07.325 "r_mbytes_per_sec": 0, 00:11:07.325 "w_mbytes_per_sec": 0 00:11:07.325 }, 00:11:07.325 "claimed": true, 00:11:07.325 "claim_type": "exclusive_write", 00:11:07.325 "zoned": false, 00:11:07.325 "supported_io_types": { 00:11:07.325 "read": true, 00:11:07.325 "write": true, 00:11:07.325 "unmap": true, 00:11:07.325 "write_zeroes": true, 00:11:07.325 "flush": true, 00:11:07.325 "reset": true, 00:11:07.325 "compare": false, 00:11:07.325 "compare_and_write": false, 00:11:07.325 "abort": true, 00:11:07.325 "nvme_admin": false, 00:11:07.325 "nvme_io": false 00:11:07.325 }, 00:11:07.325 "memory_domains": [ 00:11:07.325 { 00:11:07.325 "dma_device_id": "system", 00:11:07.325 "dma_device_type": 1 00:11:07.325 }, 00:11:07.325 { 00:11:07.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.325 "dma_device_type": 2 00:11:07.325 } 00:11:07.325 ], 00:11:07.325 "driver_specific": { 00:11:07.325 "passthru": { 00:11:07.325 "name": "pt3", 00:11:07.325 "base_bdev_name": "malloc3" 00:11:07.325 } 00:11:07.325 } 00:11:07.325 }' 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:11:07.325 21:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:07.584 [2024-05-14 21:53:08.145578] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.584 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=58490bcf-123c-11ef-8c90-4585f0cfab08 00:11:07.584 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 58490bcf-123c-11ef-8c90-4585f0cfab08 ']' 00:11:07.584 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:07.843 [2024-05-14 21:53:08.417528] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.843 [2024-05-14 21:53:08.417570] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.843 [2024-05-14 21:53:08.417591] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.843 [2024-05-14 21:53:08.417605] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.843 [2024-05-14 21:53:08.417610] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c4b6300 name raid_bdev1, state offline 00:11:08.102 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.102 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:08.102 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:08.102 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:08.102 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.102 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:08.360 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.360 21:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:08.619 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.619 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:08.877 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:08.877 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 
00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:09.136 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:09.395 [2024-05-14 21:53:09.861607] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:09.395 [2024-05-14 21:53:09.862175] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:09.395 [2024-05-14 21:53:09.862194] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:09.395 [2024-05-14 21:53:09.862208] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:09.395 [2024-05-14 21:53:09.862244] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:09.395 [2024-05-14 21:53:09.862256] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:09.395 [2024-05-14 21:53:09.862265] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.395 [2024-05-14 21:53:09.862269] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c4b6300 name raid_bdev1, state configuring 00:11:09.395 request: 00:11:09.395 { 00:11:09.395 "name": "raid_bdev1", 00:11:09.395 "raid_level": "raid0", 00:11:09.395 "base_bdevs": [ 00:11:09.395 "malloc1", 00:11:09.395 "malloc2", 00:11:09.396 "malloc3" 00:11:09.396 ], 00:11:09.396 "superblock": false, 00:11:09.396 "strip_size_kb": 64, 00:11:09.396 "method": "bdev_raid_create", 00:11:09.396 "req_id": 1 00:11:09.396 } 00:11:09.396 Got JSON-RPC error response 00:11:09.396 response: 00:11:09.396 { 00:11:09.396 "code": -17, 00:11:09.396 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:09.396 } 00:11:09.396 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:09.396 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:09.396 21:53:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:09.396 21:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:09.396 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.396 21:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:09.653 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:09.653 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:09.653 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.911 [2024-05-14 21:53:10.341615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.911 [2024-05-14 21:53:10.341662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.911 [2024-05-14 21:53:10.341688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b2180 00:11:09.911 [2024-05-14 21:53:10.341696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.911 [2024-05-14 21:53:10.342326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.911 [2024-05-14 21:53:10.342350] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.911 [2024-05-14 21:53:10.342373] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:09.911 [2024-05-14 21:53:10.342385] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.911 pt1 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.911 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.170 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:10.170 "name": "raid_bdev1", 00:11:10.170 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:10.170 "strip_size_kb": 64, 00:11:10.170 
"state": "configuring", 00:11:10.170 "raid_level": "raid0", 00:11:10.170 "superblock": true, 00:11:10.170 "num_base_bdevs": 3, 00:11:10.170 "num_base_bdevs_discovered": 1, 00:11:10.170 "num_base_bdevs_operational": 3, 00:11:10.170 "base_bdevs_list": [ 00:11:10.170 { 00:11:10.170 "name": "pt1", 00:11:10.170 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:10.170 "is_configured": true, 00:11:10.170 "data_offset": 2048, 00:11:10.170 "data_size": 63488 00:11:10.170 }, 00:11:10.170 { 00:11:10.170 "name": null, 00:11:10.170 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:10.170 "is_configured": false, 00:11:10.170 "data_offset": 2048, 00:11:10.170 "data_size": 63488 00:11:10.170 }, 00:11:10.170 { 00:11:10.170 "name": null, 00:11:10.170 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:10.170 "is_configured": false, 00:11:10.170 "data_offset": 2048, 00:11:10.170 "data_size": 63488 00:11:10.170 } 00:11:10.170 ] 00:11:10.170 }' 00:11:10.170 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:10.170 21:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:10.428 21:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.687 [2024-05-14 21:53:11.181651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.687 [2024-05-14 21:53:11.181705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.687 [2024-05-14 21:53:11.181731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b1780 00:11:10.687 [2024-05-14 21:53:11.181740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.687 [2024-05-14 21:53:11.181851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.687 [2024-05-14 21:53:11.181869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.687 [2024-05-14 21:53:11.181892] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:10.687 [2024-05-14 21:53:11.181901] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.687 pt2 00:11:10.688 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:10.947 [2024-05-14 21:53:11.449665] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.947 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.206 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:11.206 "name": "raid_bdev1", 00:11:11.206 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:11.206 "strip_size_kb": 64, 00:11:11.206 "state": "configuring", 00:11:11.206 "raid_level": "raid0", 00:11:11.206 "superblock": true, 00:11:11.206 "num_base_bdevs": 3, 00:11:11.206 "num_base_bdevs_discovered": 1, 00:11:11.206 "num_base_bdevs_operational": 3, 00:11:11.206 "base_bdevs_list": [ 00:11:11.206 { 00:11:11.206 "name": "pt1", 00:11:11.206 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:11.206 "is_configured": true, 00:11:11.206 "data_offset": 2048, 00:11:11.206 "data_size": 63488 00:11:11.206 }, 00:11:11.206 { 00:11:11.206 "name": null, 00:11:11.206 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:11.206 "is_configured": false, 00:11:11.206 "data_offset": 2048, 00:11:11.206 "data_size": 63488 00:11:11.206 }, 00:11:11.206 { 00:11:11.206 "name": null, 00:11:11.206 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:11.206 "is_configured": false, 00:11:11.206 "data_offset": 2048, 00:11:11.206 "data_size": 63488 00:11:11.206 } 00:11:11.206 ] 00:11:11.206 }' 00:11:11.206 21:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:11.206 21:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.465 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:11.465 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.465 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.724 [2024-05-14 21:53:12.269682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.724 [2024-05-14 21:53:12.269733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.724 [2024-05-14 21:53:12.269760] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b1780 00:11:11.724 [2024-05-14 21:53:12.269768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.724 [2024-05-14 21:53:12.269877] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.724 [2024-05-14 21:53:12.269888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.724 [2024-05-14 21:53:12.269911] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:11.724 [2024-05-14 21:53:12.269919] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.724 pt2 00:11:11.724 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.724 21:53:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.724 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.983 [2024-05-14 21:53:12.493686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.983 [2024-05-14 21:53:12.493735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.983 [2024-05-14 21:53:12.493758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4b2400 00:11:11.983 [2024-05-14 21:53:12.493766] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.983 [2024-05-14 21:53:12.493874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.983 [2024-05-14 21:53:12.493885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.983 [2024-05-14 21:53:12.493907] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:11.983 [2024-05-14 21:53:12.493915] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:11.983 [2024-05-14 21:53:12.493951] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c4b6300 00:11:11.983 [2024-05-14 21:53:12.493956] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:11.983 [2024-05-14 21:53:12.493976] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c514e20 00:11:11.983 [2024-05-14 21:53:12.494032] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c4b6300 00:11:11.983 [2024-05-14 21:53:12.494036] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c4b6300 00:11:11.983 [2024-05-14 21:53:12.494057] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.983 pt3 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:11:11.983 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.243 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:12.243 "name": "raid_bdev1", 00:11:12.243 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:12.243 "strip_size_kb": 64, 00:11:12.243 "state": "online", 00:11:12.243 "raid_level": "raid0", 00:11:12.243 "superblock": true, 00:11:12.243 "num_base_bdevs": 3, 00:11:12.243 "num_base_bdevs_discovered": 3, 00:11:12.243 "num_base_bdevs_operational": 3, 00:11:12.243 "base_bdevs_list": [ 00:11:12.243 { 00:11:12.243 "name": "pt1", 00:11:12.243 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:12.243 "is_configured": true, 00:11:12.243 "data_offset": 2048, 00:11:12.243 "data_size": 63488 00:11:12.243 }, 00:11:12.243 { 00:11:12.243 "name": "pt2", 00:11:12.243 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:12.243 "is_configured": true, 00:11:12.243 "data_offset": 2048, 00:11:12.243 "data_size": 63488 00:11:12.243 }, 00:11:12.243 { 00:11:12.243 "name": "pt3", 00:11:12.243 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:12.243 "is_configured": true, 00:11:12.243 "data_offset": 2048, 00:11:12.243 "data_size": 63488 00:11:12.243 } 00:11:12.243 ] 00:11:12.243 }' 00:11:12.243 21:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:12.243 21:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:12.810 [2024-05-14 21:53:13.321758] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:12.810 "name": "raid_bdev1", 00:11:12.810 "aliases": [ 00:11:12.810 "58490bcf-123c-11ef-8c90-4585f0cfab08" 00:11:12.810 ], 00:11:12.810 "product_name": "Raid Volume", 00:11:12.810 "block_size": 512, 00:11:12.810 "num_blocks": 190464, 00:11:12.810 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:12.810 "assigned_rate_limits": { 00:11:12.810 "rw_ios_per_sec": 0, 00:11:12.810 "rw_mbytes_per_sec": 0, 00:11:12.810 "r_mbytes_per_sec": 0, 00:11:12.810 "w_mbytes_per_sec": 0 00:11:12.810 }, 00:11:12.810 "claimed": false, 00:11:12.810 "zoned": false, 00:11:12.810 "supported_io_types": { 00:11:12.810 "read": true, 00:11:12.810 "write": true, 00:11:12.810 "unmap": true, 00:11:12.810 "write_zeroes": true, 00:11:12.810 "flush": true, 00:11:12.810 "reset": true, 00:11:12.810 "compare": false, 00:11:12.810 "compare_and_write": false, 00:11:12.810 "abort": false, 
00:11:12.810 "nvme_admin": false, 00:11:12.810 "nvme_io": false 00:11:12.810 }, 00:11:12.810 "memory_domains": [ 00:11:12.810 { 00:11:12.810 "dma_device_id": "system", 00:11:12.810 "dma_device_type": 1 00:11:12.810 }, 00:11:12.810 { 00:11:12.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.810 "dma_device_type": 2 00:11:12.810 }, 00:11:12.810 { 00:11:12.810 "dma_device_id": "system", 00:11:12.810 "dma_device_type": 1 00:11:12.810 }, 00:11:12.810 { 00:11:12.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.810 "dma_device_type": 2 00:11:12.810 }, 00:11:12.810 { 00:11:12.810 "dma_device_id": "system", 00:11:12.810 "dma_device_type": 1 00:11:12.810 }, 00:11:12.810 { 00:11:12.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.810 "dma_device_type": 2 00:11:12.810 } 00:11:12.810 ], 00:11:12.810 "driver_specific": { 00:11:12.810 "raid": { 00:11:12.810 "uuid": "58490bcf-123c-11ef-8c90-4585f0cfab08", 00:11:12.810 "strip_size_kb": 64, 00:11:12.810 "state": "online", 00:11:12.810 "raid_level": "raid0", 00:11:12.810 "superblock": true, 00:11:12.810 "num_base_bdevs": 3, 00:11:12.810 "num_base_bdevs_discovered": 3, 00:11:12.810 "num_base_bdevs_operational": 3, 00:11:12.810 "base_bdevs_list": [ 00:11:12.810 { 00:11:12.810 "name": "pt1", 00:11:12.810 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:12.810 "is_configured": true, 00:11:12.810 "data_offset": 2048, 00:11:12.810 "data_size": 63488 00:11:12.810 }, 00:11:12.810 { 00:11:12.810 "name": "pt2", 00:11:12.810 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:12.810 "is_configured": true, 00:11:12.810 "data_offset": 2048, 00:11:12.810 "data_size": 63488 00:11:12.810 }, 00:11:12.810 { 00:11:12.810 "name": "pt3", 00:11:12.810 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:12.810 "is_configured": true, 00:11:12.810 "data_offset": 2048, 00:11:12.810 "data_size": 63488 00:11:12.810 } 00:11:12.810 ] 00:11:12.810 } 00:11:12.810 } 00:11:12.810 }' 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:11:12.810 pt2 00:11:12.810 pt3' 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:12.810 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:13.067 "name": "pt1", 00:11:13.067 "aliases": [ 00:11:13.067 "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967" 00:11:13.067 ], 00:11:13.067 "product_name": "passthru", 00:11:13.067 "block_size": 512, 00:11:13.067 "num_blocks": 65536, 00:11:13.067 "uuid": "89c8c0ee-0f1d-fc59-b4e2-d074fcee5967", 00:11:13.067 "assigned_rate_limits": { 00:11:13.067 "rw_ios_per_sec": 0, 00:11:13.067 "rw_mbytes_per_sec": 0, 00:11:13.067 "r_mbytes_per_sec": 0, 00:11:13.067 "w_mbytes_per_sec": 0 00:11:13.067 }, 00:11:13.067 "claimed": true, 00:11:13.067 "claim_type": "exclusive_write", 00:11:13.067 "zoned": false, 00:11:13.067 "supported_io_types": { 00:11:13.067 "read": true, 00:11:13.067 "write": true, 00:11:13.067 "unmap": true, 00:11:13.067 "write_zeroes": true, 00:11:13.067 "flush": true, 00:11:13.067 "reset": true, 00:11:13.067 
"compare": false, 00:11:13.067 "compare_and_write": false, 00:11:13.067 "abort": true, 00:11:13.067 "nvme_admin": false, 00:11:13.067 "nvme_io": false 00:11:13.067 }, 00:11:13.067 "memory_domains": [ 00:11:13.067 { 00:11:13.067 "dma_device_id": "system", 00:11:13.067 "dma_device_type": 1 00:11:13.067 }, 00:11:13.067 { 00:11:13.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.067 "dma_device_type": 2 00:11:13.067 } 00:11:13.067 ], 00:11:13.067 "driver_specific": { 00:11:13.067 "passthru": { 00:11:13.067 "name": "pt1", 00:11:13.067 "base_bdev_name": "malloc1" 00:11:13.067 } 00:11:13.067 } 00:11:13.067 }' 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:13.067 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:13.632 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:13.632 "name": "pt2", 00:11:13.632 "aliases": [ 00:11:13.632 "64758879-a8c0-8b57-abda-37d9622cfa98" 00:11:13.632 ], 00:11:13.632 "product_name": "passthru", 00:11:13.632 "block_size": 512, 00:11:13.632 "num_blocks": 65536, 00:11:13.632 "uuid": "64758879-a8c0-8b57-abda-37d9622cfa98", 00:11:13.632 "assigned_rate_limits": { 00:11:13.632 "rw_ios_per_sec": 0, 00:11:13.632 "rw_mbytes_per_sec": 0, 00:11:13.632 "r_mbytes_per_sec": 0, 00:11:13.632 "w_mbytes_per_sec": 0 00:11:13.632 }, 00:11:13.632 "claimed": true, 00:11:13.632 "claim_type": "exclusive_write", 00:11:13.632 "zoned": false, 00:11:13.632 "supported_io_types": { 00:11:13.632 "read": true, 00:11:13.632 "write": true, 00:11:13.632 "unmap": true, 00:11:13.632 "write_zeroes": true, 00:11:13.632 "flush": true, 00:11:13.632 "reset": true, 00:11:13.632 "compare": false, 00:11:13.632 "compare_and_write": false, 00:11:13.632 "abort": true, 00:11:13.632 "nvme_admin": false, 00:11:13.633 "nvme_io": false 00:11:13.633 }, 00:11:13.633 "memory_domains": [ 00:11:13.633 { 00:11:13.633 "dma_device_id": "system", 00:11:13.633 "dma_device_type": 1 00:11:13.633 }, 00:11:13.633 { 00:11:13.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.633 "dma_device_type": 2 00:11:13.633 } 00:11:13.633 ], 
00:11:13.633 "driver_specific": { 00:11:13.633 "passthru": { 00:11:13.633 "name": "pt2", 00:11:13.633 "base_bdev_name": "malloc2" 00:11:13.633 } 00:11:13.633 } 00:11:13.633 }' 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:13.633 21:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:13.891 "name": "pt3", 00:11:13.891 "aliases": [ 00:11:13.891 "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea" 00:11:13.891 ], 00:11:13.891 "product_name": "passthru", 00:11:13.891 "block_size": 512, 00:11:13.891 "num_blocks": 65536, 00:11:13.891 "uuid": "75f57bcb-a23e-a55c-b5e8-7af4fd7754ea", 00:11:13.891 "assigned_rate_limits": { 00:11:13.891 "rw_ios_per_sec": 0, 00:11:13.891 "rw_mbytes_per_sec": 0, 00:11:13.891 "r_mbytes_per_sec": 0, 00:11:13.891 "w_mbytes_per_sec": 0 00:11:13.891 }, 00:11:13.891 "claimed": true, 00:11:13.891 "claim_type": "exclusive_write", 00:11:13.891 "zoned": false, 00:11:13.891 "supported_io_types": { 00:11:13.891 "read": true, 00:11:13.891 "write": true, 00:11:13.891 "unmap": true, 00:11:13.891 "write_zeroes": true, 00:11:13.891 "flush": true, 00:11:13.891 "reset": true, 00:11:13.891 "compare": false, 00:11:13.891 "compare_and_write": false, 00:11:13.891 "abort": true, 00:11:13.891 "nvme_admin": false, 00:11:13.891 "nvme_io": false 00:11:13.891 }, 00:11:13.891 "memory_domains": [ 00:11:13.891 { 00:11:13.891 "dma_device_id": "system", 00:11:13.891 "dma_device_type": 1 00:11:13.891 }, 00:11:13.891 { 00:11:13.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.891 "dma_device_type": 2 00:11:13.891 } 00:11:13.891 ], 00:11:13.891 "driver_specific": { 00:11:13.891 "passthru": { 00:11:13.891 "name": "pt3", 00:11:13.891 "base_bdev_name": "malloc3" 00:11:13.891 } 00:11:13.891 } 00:11:13.891 }' 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 
-- # [[ 512 == 512 ]] 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:13.891 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:13.892 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:13.892 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:14.150 [2024-05-14 21:53:14.509781] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 58490bcf-123c-11ef-8c90-4585f0cfab08 '!=' 58490bcf-123c-11ef-8c90-4585f0cfab08 ']' 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 52982 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 52982 ']' 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 52982 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 52982 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:11:14.150 killing process with pid 52982 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52982' 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 52982 00:11:14.150 [2024-05-14 21:53:14.539752] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.150 [2024-05-14 21:53:14.539773] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.150 [2024-05-14 21:53:14.539787] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.150 [2024-05-14 21:53:14.539792] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c4b6300 name raid_bdev1, state offline 
00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 52982 00:11:14.150 [2024-05-14 21:53:14.557442] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:11:14.150 00:11:14.150 real 0m11.539s 00:11:14.150 user 0m20.347s 00:11:14.150 sys 0m1.932s 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.150 ************************************ 00:11:14.150 END TEST raid_superblock_test 00:11:14.150 ************************************ 00:11:14.150 21:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.409 21:53:14 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:11:14.409 21:53:14 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:11:14.409 21:53:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:14.409 21:53:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.409 21:53:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.409 ************************************ 00:11:14.409 START TEST raid_state_function_test 00:11:14.409 ************************************ 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 
00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=53331 00:11:14.409 Process raid pid: 53331 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 53331' 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 53331 /var/tmp/spdk-raid.sock 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 53331 ']' 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:14.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:14.409 21:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.409 [2024-05-14 21:53:14.796803] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:11:14.409 [2024-05-14 21:53:14.797076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:14.976 EAL: TSC is not safe to use in SMP mode 00:11:14.976 EAL: TSC is not invariant 00:11:14.976 [2024-05-14 21:53:15.337505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.976 [2024-05-14 21:53:15.434492] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:14.976 [2024-05-14 21:53:15.436757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.976 [2024-05-14 21:53:15.437545] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.976 [2024-05-14 21:53:15.437560] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.234 21:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:15.234 21:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:11:15.234 21:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:15.492 [2024-05-14 21:53:16.061942] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.492 [2024-05-14 21:53:16.062017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.492 [2024-05-14 21:53:16.062024] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.492 [2024-05-14 21:53:16.062033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.492 [2024-05-14 21:53:16.062036] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.492 [2024-05-14 21:53:16.062044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:15.492 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:15.750 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.750 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.750 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:15.750 "name": "Existed_Raid", 00:11:15.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.750 "strip_size_kb": 64, 00:11:15.750 "state": "configuring", 00:11:15.750 "raid_level": "concat", 00:11:15.750 "superblock": false, 00:11:15.750 "num_base_bdevs": 3, 00:11:15.750 "num_base_bdevs_discovered": 0, 00:11:15.750 "num_base_bdevs_operational": 3, 00:11:15.750 
"base_bdevs_list": [ 00:11:15.750 { 00:11:15.750 "name": "BaseBdev1", 00:11:15.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.750 "is_configured": false, 00:11:15.750 "data_offset": 0, 00:11:15.750 "data_size": 0 00:11:15.750 }, 00:11:15.750 { 00:11:15.750 "name": "BaseBdev2", 00:11:15.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.750 "is_configured": false, 00:11:15.750 "data_offset": 0, 00:11:15.750 "data_size": 0 00:11:15.750 }, 00:11:15.750 { 00:11:15.750 "name": "BaseBdev3", 00:11:15.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.750 "is_configured": false, 00:11:15.750 "data_offset": 0, 00:11:15.750 "data_size": 0 00:11:15.750 } 00:11:15.750 ] 00:11:15.750 }' 00:11:15.750 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:15.750 21:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.317 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:16.317 [2024-05-14 21:53:16.845946] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.317 [2024-05-14 21:53:16.845975] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b131300 name Existed_Raid, state configuring 00:11:16.317 21:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:16.575 [2024-05-14 21:53:17.073973] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.575 [2024-05-14 21:53:17.074047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.575 [2024-05-14 21:53:17.074053] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.575 [2024-05-14 21:53:17.074063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.575 [2024-05-14 21:53:17.074066] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.575 [2024-05-14 21:53:17.074074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.575 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.834 [2024-05-14 21:53:17.343039] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.834 BaseBdev1 00:11:16.834 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:11:16.834 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:16.834 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:16.834 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:16.834 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:16.834 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:16.834 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.093 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.351 [ 00:11:17.351 { 00:11:17.351 "name": "BaseBdev1", 00:11:17.351 "aliases": [ 00:11:17.351 "5f0733fc-123c-11ef-8c90-4585f0cfab08" 00:11:17.351 ], 00:11:17.351 "product_name": "Malloc disk", 00:11:17.351 "block_size": 512, 00:11:17.351 "num_blocks": 65536, 00:11:17.351 "uuid": "5f0733fc-123c-11ef-8c90-4585f0cfab08", 00:11:17.351 "assigned_rate_limits": { 00:11:17.351 "rw_ios_per_sec": 0, 00:11:17.351 "rw_mbytes_per_sec": 0, 00:11:17.351 "r_mbytes_per_sec": 0, 00:11:17.351 "w_mbytes_per_sec": 0 00:11:17.351 }, 00:11:17.351 "claimed": true, 00:11:17.351 "claim_type": "exclusive_write", 00:11:17.351 "zoned": false, 00:11:17.351 "supported_io_types": { 00:11:17.351 "read": true, 00:11:17.351 "write": true, 00:11:17.351 "unmap": true, 00:11:17.351 "write_zeroes": true, 00:11:17.351 "flush": true, 00:11:17.351 "reset": true, 00:11:17.351 "compare": false, 00:11:17.351 "compare_and_write": false, 00:11:17.351 "abort": true, 00:11:17.351 "nvme_admin": false, 00:11:17.351 "nvme_io": false 00:11:17.351 }, 00:11:17.351 "memory_domains": [ 00:11:17.351 { 00:11:17.351 "dma_device_id": "system", 00:11:17.351 "dma_device_type": 1 00:11:17.351 }, 00:11:17.351 { 00:11:17.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.352 "dma_device_type": 2 00:11:17.352 } 00:11:17.352 ], 00:11:17.352 "driver_specific": {} 00:11:17.352 } 00:11:17.352 ] 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.352 21:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.610 21:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:17.610 "name": "Existed_Raid", 00:11:17.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.610 "strip_size_kb": 64, 00:11:17.610 "state": "configuring", 00:11:17.610 "raid_level": 
"concat", 00:11:17.610 "superblock": false, 00:11:17.610 "num_base_bdevs": 3, 00:11:17.610 "num_base_bdevs_discovered": 1, 00:11:17.610 "num_base_bdevs_operational": 3, 00:11:17.610 "base_bdevs_list": [ 00:11:17.610 { 00:11:17.610 "name": "BaseBdev1", 00:11:17.610 "uuid": "5f0733fc-123c-11ef-8c90-4585f0cfab08", 00:11:17.610 "is_configured": true, 00:11:17.610 "data_offset": 0, 00:11:17.610 "data_size": 65536 00:11:17.610 }, 00:11:17.610 { 00:11:17.610 "name": "BaseBdev2", 00:11:17.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.610 "is_configured": false, 00:11:17.610 "data_offset": 0, 00:11:17.610 "data_size": 0 00:11:17.610 }, 00:11:17.610 { 00:11:17.610 "name": "BaseBdev3", 00:11:17.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.610 "is_configured": false, 00:11:17.610 "data_offset": 0, 00:11:17.610 "data_size": 0 00:11:17.610 } 00:11:17.610 ] 00:11:17.610 }' 00:11:17.610 21:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:17.610 21:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.178 21:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:18.437 [2024-05-14 21:53:18.794101] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.437 [2024-05-14 21:53:18.794135] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b131300 name Existed_Raid, state configuring 00:11:18.437 21:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:18.438 [2024-05-14 21:53:19.026124] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.438 [2024-05-14 21:53:19.026930] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.438 [2024-05-14 21:53:19.026973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.438 [2024-05-14 21:53:19.026986] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.438 [2024-05-14 21:53:19.026996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:18.696 21:53:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:18.696 "name": "Existed_Raid", 00:11:18.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.696 "strip_size_kb": 64, 00:11:18.696 "state": "configuring", 00:11:18.696 "raid_level": "concat", 00:11:18.696 "superblock": false, 00:11:18.696 "num_base_bdevs": 3, 00:11:18.696 "num_base_bdevs_discovered": 1, 00:11:18.696 "num_base_bdevs_operational": 3, 00:11:18.696 "base_bdevs_list": [ 00:11:18.696 { 00:11:18.696 "name": "BaseBdev1", 00:11:18.696 "uuid": "5f0733fc-123c-11ef-8c90-4585f0cfab08", 00:11:18.696 "is_configured": true, 00:11:18.696 "data_offset": 0, 00:11:18.696 "data_size": 65536 00:11:18.696 }, 00:11:18.696 { 00:11:18.696 "name": "BaseBdev2", 00:11:18.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.696 "is_configured": false, 00:11:18.696 "data_offset": 0, 00:11:18.696 "data_size": 0 00:11:18.696 }, 00:11:18.696 { 00:11:18.696 "name": "BaseBdev3", 00:11:18.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.696 "is_configured": false, 00:11:18.696 "data_offset": 0, 00:11:18.696 "data_size": 0 00:11:18.696 } 00:11:18.696 ] 00:11:18.696 }' 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:18.696 21:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.262 [2024-05-14 21:53:19.802280] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.262 BaseBdev2 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:19.262 21:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:19.521 21:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:19.780 [ 00:11:19.780 { 00:11:19.780 "name": "BaseBdev2", 00:11:19.780 "aliases": [ 00:11:19.780 
"607e9752-123c-11ef-8c90-4585f0cfab08" 00:11:19.780 ], 00:11:19.780 "product_name": "Malloc disk", 00:11:19.780 "block_size": 512, 00:11:19.780 "num_blocks": 65536, 00:11:19.780 "uuid": "607e9752-123c-11ef-8c90-4585f0cfab08", 00:11:19.780 "assigned_rate_limits": { 00:11:19.780 "rw_ios_per_sec": 0, 00:11:19.780 "rw_mbytes_per_sec": 0, 00:11:19.780 "r_mbytes_per_sec": 0, 00:11:19.780 "w_mbytes_per_sec": 0 00:11:19.780 }, 00:11:19.780 "claimed": true, 00:11:19.780 "claim_type": "exclusive_write", 00:11:19.780 "zoned": false, 00:11:19.780 "supported_io_types": { 00:11:19.780 "read": true, 00:11:19.780 "write": true, 00:11:19.780 "unmap": true, 00:11:19.780 "write_zeroes": true, 00:11:19.780 "flush": true, 00:11:19.780 "reset": true, 00:11:19.780 "compare": false, 00:11:19.780 "compare_and_write": false, 00:11:19.780 "abort": true, 00:11:19.780 "nvme_admin": false, 00:11:19.780 "nvme_io": false 00:11:19.780 }, 00:11:19.780 "memory_domains": [ 00:11:19.780 { 00:11:19.780 "dma_device_id": "system", 00:11:19.780 "dma_device_type": 1 00:11:19.780 }, 00:11:19.780 { 00:11:19.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.780 "dma_device_type": 2 00:11:19.780 } 00:11:19.780 ], 00:11:19.780 "driver_specific": {} 00:11:19.780 } 00:11:19.780 ] 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.780 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.039 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:20.039 "name": "Existed_Raid", 00:11:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.039 "strip_size_kb": 64, 00:11:20.039 "state": "configuring", 00:11:20.039 "raid_level": "concat", 00:11:20.039 "superblock": false, 00:11:20.039 "num_base_bdevs": 3, 00:11:20.039 "num_base_bdevs_discovered": 2, 00:11:20.039 "num_base_bdevs_operational": 3, 00:11:20.039 "base_bdevs_list": [ 
00:11:20.039 { 00:11:20.039 "name": "BaseBdev1", 00:11:20.039 "uuid": "5f0733fc-123c-11ef-8c90-4585f0cfab08", 00:11:20.039 "is_configured": true, 00:11:20.039 "data_offset": 0, 00:11:20.039 "data_size": 65536 00:11:20.039 }, 00:11:20.039 { 00:11:20.039 "name": "BaseBdev2", 00:11:20.039 "uuid": "607e9752-123c-11ef-8c90-4585f0cfab08", 00:11:20.039 "is_configured": true, 00:11:20.039 "data_offset": 0, 00:11:20.039 "data_size": 65536 00:11:20.039 }, 00:11:20.039 { 00:11:20.039 "name": "BaseBdev3", 00:11:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.039 "is_configured": false, 00:11:20.039 "data_offset": 0, 00:11:20.039 "data_size": 0 00:11:20.039 } 00:11:20.039 ] 00:11:20.039 }' 00:11:20.039 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:20.039 21:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.298 21:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.557 [2024-05-14 21:53:21.102284] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.557 [2024-05-14 21:53:21.102313] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b131300 00:11:20.557 [2024-05-14 21:53:21.102318] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:20.557 [2024-05-14 21:53:21.102341] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b18fec0 00:11:20.557 [2024-05-14 21:53:21.102448] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b131300 00:11:20.557 [2024-05-14 21:53:21.102453] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b131300 00:11:20.557 [2024-05-14 21:53:21.102486] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.557 BaseBdev3 00:11:20.557 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:11:20.557 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:20.557 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:20.557 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:20.557 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:20.557 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:20.557 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:20.815 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:21.074 [ 00:11:21.074 { 00:11:21.074 "name": "BaseBdev3", 00:11:21.074 "aliases": [ 00:11:21.074 "6144f57c-123c-11ef-8c90-4585f0cfab08" 00:11:21.074 ], 00:11:21.074 "product_name": "Malloc disk", 00:11:21.074 "block_size": 512, 00:11:21.074 "num_blocks": 65536, 00:11:21.074 "uuid": "6144f57c-123c-11ef-8c90-4585f0cfab08", 00:11:21.074 "assigned_rate_limits": { 00:11:21.074 "rw_ios_per_sec": 0, 00:11:21.074 "rw_mbytes_per_sec": 0, 00:11:21.074 
"r_mbytes_per_sec": 0, 00:11:21.074 "w_mbytes_per_sec": 0 00:11:21.074 }, 00:11:21.074 "claimed": true, 00:11:21.074 "claim_type": "exclusive_write", 00:11:21.074 "zoned": false, 00:11:21.074 "supported_io_types": { 00:11:21.074 "read": true, 00:11:21.074 "write": true, 00:11:21.074 "unmap": true, 00:11:21.074 "write_zeroes": true, 00:11:21.074 "flush": true, 00:11:21.074 "reset": true, 00:11:21.074 "compare": false, 00:11:21.074 "compare_and_write": false, 00:11:21.074 "abort": true, 00:11:21.074 "nvme_admin": false, 00:11:21.074 "nvme_io": false 00:11:21.074 }, 00:11:21.074 "memory_domains": [ 00:11:21.074 { 00:11:21.074 "dma_device_id": "system", 00:11:21.074 "dma_device_type": 1 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.074 "dma_device_type": 2 00:11:21.074 } 00:11:21.074 ], 00:11:21.074 "driver_specific": {} 00:11:21.074 } 00:11:21.074 ] 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.074 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.333 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:21.334 "name": "Existed_Raid", 00:11:21.334 "uuid": "6144fb86-123c-11ef-8c90-4585f0cfab08", 00:11:21.334 "strip_size_kb": 64, 00:11:21.334 "state": "online", 00:11:21.334 "raid_level": "concat", 00:11:21.334 "superblock": false, 00:11:21.334 "num_base_bdevs": 3, 00:11:21.334 "num_base_bdevs_discovered": 3, 00:11:21.334 "num_base_bdevs_operational": 3, 00:11:21.334 "base_bdevs_list": [ 00:11:21.334 { 00:11:21.334 "name": "BaseBdev1", 00:11:21.334 "uuid": "5f0733fc-123c-11ef-8c90-4585f0cfab08", 00:11:21.334 "is_configured": true, 00:11:21.334 "data_offset": 0, 00:11:21.334 "data_size": 65536 00:11:21.334 }, 00:11:21.334 { 00:11:21.334 "name": "BaseBdev2", 00:11:21.334 "uuid": "607e9752-123c-11ef-8c90-4585f0cfab08", 00:11:21.334 "is_configured": 
true, 00:11:21.334 "data_offset": 0, 00:11:21.334 "data_size": 65536 00:11:21.334 }, 00:11:21.334 { 00:11:21.334 "name": "BaseBdev3", 00:11:21.334 "uuid": "6144f57c-123c-11ef-8c90-4585f0cfab08", 00:11:21.334 "is_configured": true, 00:11:21.334 "data_offset": 0, 00:11:21.334 "data_size": 65536 00:11:21.334 } 00:11:21.334 ] 00:11:21.334 }' 00:11:21.334 21:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:21.334 21:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:21.901 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:21.901 [2024-05-14 21:53:22.474250] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.161 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:22.161 "name": "Existed_Raid", 00:11:22.161 "aliases": [ 00:11:22.161 "6144fb86-123c-11ef-8c90-4585f0cfab08" 00:11:22.161 ], 00:11:22.161 "product_name": "Raid Volume", 00:11:22.161 "block_size": 512, 00:11:22.161 "num_blocks": 196608, 00:11:22.161 "uuid": "6144fb86-123c-11ef-8c90-4585f0cfab08", 00:11:22.161 "assigned_rate_limits": { 00:11:22.161 "rw_ios_per_sec": 0, 00:11:22.161 "rw_mbytes_per_sec": 0, 00:11:22.161 "r_mbytes_per_sec": 0, 00:11:22.161 "w_mbytes_per_sec": 0 00:11:22.161 }, 00:11:22.161 "claimed": false, 00:11:22.161 "zoned": false, 00:11:22.161 "supported_io_types": { 00:11:22.161 "read": true, 00:11:22.161 "write": true, 00:11:22.161 "unmap": true, 00:11:22.161 "write_zeroes": true, 00:11:22.161 "flush": true, 00:11:22.161 "reset": true, 00:11:22.161 "compare": false, 00:11:22.161 "compare_and_write": false, 00:11:22.161 "abort": false, 00:11:22.161 "nvme_admin": false, 00:11:22.161 "nvme_io": false 00:11:22.161 }, 00:11:22.161 "memory_domains": [ 00:11:22.161 { 00:11:22.161 "dma_device_id": "system", 00:11:22.161 "dma_device_type": 1 00:11:22.161 }, 00:11:22.161 { 00:11:22.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.161 "dma_device_type": 2 00:11:22.161 }, 00:11:22.161 { 00:11:22.161 "dma_device_id": "system", 00:11:22.161 "dma_device_type": 1 00:11:22.161 }, 00:11:22.161 { 00:11:22.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.161 "dma_device_type": 2 00:11:22.161 }, 00:11:22.161 { 00:11:22.161 "dma_device_id": "system", 00:11:22.161 "dma_device_type": 1 00:11:22.161 }, 00:11:22.161 { 00:11:22.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.161 "dma_device_type": 2 00:11:22.161 } 00:11:22.161 ], 00:11:22.161 "driver_specific": { 00:11:22.161 "raid": { 00:11:22.161 "uuid": "6144fb86-123c-11ef-8c90-4585f0cfab08", 00:11:22.161 "strip_size_kb": 
64, 00:11:22.161 "state": "online", 00:11:22.161 "raid_level": "concat", 00:11:22.161 "superblock": false, 00:11:22.161 "num_base_bdevs": 3, 00:11:22.161 "num_base_bdevs_discovered": 3, 00:11:22.161 "num_base_bdevs_operational": 3, 00:11:22.161 "base_bdevs_list": [ 00:11:22.161 { 00:11:22.161 "name": "BaseBdev1", 00:11:22.161 "uuid": "5f0733fc-123c-11ef-8c90-4585f0cfab08", 00:11:22.161 "is_configured": true, 00:11:22.161 "data_offset": 0, 00:11:22.161 "data_size": 65536 00:11:22.161 }, 00:11:22.161 { 00:11:22.161 "name": "BaseBdev2", 00:11:22.161 "uuid": "607e9752-123c-11ef-8c90-4585f0cfab08", 00:11:22.161 "is_configured": true, 00:11:22.161 "data_offset": 0, 00:11:22.161 "data_size": 65536 00:11:22.161 }, 00:11:22.161 { 00:11:22.161 "name": "BaseBdev3", 00:11:22.161 "uuid": "6144f57c-123c-11ef-8c90-4585f0cfab08", 00:11:22.161 "is_configured": true, 00:11:22.161 "data_offset": 0, 00:11:22.161 "data_size": 65536 00:11:22.161 } 00:11:22.161 ] 00:11:22.161 } 00:11:22.161 } 00:11:22.161 }' 00:11:22.161 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.161 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:11:22.161 BaseBdev2 00:11:22.161 BaseBdev3' 00:11:22.161 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:22.161 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:22.161 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:22.419 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:22.419 "name": "BaseBdev1", 00:11:22.419 "aliases": [ 00:11:22.419 "5f0733fc-123c-11ef-8c90-4585f0cfab08" 00:11:22.420 ], 00:11:22.420 "product_name": "Malloc disk", 00:11:22.420 "block_size": 512, 00:11:22.420 "num_blocks": 65536, 00:11:22.420 "uuid": "5f0733fc-123c-11ef-8c90-4585f0cfab08", 00:11:22.420 "assigned_rate_limits": { 00:11:22.420 "rw_ios_per_sec": 0, 00:11:22.420 "rw_mbytes_per_sec": 0, 00:11:22.420 "r_mbytes_per_sec": 0, 00:11:22.420 "w_mbytes_per_sec": 0 00:11:22.420 }, 00:11:22.420 "claimed": true, 00:11:22.420 "claim_type": "exclusive_write", 00:11:22.420 "zoned": false, 00:11:22.420 "supported_io_types": { 00:11:22.420 "read": true, 00:11:22.420 "write": true, 00:11:22.420 "unmap": true, 00:11:22.420 "write_zeroes": true, 00:11:22.420 "flush": true, 00:11:22.420 "reset": true, 00:11:22.420 "compare": false, 00:11:22.420 "compare_and_write": false, 00:11:22.420 "abort": true, 00:11:22.420 "nvme_admin": false, 00:11:22.420 "nvme_io": false 00:11:22.420 }, 00:11:22.420 "memory_domains": [ 00:11:22.420 { 00:11:22.420 "dma_device_id": "system", 00:11:22.420 "dma_device_type": 1 00:11:22.420 }, 00:11:22.420 { 00:11:22.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.420 "dma_device_type": 2 00:11:22.420 } 00:11:22.420 ], 00:11:22.420 "driver_specific": {} 00:11:22.420 }' 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:22.420 21:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:22.679 "name": "BaseBdev2", 00:11:22.679 "aliases": [ 00:11:22.679 "607e9752-123c-11ef-8c90-4585f0cfab08" 00:11:22.679 ], 00:11:22.679 "product_name": "Malloc disk", 00:11:22.679 "block_size": 512, 00:11:22.679 "num_blocks": 65536, 00:11:22.679 "uuid": "607e9752-123c-11ef-8c90-4585f0cfab08", 00:11:22.679 "assigned_rate_limits": { 00:11:22.679 "rw_ios_per_sec": 0, 00:11:22.679 "rw_mbytes_per_sec": 0, 00:11:22.679 "r_mbytes_per_sec": 0, 00:11:22.679 "w_mbytes_per_sec": 0 00:11:22.679 }, 00:11:22.679 "claimed": true, 00:11:22.679 "claim_type": "exclusive_write", 00:11:22.679 "zoned": false, 00:11:22.679 "supported_io_types": { 00:11:22.679 "read": true, 00:11:22.679 "write": true, 00:11:22.679 "unmap": true, 00:11:22.679 "write_zeroes": true, 00:11:22.679 "flush": true, 00:11:22.679 "reset": true, 00:11:22.679 "compare": false, 00:11:22.679 "compare_and_write": false, 00:11:22.679 "abort": true, 00:11:22.679 "nvme_admin": false, 00:11:22.679 "nvme_io": false 00:11:22.679 }, 00:11:22.679 "memory_domains": [ 00:11:22.679 { 00:11:22.679 "dma_device_id": "system", 00:11:22.679 "dma_device_type": 1 00:11:22.679 }, 00:11:22.679 { 00:11:22.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.679 "dma_device_type": 2 00:11:22.679 } 00:11:22.679 ], 00:11:22.679 "driver_specific": {} 00:11:22.679 }' 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:22.679 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:22.938 "name": "BaseBdev3", 00:11:22.938 "aliases": [ 00:11:22.938 "6144f57c-123c-11ef-8c90-4585f0cfab08" 00:11:22.938 ], 00:11:22.938 "product_name": "Malloc disk", 00:11:22.938 "block_size": 512, 00:11:22.938 "num_blocks": 65536, 00:11:22.938 "uuid": "6144f57c-123c-11ef-8c90-4585f0cfab08", 00:11:22.938 "assigned_rate_limits": { 00:11:22.938 "rw_ios_per_sec": 0, 00:11:22.938 "rw_mbytes_per_sec": 0, 00:11:22.938 "r_mbytes_per_sec": 0, 00:11:22.938 "w_mbytes_per_sec": 0 00:11:22.938 }, 00:11:22.938 "claimed": true, 00:11:22.938 "claim_type": "exclusive_write", 00:11:22.938 "zoned": false, 00:11:22.938 "supported_io_types": { 00:11:22.938 "read": true, 00:11:22.938 "write": true, 00:11:22.938 "unmap": true, 00:11:22.938 "write_zeroes": true, 00:11:22.938 "flush": true, 00:11:22.938 "reset": true, 00:11:22.938 "compare": false, 00:11:22.938 "compare_and_write": false, 00:11:22.938 "abort": true, 00:11:22.938 "nvme_admin": false, 00:11:22.938 "nvme_io": false 00:11:22.938 }, 00:11:22.938 "memory_domains": [ 00:11:22.938 { 00:11:22.938 "dma_device_id": "system", 00:11:22.938 "dma_device_type": 1 00:11:22.938 }, 00:11:22.938 { 00:11:22.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.938 "dma_device_type": 2 00:11:22.938 } 00:11:22.938 ], 00:11:22.938 "driver_specific": {} 00:11:22.938 }' 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:22.938 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:23.197 [2024-05-14 21:53:23.730250] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.197 [2024-05-14 21:53:23.730281] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.197 [2024-05-14 21:53:23.730297] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.197 21:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.456 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:23.456 "name": "Existed_Raid", 00:11:23.456 "uuid": "6144fb86-123c-11ef-8c90-4585f0cfab08", 00:11:23.456 "strip_size_kb": 64, 00:11:23.456 "state": "offline", 00:11:23.456 "raid_level": "concat", 00:11:23.456 "superblock": false, 00:11:23.456 "num_base_bdevs": 3, 00:11:23.456 "num_base_bdevs_discovered": 2, 00:11:23.456 "num_base_bdevs_operational": 2, 00:11:23.456 "base_bdevs_list": [ 00:11:23.456 { 00:11:23.456 "name": null, 00:11:23.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.456 "is_configured": false, 00:11:23.456 "data_offset": 0, 00:11:23.456 "data_size": 65536 00:11:23.456 }, 00:11:23.456 { 00:11:23.456 "name": "BaseBdev2", 00:11:23.456 "uuid": "607e9752-123c-11ef-8c90-4585f0cfab08", 00:11:23.456 "is_configured": true, 00:11:23.456 "data_offset": 0, 00:11:23.456 "data_size": 65536 00:11:23.456 }, 00:11:23.456 { 00:11:23.456 "name": "BaseBdev3", 00:11:23.456 "uuid": "6144f57c-123c-11ef-8c90-4585f0cfab08", 00:11:23.456 "is_configured": true, 00:11:23.456 "data_offset": 0, 00:11:23.456 "data_size": 65536 
00:11:23.456 } 00:11:23.456 ] 00:11:23.456 }' 00:11:23.456 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:23.456 21:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.025 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.025 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.025 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.025 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:24.284 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:24.284 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.284 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:24.284 [2024-05-14 21:53:24.864125] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.542 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.542 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.542 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.542 21:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:24.542 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:24.542 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.542 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:24.800 [2024-05-14 21:53:25.337910] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.800 [2024-05-14 21:53:25.337944] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b131300 name Existed_Raid, state offline 00:11:24.800 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.800 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.800 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.800 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:11:25.058 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:11:25.058 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:11:25.058 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:11:25.058 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:11:25.058 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:25.058 21:53:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.317 BaseBdev2 00:11:25.317 21:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:11:25.317 21:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:25.317 21:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:25.317 21:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:25.317 21:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:25.317 21:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:25.317 21:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:25.575 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.833 [ 00:11:25.833 { 00:11:25.833 "name": "BaseBdev2", 00:11:25.833 "aliases": [ 00:11:25.833 "6414cf76-123c-11ef-8c90-4585f0cfab08" 00:11:25.833 ], 00:11:25.833 "product_name": "Malloc disk", 00:11:25.833 "block_size": 512, 00:11:25.833 "num_blocks": 65536, 00:11:25.833 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:25.834 "assigned_rate_limits": { 00:11:25.834 "rw_ios_per_sec": 0, 00:11:25.834 "rw_mbytes_per_sec": 0, 00:11:25.834 "r_mbytes_per_sec": 0, 00:11:25.834 "w_mbytes_per_sec": 0 00:11:25.834 }, 00:11:25.834 "claimed": false, 00:11:25.834 "zoned": false, 00:11:25.834 "supported_io_types": { 00:11:25.834 "read": true, 00:11:25.834 "write": true, 00:11:25.834 "unmap": true, 00:11:25.834 "write_zeroes": true, 00:11:25.834 "flush": true, 00:11:25.834 "reset": true, 00:11:25.834 "compare": false, 00:11:25.834 "compare_and_write": false, 00:11:25.834 "abort": true, 00:11:25.834 "nvme_admin": false, 00:11:25.834 "nvme_io": false 00:11:25.834 }, 00:11:25.834 "memory_domains": [ 00:11:25.834 { 00:11:25.834 "dma_device_id": "system", 00:11:25.834 "dma_device_type": 1 00:11:25.834 }, 00:11:25.834 { 00:11:25.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.834 "dma_device_type": 2 00:11:25.834 } 00:11:25.834 ], 00:11:25.834 "driver_specific": {} 00:11:25.834 } 00:11:25.834 ] 00:11:25.834 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:25.834 21:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:11:25.834 21:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:25.834 21:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.093 BaseBdev3 00:11:26.093 21:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:11:26.093 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:26.093 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:26.093 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local i 00:11:26.093 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:26.093 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:26.093 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:26.352 21:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.610 [ 00:11:26.610 { 00:11:26.610 "name": "BaseBdev3", 00:11:26.611 "aliases": [ 00:11:26.611 "648bd4e0-123c-11ef-8c90-4585f0cfab08" 00:11:26.611 ], 00:11:26.611 "product_name": "Malloc disk", 00:11:26.611 "block_size": 512, 00:11:26.611 "num_blocks": 65536, 00:11:26.611 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:26.611 "assigned_rate_limits": { 00:11:26.611 "rw_ios_per_sec": 0, 00:11:26.611 "rw_mbytes_per_sec": 0, 00:11:26.611 "r_mbytes_per_sec": 0, 00:11:26.611 "w_mbytes_per_sec": 0 00:11:26.611 }, 00:11:26.611 "claimed": false, 00:11:26.611 "zoned": false, 00:11:26.611 "supported_io_types": { 00:11:26.611 "read": true, 00:11:26.611 "write": true, 00:11:26.611 "unmap": true, 00:11:26.611 "write_zeroes": true, 00:11:26.611 "flush": true, 00:11:26.611 "reset": true, 00:11:26.611 "compare": false, 00:11:26.611 "compare_and_write": false, 00:11:26.611 "abort": true, 00:11:26.611 "nvme_admin": false, 00:11:26.611 "nvme_io": false 00:11:26.611 }, 00:11:26.611 "memory_domains": [ 00:11:26.611 { 00:11:26.611 "dma_device_id": "system", 00:11:26.611 "dma_device_type": 1 00:11:26.611 }, 00:11:26.611 { 00:11:26.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.611 "dma_device_type": 2 00:11:26.611 } 00:11:26.611 ], 00:11:26.611 "driver_specific": {} 00:11:26.611 } 00:11:26.611 ] 00:11:26.611 21:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:26.611 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:11:26.611 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:26.611 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:26.869 [2024-05-14 21:53:27.355817] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.869 [2024-05-14 21:53:27.355890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.869 [2024-05-14 21:53:27.355899] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.869 [2024-05-14 21:53:27.356465] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:26.869 21:53:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.869 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.132 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:27.132 "name": "Existed_Raid", 00:11:27.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.132 "strip_size_kb": 64, 00:11:27.132 "state": "configuring", 00:11:27.132 "raid_level": "concat", 00:11:27.132 "superblock": false, 00:11:27.132 "num_base_bdevs": 3, 00:11:27.132 "num_base_bdevs_discovered": 2, 00:11:27.132 "num_base_bdevs_operational": 3, 00:11:27.132 "base_bdevs_list": [ 00:11:27.132 { 00:11:27.132 "name": "BaseBdev1", 00:11:27.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.132 "is_configured": false, 00:11:27.132 "data_offset": 0, 00:11:27.132 "data_size": 0 00:11:27.132 }, 00:11:27.132 { 00:11:27.132 "name": "BaseBdev2", 00:11:27.132 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:27.132 "is_configured": true, 00:11:27.132 "data_offset": 0, 00:11:27.132 "data_size": 65536 00:11:27.132 }, 00:11:27.132 { 00:11:27.132 "name": "BaseBdev3", 00:11:27.132 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:27.132 "is_configured": true, 00:11:27.132 "data_offset": 0, 00:11:27.132 "data_size": 65536 00:11:27.132 } 00:11:27.132 ] 00:11:27.132 }' 00:11:27.132 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:27.132 21:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.391 21:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:27.648 [2024-05-14 21:53:28.187833] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:27.648 21:53:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.648 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.904 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:27.905 "name": "Existed_Raid", 00:11:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.905 "strip_size_kb": 64, 00:11:27.905 "state": "configuring", 00:11:27.905 "raid_level": "concat", 00:11:27.905 "superblock": false, 00:11:27.905 "num_base_bdevs": 3, 00:11:27.905 "num_base_bdevs_discovered": 1, 00:11:27.905 "num_base_bdevs_operational": 3, 00:11:27.905 "base_bdevs_list": [ 00:11:27.905 { 00:11:27.905 "name": "BaseBdev1", 00:11:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.905 "is_configured": false, 00:11:27.905 "data_offset": 0, 00:11:27.905 "data_size": 0 00:11:27.905 }, 00:11:27.905 { 00:11:27.905 "name": null, 00:11:27.905 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:27.905 "is_configured": false, 00:11:27.905 "data_offset": 0, 00:11:27.905 "data_size": 65536 00:11:27.905 }, 00:11:27.905 { 00:11:27.905 "name": "BaseBdev3", 00:11:27.905 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:27.905 "is_configured": true, 00:11:27.905 "data_offset": 0, 00:11:27.905 "data_size": 65536 00:11:27.905 } 00:11:27.905 ] 00:11:27.905 }' 00:11:27.905 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:27.905 21:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.467 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.467 21:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.467 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:11:28.467 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.726 [2024-05-14 21:53:29.271986] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.726 BaseBdev1 00:11:28.726 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:11:28.726 21:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:28.726 21:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:28.726 21:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:28.726 21:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:28.726 21:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:28.726 21:53:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:28.984 21:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.242 [ 00:11:29.242 { 00:11:29.242 "name": "BaseBdev1", 00:11:29.242 "aliases": [ 00:11:29.242 "66238e59-123c-11ef-8c90-4585f0cfab08" 00:11:29.242 ], 00:11:29.242 "product_name": "Malloc disk", 00:11:29.242 "block_size": 512, 00:11:29.242 "num_blocks": 65536, 00:11:29.242 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:29.242 "assigned_rate_limits": { 00:11:29.242 "rw_ios_per_sec": 0, 00:11:29.242 "rw_mbytes_per_sec": 0, 00:11:29.242 "r_mbytes_per_sec": 0, 00:11:29.242 "w_mbytes_per_sec": 0 00:11:29.242 }, 00:11:29.242 "claimed": true, 00:11:29.242 "claim_type": "exclusive_write", 00:11:29.242 "zoned": false, 00:11:29.242 "supported_io_types": { 00:11:29.242 "read": true, 00:11:29.242 "write": true, 00:11:29.242 "unmap": true, 00:11:29.242 "write_zeroes": true, 00:11:29.242 "flush": true, 00:11:29.242 "reset": true, 00:11:29.242 "compare": false, 00:11:29.242 "compare_and_write": false, 00:11:29.242 "abort": true, 00:11:29.242 "nvme_admin": false, 00:11:29.242 "nvme_io": false 00:11:29.242 }, 00:11:29.242 "memory_domains": [ 00:11:29.242 { 00:11:29.242 "dma_device_id": "system", 00:11:29.242 "dma_device_type": 1 00:11:29.242 }, 00:11:29.242 { 00:11:29.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.242 "dma_device_type": 2 00:11:29.242 } 00:11:29.242 ], 00:11:29.242 "driver_specific": {} 00:11:29.242 } 00:11:29.242 ] 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.242 21:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.501 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:29.501 "name": "Existed_Raid", 00:11:29.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.501 "strip_size_kb": 64, 00:11:29.501 "state": "configuring", 
00:11:29.501 "raid_level": "concat", 00:11:29.501 "superblock": false, 00:11:29.501 "num_base_bdevs": 3, 00:11:29.501 "num_base_bdevs_discovered": 2, 00:11:29.501 "num_base_bdevs_operational": 3, 00:11:29.501 "base_bdevs_list": [ 00:11:29.501 { 00:11:29.501 "name": "BaseBdev1", 00:11:29.501 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:29.501 "is_configured": true, 00:11:29.501 "data_offset": 0, 00:11:29.501 "data_size": 65536 00:11:29.501 }, 00:11:29.501 { 00:11:29.501 "name": null, 00:11:29.501 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:29.501 "is_configured": false, 00:11:29.501 "data_offset": 0, 00:11:29.501 "data_size": 65536 00:11:29.501 }, 00:11:29.501 { 00:11:29.501 "name": "BaseBdev3", 00:11:29.501 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:29.501 "is_configured": true, 00:11:29.501 "data_offset": 0, 00:11:29.501 "data_size": 65536 00:11:29.501 } 00:11:29.501 ] 00:11:29.501 }' 00:11:29.501 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:29.501 21:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.066 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.066 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.066 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.066 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:30.632 [2024-05-14 21:53:30.923902] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.632 21:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.632 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:30.632 "name": "Existed_Raid", 00:11:30.632 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:30.632 "strip_size_kb": 64, 00:11:30.632 "state": "configuring", 00:11:30.632 "raid_level": "concat", 00:11:30.632 "superblock": false, 00:11:30.632 "num_base_bdevs": 3, 00:11:30.632 "num_base_bdevs_discovered": 1, 00:11:30.632 "num_base_bdevs_operational": 3, 00:11:30.632 "base_bdevs_list": [ 00:11:30.632 { 00:11:30.632 "name": "BaseBdev1", 00:11:30.632 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:30.632 "is_configured": true, 00:11:30.632 "data_offset": 0, 00:11:30.632 "data_size": 65536 00:11:30.632 }, 00:11:30.632 { 00:11:30.632 "name": null, 00:11:30.632 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:30.632 "is_configured": false, 00:11:30.632 "data_offset": 0, 00:11:30.632 "data_size": 65536 00:11:30.632 }, 00:11:30.632 { 00:11:30.632 "name": null, 00:11:30.632 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:30.632 "is_configured": false, 00:11:30.632 "data_offset": 0, 00:11:30.632 "data_size": 65536 00:11:30.632 } 00:11:30.632 ] 00:11:30.632 }' 00:11:30.632 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:30.632 21:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.198 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.198 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:31.198 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:11:31.198 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:31.456 [2024-05-14 21:53:31.979935] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:31.457 21:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:31.457 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:31.457 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:31.457 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.457 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.715 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:11:31.715 "name": "Existed_Raid", 00:11:31.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.715 "strip_size_kb": 64, 00:11:31.715 "state": "configuring", 00:11:31.715 "raid_level": "concat", 00:11:31.715 "superblock": false, 00:11:31.715 "num_base_bdevs": 3, 00:11:31.715 "num_base_bdevs_discovered": 2, 00:11:31.715 "num_base_bdevs_operational": 3, 00:11:31.715 "base_bdevs_list": [ 00:11:31.715 { 00:11:31.715 "name": "BaseBdev1", 00:11:31.715 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:31.715 "is_configured": true, 00:11:31.715 "data_offset": 0, 00:11:31.715 "data_size": 65536 00:11:31.715 }, 00:11:31.715 { 00:11:31.715 "name": null, 00:11:31.715 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:31.715 "is_configured": false, 00:11:31.715 "data_offset": 0, 00:11:31.715 "data_size": 65536 00:11:31.715 }, 00:11:31.715 { 00:11:31.715 "name": "BaseBdev3", 00:11:31.715 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:31.715 "is_configured": true, 00:11:31.715 "data_offset": 0, 00:11:31.715 "data_size": 65536 00:11:31.715 } 00:11:31.715 ] 00:11:31.715 }' 00:11:31.715 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:31.715 21:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.339 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.339 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.339 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:11:32.339 21:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:32.598 [2024-05-14 21:53:33.043966] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.598 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.856 21:53:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:32.856 "name": "Existed_Raid", 00:11:32.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.856 "strip_size_kb": 64, 00:11:32.856 "state": "configuring", 00:11:32.856 "raid_level": "concat", 00:11:32.856 "superblock": false, 00:11:32.856 "num_base_bdevs": 3, 00:11:32.856 "num_base_bdevs_discovered": 1, 00:11:32.856 "num_base_bdevs_operational": 3, 00:11:32.856 "base_bdevs_list": [ 00:11:32.856 { 00:11:32.856 "name": null, 00:11:32.856 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:32.856 "is_configured": false, 00:11:32.856 "data_offset": 0, 00:11:32.856 "data_size": 65536 00:11:32.856 }, 00:11:32.856 { 00:11:32.856 "name": null, 00:11:32.856 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:32.856 "is_configured": false, 00:11:32.856 "data_offset": 0, 00:11:32.856 "data_size": 65536 00:11:32.856 }, 00:11:32.856 { 00:11:32.856 "name": "BaseBdev3", 00:11:32.856 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:32.856 "is_configured": true, 00:11:32.856 "data_offset": 0, 00:11:32.856 "data_size": 65536 00:11:32.856 } 00:11:32.856 ] 00:11:32.856 }' 00:11:32.856 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:32.856 21:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.115 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.115 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.681 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:11:33.681 21:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:33.681 [2024-05-14 21:53:34.229776] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:33.681 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.682 21:53:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.940 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:33.940 "name": "Existed_Raid", 00:11:33.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.940 "strip_size_kb": 64, 00:11:33.940 "state": "configuring", 00:11:33.940 "raid_level": "concat", 00:11:33.940 "superblock": false, 00:11:33.940 "num_base_bdevs": 3, 00:11:33.940 "num_base_bdevs_discovered": 2, 00:11:33.940 "num_base_bdevs_operational": 3, 00:11:33.940 "base_bdevs_list": [ 00:11:33.940 { 00:11:33.940 "name": null, 00:11:33.940 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:33.940 "is_configured": false, 00:11:33.940 "data_offset": 0, 00:11:33.940 "data_size": 65536 00:11:33.940 }, 00:11:33.940 { 00:11:33.940 "name": "BaseBdev2", 00:11:33.940 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:33.940 "is_configured": true, 00:11:33.940 "data_offset": 0, 00:11:33.940 "data_size": 65536 00:11:33.940 }, 00:11:33.940 { 00:11:33.940 "name": "BaseBdev3", 00:11:33.941 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:33.941 "is_configured": true, 00:11:33.941 "data_offset": 0, 00:11:33.941 "data_size": 65536 00:11:33.941 } 00:11:33.941 ] 00:11:33.941 }' 00:11:33.941 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:33.941 21:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.199 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.199 21:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.764 21:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:11:34.764 21:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.764 21:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.764 21:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 66238e59-123c-11ef-8c90-4585f0cfab08 00:11:35.022 [2024-05-14 21:53:35.593951] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.022 [2024-05-14 21:53:35.593982] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b131300 00:11:35.022 [2024-05-14 21:53:35.593987] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:35.022 [2024-05-14 21:53:35.594011] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b18fe20 00:11:35.022 [2024-05-14 21:53:35.594083] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b131300 00:11:35.022 [2024-05-14 21:53:35.594088] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b131300 00:11:35.022 [2024-05-14 21:53:35.594121] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.022 NewBaseBdev 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:35.280 21:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.538 [ 00:11:35.538 { 00:11:35.538 "name": "NewBaseBdev", 00:11:35.538 "aliases": [ 00:11:35.538 "66238e59-123c-11ef-8c90-4585f0cfab08" 00:11:35.538 ], 00:11:35.538 "product_name": "Malloc disk", 00:11:35.538 "block_size": 512, 00:11:35.538 "num_blocks": 65536, 00:11:35.538 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:35.538 "assigned_rate_limits": { 00:11:35.538 "rw_ios_per_sec": 0, 00:11:35.538 "rw_mbytes_per_sec": 0, 00:11:35.538 "r_mbytes_per_sec": 0, 00:11:35.538 "w_mbytes_per_sec": 0 00:11:35.538 }, 00:11:35.538 "claimed": true, 00:11:35.538 "claim_type": "exclusive_write", 00:11:35.538 "zoned": false, 00:11:35.538 "supported_io_types": { 00:11:35.538 "read": true, 00:11:35.538 "write": true, 00:11:35.538 "unmap": true, 00:11:35.538 "write_zeroes": true, 00:11:35.538 "flush": true, 00:11:35.538 "reset": true, 00:11:35.538 "compare": false, 00:11:35.538 "compare_and_write": false, 00:11:35.538 "abort": true, 00:11:35.538 "nvme_admin": false, 00:11:35.538 "nvme_io": false 00:11:35.538 }, 00:11:35.538 "memory_domains": [ 00:11:35.538 { 00:11:35.538 "dma_device_id": "system", 00:11:35.538 "dma_device_type": 1 00:11:35.538 }, 00:11:35.538 { 00:11:35.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.538 "dma_device_type": 2 00:11:35.538 } 00:11:35.538 ], 00:11:35.538 "driver_specific": {} 00:11:35.538 } 00:11:35.538 ] 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:35.538 21:53:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.538 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.104 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:36.104 "name": "Existed_Raid", 00:11:36.104 "uuid": "69e83c67-123c-11ef-8c90-4585f0cfab08", 00:11:36.104 "strip_size_kb": 64, 00:11:36.104 "state": "online", 00:11:36.104 "raid_level": "concat", 00:11:36.104 "superblock": false, 00:11:36.104 "num_base_bdevs": 3, 00:11:36.104 "num_base_bdevs_discovered": 3, 00:11:36.104 "num_base_bdevs_operational": 3, 00:11:36.104 "base_bdevs_list": [ 00:11:36.104 { 00:11:36.104 "name": "NewBaseBdev", 00:11:36.104 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:36.104 "is_configured": true, 00:11:36.104 "data_offset": 0, 00:11:36.104 "data_size": 65536 00:11:36.104 }, 00:11:36.104 { 00:11:36.104 "name": "BaseBdev2", 00:11:36.104 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:36.104 "is_configured": true, 00:11:36.104 "data_offset": 0, 00:11:36.104 "data_size": 65536 00:11:36.104 }, 00:11:36.104 { 00:11:36.104 "name": "BaseBdev3", 00:11:36.104 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:36.104 "is_configured": true, 00:11:36.104 "data_offset": 0, 00:11:36.104 "data_size": 65536 00:11:36.104 } 00:11:36.104 ] 00:11:36.104 }' 00:11:36.104 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:36.104 21:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:36.362 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:36.620 [2024-05-14 21:53:36.969886] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.620 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:36.620 "name": "Existed_Raid", 00:11:36.620 "aliases": [ 00:11:36.620 "69e83c67-123c-11ef-8c90-4585f0cfab08" 00:11:36.620 ], 00:11:36.620 "product_name": "Raid Volume", 00:11:36.620 "block_size": 512, 00:11:36.620 "num_blocks": 196608, 00:11:36.620 "uuid": "69e83c67-123c-11ef-8c90-4585f0cfab08", 00:11:36.620 "assigned_rate_limits": { 00:11:36.620 "rw_ios_per_sec": 0, 00:11:36.620 "rw_mbytes_per_sec": 0, 00:11:36.620 "r_mbytes_per_sec": 0, 00:11:36.620 "w_mbytes_per_sec": 0 00:11:36.620 }, 00:11:36.620 "claimed": false, 00:11:36.620 "zoned": false, 00:11:36.620 "supported_io_types": { 00:11:36.620 "read": true, 00:11:36.620 "write": true, 
00:11:36.620 "unmap": true, 00:11:36.620 "write_zeroes": true, 00:11:36.620 "flush": true, 00:11:36.620 "reset": true, 00:11:36.620 "compare": false, 00:11:36.620 "compare_and_write": false, 00:11:36.620 "abort": false, 00:11:36.620 "nvme_admin": false, 00:11:36.620 "nvme_io": false 00:11:36.620 }, 00:11:36.620 "memory_domains": [ 00:11:36.620 { 00:11:36.620 "dma_device_id": "system", 00:11:36.620 "dma_device_type": 1 00:11:36.620 }, 00:11:36.620 { 00:11:36.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.620 "dma_device_type": 2 00:11:36.620 }, 00:11:36.620 { 00:11:36.620 "dma_device_id": "system", 00:11:36.620 "dma_device_type": 1 00:11:36.620 }, 00:11:36.620 { 00:11:36.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.620 "dma_device_type": 2 00:11:36.620 }, 00:11:36.620 { 00:11:36.620 "dma_device_id": "system", 00:11:36.620 "dma_device_type": 1 00:11:36.620 }, 00:11:36.620 { 00:11:36.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.620 "dma_device_type": 2 00:11:36.620 } 00:11:36.620 ], 00:11:36.620 "driver_specific": { 00:11:36.620 "raid": { 00:11:36.620 "uuid": "69e83c67-123c-11ef-8c90-4585f0cfab08", 00:11:36.620 "strip_size_kb": 64, 00:11:36.620 "state": "online", 00:11:36.620 "raid_level": "concat", 00:11:36.620 "superblock": false, 00:11:36.620 "num_base_bdevs": 3, 00:11:36.620 "num_base_bdevs_discovered": 3, 00:11:36.620 "num_base_bdevs_operational": 3, 00:11:36.620 "base_bdevs_list": [ 00:11:36.620 { 00:11:36.620 "name": "NewBaseBdev", 00:11:36.620 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:36.620 "is_configured": true, 00:11:36.620 "data_offset": 0, 00:11:36.620 "data_size": 65536 00:11:36.620 }, 00:11:36.620 { 00:11:36.620 "name": "BaseBdev2", 00:11:36.620 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:36.620 "is_configured": true, 00:11:36.620 "data_offset": 0, 00:11:36.620 "data_size": 65536 00:11:36.620 }, 00:11:36.620 { 00:11:36.620 "name": "BaseBdev3", 00:11:36.620 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:36.620 "is_configured": true, 00:11:36.620 "data_offset": 0, 00:11:36.620 "data_size": 65536 00:11:36.620 } 00:11:36.620 ] 00:11:36.620 } 00:11:36.620 } 00:11:36.620 }' 00:11:36.620 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.620 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:11:36.620 BaseBdev2 00:11:36.620 BaseBdev3' 00:11:36.620 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:36.620 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:36.620 21:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:36.878 "name": "NewBaseBdev", 00:11:36.878 "aliases": [ 00:11:36.878 "66238e59-123c-11ef-8c90-4585f0cfab08" 00:11:36.878 ], 00:11:36.878 "product_name": "Malloc disk", 00:11:36.878 "block_size": 512, 00:11:36.878 "num_blocks": 65536, 00:11:36.878 "uuid": "66238e59-123c-11ef-8c90-4585f0cfab08", 00:11:36.878 "assigned_rate_limits": { 00:11:36.878 "rw_ios_per_sec": 0, 00:11:36.878 "rw_mbytes_per_sec": 0, 00:11:36.878 "r_mbytes_per_sec": 0, 00:11:36.878 "w_mbytes_per_sec": 0 00:11:36.878 }, 00:11:36.878 "claimed": true, 
00:11:36.878 "claim_type": "exclusive_write", 00:11:36.878 "zoned": false, 00:11:36.878 "supported_io_types": { 00:11:36.878 "read": true, 00:11:36.878 "write": true, 00:11:36.878 "unmap": true, 00:11:36.878 "write_zeroes": true, 00:11:36.878 "flush": true, 00:11:36.878 "reset": true, 00:11:36.878 "compare": false, 00:11:36.878 "compare_and_write": false, 00:11:36.878 "abort": true, 00:11:36.878 "nvme_admin": false, 00:11:36.878 "nvme_io": false 00:11:36.878 }, 00:11:36.878 "memory_domains": [ 00:11:36.878 { 00:11:36.878 "dma_device_id": "system", 00:11:36.878 "dma_device_type": 1 00:11:36.878 }, 00:11:36.878 { 00:11:36.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.878 "dma_device_type": 2 00:11:36.878 } 00:11:36.878 ], 00:11:36.878 "driver_specific": {} 00:11:36.878 }' 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:36.878 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:37.136 "name": "BaseBdev2", 00:11:37.136 "aliases": [ 00:11:37.136 "6414cf76-123c-11ef-8c90-4585f0cfab08" 00:11:37.136 ], 00:11:37.136 "product_name": "Malloc disk", 00:11:37.136 "block_size": 512, 00:11:37.136 "num_blocks": 65536, 00:11:37.136 "uuid": "6414cf76-123c-11ef-8c90-4585f0cfab08", 00:11:37.136 "assigned_rate_limits": { 00:11:37.136 "rw_ios_per_sec": 0, 00:11:37.136 "rw_mbytes_per_sec": 0, 00:11:37.136 "r_mbytes_per_sec": 0, 00:11:37.136 "w_mbytes_per_sec": 0 00:11:37.136 }, 00:11:37.136 "claimed": true, 00:11:37.136 "claim_type": "exclusive_write", 00:11:37.136 "zoned": false, 00:11:37.136 "supported_io_types": { 00:11:37.136 "read": true, 00:11:37.136 "write": true, 00:11:37.136 "unmap": true, 00:11:37.136 "write_zeroes": true, 00:11:37.136 "flush": true, 00:11:37.136 "reset": true, 00:11:37.136 "compare": false, 00:11:37.136 "compare_and_write": false, 00:11:37.136 "abort": true, 00:11:37.136 "nvme_admin": false, 00:11:37.136 "nvme_io": false 00:11:37.136 }, 00:11:37.136 "memory_domains": 
[ 00:11:37.136 { 00:11:37.136 "dma_device_id": "system", 00:11:37.136 "dma_device_type": 1 00:11:37.136 }, 00:11:37.136 { 00:11:37.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.136 "dma_device_type": 2 00:11:37.136 } 00:11:37.136 ], 00:11:37.136 "driver_specific": {} 00:11:37.136 }' 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:37.136 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:37.394 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:37.394 "name": "BaseBdev3", 00:11:37.394 "aliases": [ 00:11:37.394 "648bd4e0-123c-11ef-8c90-4585f0cfab08" 00:11:37.394 ], 00:11:37.394 "product_name": "Malloc disk", 00:11:37.394 "block_size": 512, 00:11:37.394 "num_blocks": 65536, 00:11:37.394 "uuid": "648bd4e0-123c-11ef-8c90-4585f0cfab08", 00:11:37.394 "assigned_rate_limits": { 00:11:37.394 "rw_ios_per_sec": 0, 00:11:37.394 "rw_mbytes_per_sec": 0, 00:11:37.394 "r_mbytes_per_sec": 0, 00:11:37.394 "w_mbytes_per_sec": 0 00:11:37.394 }, 00:11:37.394 "claimed": true, 00:11:37.394 "claim_type": "exclusive_write", 00:11:37.394 "zoned": false, 00:11:37.394 "supported_io_types": { 00:11:37.394 "read": true, 00:11:37.394 "write": true, 00:11:37.394 "unmap": true, 00:11:37.394 "write_zeroes": true, 00:11:37.394 "flush": true, 00:11:37.394 "reset": true, 00:11:37.394 "compare": false, 00:11:37.394 "compare_and_write": false, 00:11:37.395 "abort": true, 00:11:37.395 "nvme_admin": false, 00:11:37.395 "nvme_io": false 00:11:37.395 }, 00:11:37.395 "memory_domains": [ 00:11:37.395 { 00:11:37.395 "dma_device_id": "system", 00:11:37.395 "dma_device_type": 1 00:11:37.395 }, 00:11:37.395 { 00:11:37.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.395 "dma_device_type": 2 00:11:37.395 } 00:11:37.395 ], 00:11:37.395 "driver_specific": {} 00:11:37.395 }' 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 
00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:37.395 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:37.653 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:37.653 21:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:37.653 21:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:37.911 [2024-05-14 21:53:38.269883] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.911 [2024-05-14 21:53:38.269912] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.911 [2024-05-14 21:53:38.269935] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.911 [2024-05-14 21:53:38.269949] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.911 [2024-05-14 21:53:38.269954] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b131300 name Existed_Raid, state offline 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 53331 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 53331 ']' 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 53331 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 53331 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:11:37.911 killing process with pid 53331 00:11:37.911 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53331' 00:11:37.912 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 53331 00:11:37.912 [2024-05-14 21:53:38.301279] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.912 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 53331 00:11:37.912 [2024-05-14 21:53:38.318921] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.912 21:53:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:11:38.171 00:11:38.171 real 0m23.720s 00:11:38.171 user 0m43.333s 00:11:38.171 sys 0m3.290s 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.171 ************************************ 00:11:38.171 END TEST raid_state_function_test 00:11:38.171 ************************************ 00:11:38.171 21:53:38 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:38.171 21:53:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:38.171 21:53:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.171 21:53:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.171 ************************************ 00:11:38.171 START TEST raid_state_function_test_sb 00:11:38.171 ************************************ 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 
00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=54056 00:11:38.171 Process raid pid: 54056 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 54056' 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 54056 /var/tmp/spdk-raid.sock 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 54056 ']' 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:38.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:38.171 21:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.171 [2024-05-14 21:53:38.558583] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:11:38.171 [2024-05-14 21:53:38.558846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:38.737 EAL: TSC is not safe to use in SMP mode 00:11:38.737 EAL: TSC is not invariant 00:11:38.737 [2024-05-14 21:53:39.093242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.737 [2024-05-14 21:53:39.199904] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:38.737 [2024-05-14 21:53:39.202624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.737 [2024-05-14 21:53:39.203582] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.737 [2024-05-14 21:53:39.203603] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.303 21:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:39.303 21:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:11:39.303 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:39.561 [2024-05-14 21:53:39.929714] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.561 [2024-05-14 21:53:39.929767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.561 [2024-05-14 21:53:39.929773] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.561 [2024-05-14 21:53:39.929782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.561 [2024-05-14 21:53:39.929785] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:39.561 [2024-05-14 21:53:39.929793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:39.561 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:39.562 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.562 21:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.819 21:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:39.819 "name": "Existed_Raid", 00:11:39.819 "uuid": "6c7dd03a-123c-11ef-8c90-4585f0cfab08", 00:11:39.819 "strip_size_kb": 64, 00:11:39.819 "state": "configuring", 00:11:39.819 "raid_level": "concat", 00:11:39.819 "superblock": true, 00:11:39.819 "num_base_bdevs": 3, 00:11:39.820 "num_base_bdevs_discovered": 0, 00:11:39.820 
"num_base_bdevs_operational": 3, 00:11:39.820 "base_bdevs_list": [ 00:11:39.820 { 00:11:39.820 "name": "BaseBdev1", 00:11:39.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.820 "is_configured": false, 00:11:39.820 "data_offset": 0, 00:11:39.820 "data_size": 0 00:11:39.820 }, 00:11:39.820 { 00:11:39.820 "name": "BaseBdev2", 00:11:39.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.820 "is_configured": false, 00:11:39.820 "data_offset": 0, 00:11:39.820 "data_size": 0 00:11:39.820 }, 00:11:39.820 { 00:11:39.820 "name": "BaseBdev3", 00:11:39.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.820 "is_configured": false, 00:11:39.820 "data_offset": 0, 00:11:39.820 "data_size": 0 00:11:39.820 } 00:11:39.820 ] 00:11:39.820 }' 00:11:39.820 21:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:39.820 21:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.126 21:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:40.398 [2024-05-14 21:53:40.889755] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.398 [2024-05-14 21:53:40.889790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e6eb300 name Existed_Raid, state configuring 00:11:40.398 21:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:40.658 [2024-05-14 21:53:41.129763] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.658 [2024-05-14 21:53:41.129850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.658 [2024-05-14 21:53:41.129856] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.658 [2024-05-14 21:53:41.129866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.658 [2024-05-14 21:53:41.129870] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.658 [2024-05-14 21:53:41.129877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.658 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.917 [2024-05-14 21:53:41.438901] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.917 BaseBdev1 00:11:40.917 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:11:40.917 21:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:40.917 21:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:40.917 21:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:40.917 21:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:40.917 21:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:40.917 21:53:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:41.176 21:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.435 [ 00:11:41.435 { 00:11:41.435 "name": "BaseBdev1", 00:11:41.435 "aliases": [ 00:11:41.435 "6d63ed48-123c-11ef-8c90-4585f0cfab08" 00:11:41.435 ], 00:11:41.435 "product_name": "Malloc disk", 00:11:41.435 "block_size": 512, 00:11:41.435 "num_blocks": 65536, 00:11:41.435 "uuid": "6d63ed48-123c-11ef-8c90-4585f0cfab08", 00:11:41.435 "assigned_rate_limits": { 00:11:41.435 "rw_ios_per_sec": 0, 00:11:41.435 "rw_mbytes_per_sec": 0, 00:11:41.435 "r_mbytes_per_sec": 0, 00:11:41.435 "w_mbytes_per_sec": 0 00:11:41.435 }, 00:11:41.435 "claimed": true, 00:11:41.435 "claim_type": "exclusive_write", 00:11:41.435 "zoned": false, 00:11:41.435 "supported_io_types": { 00:11:41.435 "read": true, 00:11:41.435 "write": true, 00:11:41.435 "unmap": true, 00:11:41.435 "write_zeroes": true, 00:11:41.435 "flush": true, 00:11:41.435 "reset": true, 00:11:41.435 "compare": false, 00:11:41.435 "compare_and_write": false, 00:11:41.435 "abort": true, 00:11:41.435 "nvme_admin": false, 00:11:41.435 "nvme_io": false 00:11:41.435 }, 00:11:41.435 "memory_domains": [ 00:11:41.435 { 00:11:41.435 "dma_device_id": "system", 00:11:41.435 "dma_device_type": 1 00:11:41.435 }, 00:11:41.435 { 00:11:41.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.435 "dma_device_type": 2 00:11:41.435 } 00:11:41.435 ], 00:11:41.435 "driver_specific": {} 00:11:41.435 } 00:11:41.435 ] 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:41.435 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:41.436 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:41.436 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.436 21:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.694 21:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:41.694 "name": "Existed_Raid", 00:11:41.694 "uuid": 
"6d34ed1f-123c-11ef-8c90-4585f0cfab08", 00:11:41.694 "strip_size_kb": 64, 00:11:41.694 "state": "configuring", 00:11:41.694 "raid_level": "concat", 00:11:41.694 "superblock": true, 00:11:41.694 "num_base_bdevs": 3, 00:11:41.694 "num_base_bdevs_discovered": 1, 00:11:41.694 "num_base_bdevs_operational": 3, 00:11:41.694 "base_bdevs_list": [ 00:11:41.694 { 00:11:41.694 "name": "BaseBdev1", 00:11:41.694 "uuid": "6d63ed48-123c-11ef-8c90-4585f0cfab08", 00:11:41.694 "is_configured": true, 00:11:41.694 "data_offset": 2048, 00:11:41.694 "data_size": 63488 00:11:41.694 }, 00:11:41.694 { 00:11:41.694 "name": "BaseBdev2", 00:11:41.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.694 "is_configured": false, 00:11:41.694 "data_offset": 0, 00:11:41.694 "data_size": 0 00:11:41.694 }, 00:11:41.694 { 00:11:41.694 "name": "BaseBdev3", 00:11:41.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.694 "is_configured": false, 00:11:41.694 "data_offset": 0, 00:11:41.694 "data_size": 0 00:11:41.694 } 00:11:41.694 ] 00:11:41.694 }' 00:11:41.694 21:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:41.694 21:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.260 21:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:42.260 [2024-05-14 21:53:42.825903] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.260 [2024-05-14 21:53:42.825951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e6eb300 name Existed_Raid, state configuring 00:11:42.260 21:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:42.519 [2024-05-14 21:53:43.065942] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.519 [2024-05-14 21:53:43.067001] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.519 [2024-05-14 21:53:43.067057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.519 [2024-05-14 21:53:43.067063] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.519 [2024-05-14 21:53:43.067071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.519 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.778 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:42.778 "name": "Existed_Raid", 00:11:42.778 "uuid": "6e5c5cb1-123c-11ef-8c90-4585f0cfab08", 00:11:42.778 "strip_size_kb": 64, 00:11:42.778 "state": "configuring", 00:11:42.778 "raid_level": "concat", 00:11:42.778 "superblock": true, 00:11:42.778 "num_base_bdevs": 3, 00:11:42.778 "num_base_bdevs_discovered": 1, 00:11:42.778 "num_base_bdevs_operational": 3, 00:11:42.778 "base_bdevs_list": [ 00:11:42.778 { 00:11:42.778 "name": "BaseBdev1", 00:11:42.778 "uuid": "6d63ed48-123c-11ef-8c90-4585f0cfab08", 00:11:42.778 "is_configured": true, 00:11:42.778 "data_offset": 2048, 00:11:42.778 "data_size": 63488 00:11:42.778 }, 00:11:42.778 { 00:11:42.778 "name": "BaseBdev2", 00:11:42.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.778 "is_configured": false, 00:11:42.778 "data_offset": 0, 00:11:42.778 "data_size": 0 00:11:42.778 }, 00:11:42.778 { 00:11:42.778 "name": "BaseBdev3", 00:11:42.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.778 "is_configured": false, 00:11:42.778 "data_offset": 0, 00:11:42.778 "data_size": 0 00:11:42.778 } 00:11:42.778 ] 00:11:42.778 }' 00:11:42.778 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:42.778 21:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.344 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.602 [2024-05-14 21:53:43.938181] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.602 BaseBdev2 00:11:43.602 21:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:11:43.602 21:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:43.602 21:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:43.602 21:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:43.602 21:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:43.602 21:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:43.602 21:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.602 21:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:43.861 [ 00:11:43.862 { 00:11:43.862 "name": "BaseBdev2", 00:11:43.862 "aliases": [ 00:11:43.862 "6ee16d87-123c-11ef-8c90-4585f0cfab08" 00:11:43.862 ], 00:11:43.862 "product_name": "Malloc disk", 00:11:43.862 "block_size": 512, 00:11:43.862 "num_blocks": 65536, 00:11:43.862 "uuid": "6ee16d87-123c-11ef-8c90-4585f0cfab08", 00:11:43.862 "assigned_rate_limits": { 00:11:43.862 "rw_ios_per_sec": 0, 00:11:43.862 "rw_mbytes_per_sec": 0, 00:11:43.862 "r_mbytes_per_sec": 0, 00:11:43.862 "w_mbytes_per_sec": 0 00:11:43.862 }, 00:11:43.862 "claimed": true, 00:11:43.862 "claim_type": "exclusive_write", 00:11:43.862 "zoned": false, 00:11:43.862 "supported_io_types": { 00:11:43.862 "read": true, 00:11:43.862 "write": true, 00:11:43.862 "unmap": true, 00:11:43.862 "write_zeroes": true, 00:11:43.862 "flush": true, 00:11:43.862 "reset": true, 00:11:43.862 "compare": false, 00:11:43.862 "compare_and_write": false, 00:11:43.862 "abort": true, 00:11:43.862 "nvme_admin": false, 00:11:43.862 "nvme_io": false 00:11:43.862 }, 00:11:43.862 "memory_domains": [ 00:11:43.862 { 00:11:43.862 "dma_device_id": "system", 00:11:43.862 "dma_device_type": 1 00:11:43.862 }, 00:11:43.862 { 00:11:43.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.862 "dma_device_type": 2 00:11:43.862 } 00:11:43.862 ], 00:11:43.862 "driver_specific": {} 00:11:43.862 } 00:11:43.862 ] 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.120 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:44.120 "name": "Existed_Raid", 00:11:44.120 "uuid": "6e5c5cb1-123c-11ef-8c90-4585f0cfab08", 00:11:44.120 "strip_size_kb": 64, 
00:11:44.120 "state": "configuring", 00:11:44.120 "raid_level": "concat", 00:11:44.121 "superblock": true, 00:11:44.121 "num_base_bdevs": 3, 00:11:44.121 "num_base_bdevs_discovered": 2, 00:11:44.121 "num_base_bdevs_operational": 3, 00:11:44.121 "base_bdevs_list": [ 00:11:44.121 { 00:11:44.121 "name": "BaseBdev1", 00:11:44.121 "uuid": "6d63ed48-123c-11ef-8c90-4585f0cfab08", 00:11:44.121 "is_configured": true, 00:11:44.121 "data_offset": 2048, 00:11:44.121 "data_size": 63488 00:11:44.121 }, 00:11:44.121 { 00:11:44.121 "name": "BaseBdev2", 00:11:44.121 "uuid": "6ee16d87-123c-11ef-8c90-4585f0cfab08", 00:11:44.121 "is_configured": true, 00:11:44.121 "data_offset": 2048, 00:11:44.121 "data_size": 63488 00:11:44.121 }, 00:11:44.121 { 00:11:44.121 "name": "BaseBdev3", 00:11:44.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.121 "is_configured": false, 00:11:44.121 "data_offset": 0, 00:11:44.121 "data_size": 0 00:11:44.121 } 00:11:44.121 ] 00:11:44.121 }' 00:11:44.121 21:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:44.121 21:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.687 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.687 [2024-05-14 21:53:45.262224] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.687 [2024-05-14 21:53:45.262339] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e6eb300 00:11:44.687 [2024-05-14 21:53:45.262347] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:44.687 [2024-05-14 21:53:45.262369] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e749ec0 00:11:44.687 [2024-05-14 21:53:45.262435] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e6eb300 00:11:44.687 [2024-05-14 21:53:45.262440] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e6eb300 00:11:44.687 [2024-05-14 21:53:45.262464] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.687 BaseBdev3 00:11:44.946 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:11:44.946 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:44.946 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:44.946 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:44.946 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:44.946 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:44.946 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:45.204 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:45.204 [ 00:11:45.205 { 00:11:45.205 "name": "BaseBdev3", 00:11:45.205 "aliases": [ 00:11:45.205 "6fab7783-123c-11ef-8c90-4585f0cfab08" 00:11:45.205 ], 
00:11:45.205 "product_name": "Malloc disk", 00:11:45.205 "block_size": 512, 00:11:45.205 "num_blocks": 65536, 00:11:45.205 "uuid": "6fab7783-123c-11ef-8c90-4585f0cfab08", 00:11:45.205 "assigned_rate_limits": { 00:11:45.205 "rw_ios_per_sec": 0, 00:11:45.205 "rw_mbytes_per_sec": 0, 00:11:45.205 "r_mbytes_per_sec": 0, 00:11:45.205 "w_mbytes_per_sec": 0 00:11:45.205 }, 00:11:45.205 "claimed": true, 00:11:45.205 "claim_type": "exclusive_write", 00:11:45.205 "zoned": false, 00:11:45.205 "supported_io_types": { 00:11:45.205 "read": true, 00:11:45.205 "write": true, 00:11:45.205 "unmap": true, 00:11:45.205 "write_zeroes": true, 00:11:45.205 "flush": true, 00:11:45.205 "reset": true, 00:11:45.205 "compare": false, 00:11:45.205 "compare_and_write": false, 00:11:45.205 "abort": true, 00:11:45.205 "nvme_admin": false, 00:11:45.205 "nvme_io": false 00:11:45.205 }, 00:11:45.205 "memory_domains": [ 00:11:45.205 { 00:11:45.205 "dma_device_id": "system", 00:11:45.205 "dma_device_type": 1 00:11:45.205 }, 00:11:45.205 { 00:11:45.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.205 "dma_device_type": 2 00:11:45.205 } 00:11:45.205 ], 00:11:45.205 "driver_specific": {} 00:11:45.205 } 00:11:45.205 ] 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.463 21:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.721 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:45.721 "name": "Existed_Raid", 00:11:45.721 "uuid": "6e5c5cb1-123c-11ef-8c90-4585f0cfab08", 00:11:45.721 "strip_size_kb": 64, 00:11:45.721 "state": "online", 00:11:45.721 "raid_level": "concat", 00:11:45.721 "superblock": true, 00:11:45.721 "num_base_bdevs": 3, 00:11:45.721 "num_base_bdevs_discovered": 3, 00:11:45.721 "num_base_bdevs_operational": 3, 00:11:45.721 "base_bdevs_list": [ 00:11:45.721 { 
00:11:45.721 "name": "BaseBdev1", 00:11:45.721 "uuid": "6d63ed48-123c-11ef-8c90-4585f0cfab08", 00:11:45.721 "is_configured": true, 00:11:45.721 "data_offset": 2048, 00:11:45.721 "data_size": 63488 00:11:45.721 }, 00:11:45.721 { 00:11:45.721 "name": "BaseBdev2", 00:11:45.721 "uuid": "6ee16d87-123c-11ef-8c90-4585f0cfab08", 00:11:45.721 "is_configured": true, 00:11:45.721 "data_offset": 2048, 00:11:45.722 "data_size": 63488 00:11:45.722 }, 00:11:45.722 { 00:11:45.722 "name": "BaseBdev3", 00:11:45.722 "uuid": "6fab7783-123c-11ef-8c90-4585f0cfab08", 00:11:45.722 "is_configured": true, 00:11:45.722 "data_offset": 2048, 00:11:45.722 "data_size": 63488 00:11:45.722 } 00:11:45.722 ] 00:11:45.722 }' 00:11:45.722 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:45.722 21:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:45.980 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:11:46.239 [2024-05-14 21:53:46.622134] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.239 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:11:46.239 "name": "Existed_Raid", 00:11:46.239 "aliases": [ 00:11:46.239 "6e5c5cb1-123c-11ef-8c90-4585f0cfab08" 00:11:46.239 ], 00:11:46.239 "product_name": "Raid Volume", 00:11:46.239 "block_size": 512, 00:11:46.239 "num_blocks": 190464, 00:11:46.239 "uuid": "6e5c5cb1-123c-11ef-8c90-4585f0cfab08", 00:11:46.239 "assigned_rate_limits": { 00:11:46.239 "rw_ios_per_sec": 0, 00:11:46.239 "rw_mbytes_per_sec": 0, 00:11:46.239 "r_mbytes_per_sec": 0, 00:11:46.239 "w_mbytes_per_sec": 0 00:11:46.239 }, 00:11:46.239 "claimed": false, 00:11:46.239 "zoned": false, 00:11:46.239 "supported_io_types": { 00:11:46.239 "read": true, 00:11:46.239 "write": true, 00:11:46.239 "unmap": true, 00:11:46.239 "write_zeroes": true, 00:11:46.239 "flush": true, 00:11:46.239 "reset": true, 00:11:46.239 "compare": false, 00:11:46.239 "compare_and_write": false, 00:11:46.239 "abort": false, 00:11:46.239 "nvme_admin": false, 00:11:46.239 "nvme_io": false 00:11:46.239 }, 00:11:46.239 "memory_domains": [ 00:11:46.239 { 00:11:46.239 "dma_device_id": "system", 00:11:46.239 "dma_device_type": 1 00:11:46.239 }, 00:11:46.239 { 00:11:46.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.239 "dma_device_type": 2 00:11:46.239 }, 00:11:46.239 { 00:11:46.239 "dma_device_id": "system", 00:11:46.239 "dma_device_type": 1 00:11:46.239 }, 00:11:46.239 { 00:11:46.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.239 "dma_device_type": 2 00:11:46.239 
}, 00:11:46.239 { 00:11:46.239 "dma_device_id": "system", 00:11:46.239 "dma_device_type": 1 00:11:46.239 }, 00:11:46.239 { 00:11:46.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.239 "dma_device_type": 2 00:11:46.239 } 00:11:46.239 ], 00:11:46.239 "driver_specific": { 00:11:46.239 "raid": { 00:11:46.239 "uuid": "6e5c5cb1-123c-11ef-8c90-4585f0cfab08", 00:11:46.239 "strip_size_kb": 64, 00:11:46.239 "state": "online", 00:11:46.239 "raid_level": "concat", 00:11:46.239 "superblock": true, 00:11:46.239 "num_base_bdevs": 3, 00:11:46.239 "num_base_bdevs_discovered": 3, 00:11:46.239 "num_base_bdevs_operational": 3, 00:11:46.239 "base_bdevs_list": [ 00:11:46.239 { 00:11:46.239 "name": "BaseBdev1", 00:11:46.239 "uuid": "6d63ed48-123c-11ef-8c90-4585f0cfab08", 00:11:46.239 "is_configured": true, 00:11:46.239 "data_offset": 2048, 00:11:46.239 "data_size": 63488 00:11:46.239 }, 00:11:46.239 { 00:11:46.239 "name": "BaseBdev2", 00:11:46.239 "uuid": "6ee16d87-123c-11ef-8c90-4585f0cfab08", 00:11:46.239 "is_configured": true, 00:11:46.239 "data_offset": 2048, 00:11:46.239 "data_size": 63488 00:11:46.239 }, 00:11:46.239 { 00:11:46.239 "name": "BaseBdev3", 00:11:46.239 "uuid": "6fab7783-123c-11ef-8c90-4585f0cfab08", 00:11:46.239 "is_configured": true, 00:11:46.239 "data_offset": 2048, 00:11:46.239 "data_size": 63488 00:11:46.239 } 00:11:46.239 ] 00:11:46.239 } 00:11:46.239 } 00:11:46.239 }' 00:11:46.239 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.239 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:11:46.239 BaseBdev2 00:11:46.239 BaseBdev3' 00:11:46.239 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:46.239 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:46.239 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:46.498 "name": "BaseBdev1", 00:11:46.498 "aliases": [ 00:11:46.498 "6d63ed48-123c-11ef-8c90-4585f0cfab08" 00:11:46.498 ], 00:11:46.498 "product_name": "Malloc disk", 00:11:46.498 "block_size": 512, 00:11:46.498 "num_blocks": 65536, 00:11:46.498 "uuid": "6d63ed48-123c-11ef-8c90-4585f0cfab08", 00:11:46.498 "assigned_rate_limits": { 00:11:46.498 "rw_ios_per_sec": 0, 00:11:46.498 "rw_mbytes_per_sec": 0, 00:11:46.498 "r_mbytes_per_sec": 0, 00:11:46.498 "w_mbytes_per_sec": 0 00:11:46.498 }, 00:11:46.498 "claimed": true, 00:11:46.498 "claim_type": "exclusive_write", 00:11:46.498 "zoned": false, 00:11:46.498 "supported_io_types": { 00:11:46.498 "read": true, 00:11:46.498 "write": true, 00:11:46.498 "unmap": true, 00:11:46.498 "write_zeroes": true, 00:11:46.498 "flush": true, 00:11:46.498 "reset": true, 00:11:46.498 "compare": false, 00:11:46.498 "compare_and_write": false, 00:11:46.498 "abort": true, 00:11:46.498 "nvme_admin": false, 00:11:46.498 "nvme_io": false 00:11:46.498 }, 00:11:46.498 "memory_domains": [ 00:11:46.498 { 00:11:46.498 "dma_device_id": "system", 00:11:46.498 "dma_device_type": 1 00:11:46.498 }, 00:11:46.498 { 00:11:46.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.498 "dma_device_type": 2 00:11:46.498 } 00:11:46.498 ], 00:11:46.498 
"driver_specific": {} 00:11:46.498 }' 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:46.498 21:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:46.757 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:46.758 "name": "BaseBdev2", 00:11:46.758 "aliases": [ 00:11:46.758 "6ee16d87-123c-11ef-8c90-4585f0cfab08" 00:11:46.758 ], 00:11:46.758 "product_name": "Malloc disk", 00:11:46.758 "block_size": 512, 00:11:46.758 "num_blocks": 65536, 00:11:46.758 "uuid": "6ee16d87-123c-11ef-8c90-4585f0cfab08", 00:11:46.758 "assigned_rate_limits": { 00:11:46.758 "rw_ios_per_sec": 0, 00:11:46.758 "rw_mbytes_per_sec": 0, 00:11:46.758 "r_mbytes_per_sec": 0, 00:11:46.758 "w_mbytes_per_sec": 0 00:11:46.758 }, 00:11:46.758 "claimed": true, 00:11:46.758 "claim_type": "exclusive_write", 00:11:46.758 "zoned": false, 00:11:46.758 "supported_io_types": { 00:11:46.758 "read": true, 00:11:46.758 "write": true, 00:11:46.758 "unmap": true, 00:11:46.758 "write_zeroes": true, 00:11:46.758 "flush": true, 00:11:46.758 "reset": true, 00:11:46.758 "compare": false, 00:11:46.758 "compare_and_write": false, 00:11:46.758 "abort": true, 00:11:46.758 "nvme_admin": false, 00:11:46.758 "nvme_io": false 00:11:46.758 }, 00:11:46.758 "memory_domains": [ 00:11:46.758 { 00:11:46.758 "dma_device_id": "system", 00:11:46.758 "dma_device_type": 1 00:11:46.758 }, 00:11:46.758 { 00:11:46.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.758 "dma_device_type": 2 00:11:46.758 } 00:11:46.758 ], 00:11:46.758 "driver_specific": {} 00:11:46.758 }' 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_size 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:46.758 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:11:47.326 "name": "BaseBdev3", 00:11:47.326 "aliases": [ 00:11:47.326 "6fab7783-123c-11ef-8c90-4585f0cfab08" 00:11:47.326 ], 00:11:47.326 "product_name": "Malloc disk", 00:11:47.326 "block_size": 512, 00:11:47.326 "num_blocks": 65536, 00:11:47.326 "uuid": "6fab7783-123c-11ef-8c90-4585f0cfab08", 00:11:47.326 "assigned_rate_limits": { 00:11:47.326 "rw_ios_per_sec": 0, 00:11:47.326 "rw_mbytes_per_sec": 0, 00:11:47.326 "r_mbytes_per_sec": 0, 00:11:47.326 "w_mbytes_per_sec": 0 00:11:47.326 }, 00:11:47.326 "claimed": true, 00:11:47.326 "claim_type": "exclusive_write", 00:11:47.326 "zoned": false, 00:11:47.326 "supported_io_types": { 00:11:47.326 "read": true, 00:11:47.326 "write": true, 00:11:47.326 "unmap": true, 00:11:47.326 "write_zeroes": true, 00:11:47.326 "flush": true, 00:11:47.326 "reset": true, 00:11:47.326 "compare": false, 00:11:47.326 "compare_and_write": false, 00:11:47.326 "abort": true, 00:11:47.326 "nvme_admin": false, 00:11:47.326 "nvme_io": false 00:11:47.326 }, 00:11:47.326 "memory_domains": [ 00:11:47.326 { 00:11:47.326 "dma_device_id": "system", 00:11:47.326 "dma_device_type": 1 00:11:47.326 }, 00:11:47.326 { 00:11:47.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.326 "dma_device_type": 2 00:11:47.326 } 00:11:47.326 ], 00:11:47.326 "driver_specific": {} 00:11:47.326 }' 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:11:47.326 
21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:11:47.326 21:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:47.585 [2024-05-14 21:53:47.998126] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.585 [2024-05-14 21:53:47.998156] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.585 [2024-05-14 21:53:47.998171] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.585 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.857 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:47.857 "name": "Existed_Raid", 00:11:47.857 "uuid": "6e5c5cb1-123c-11ef-8c90-4585f0cfab08", 00:11:47.857 "strip_size_kb": 64, 00:11:47.857 "state": "offline", 00:11:47.857 "raid_level": "concat", 00:11:47.857 "superblock": true, 00:11:47.857 "num_base_bdevs": 3, 00:11:47.857 "num_base_bdevs_discovered": 2, 00:11:47.857 "num_base_bdevs_operational": 2, 00:11:47.857 "base_bdevs_list": [ 00:11:47.857 { 00:11:47.857 "name": null, 
00:11:47.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.857 "is_configured": false, 00:11:47.857 "data_offset": 2048, 00:11:47.857 "data_size": 63488 00:11:47.857 }, 00:11:47.857 { 00:11:47.857 "name": "BaseBdev2", 00:11:47.857 "uuid": "6ee16d87-123c-11ef-8c90-4585f0cfab08", 00:11:47.857 "is_configured": true, 00:11:47.857 "data_offset": 2048, 00:11:47.857 "data_size": 63488 00:11:47.857 }, 00:11:47.857 { 00:11:47.857 "name": "BaseBdev3", 00:11:47.857 "uuid": "6fab7783-123c-11ef-8c90-4585f0cfab08", 00:11:47.857 "is_configured": true, 00:11:47.857 "data_offset": 2048, 00:11:47.857 "data_size": 63488 00:11:47.857 } 00:11:47.857 ] 00:11:47.857 }' 00:11:47.857 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:47.857 21:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.138 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:48.138 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.138 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.138 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:48.396 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:48.396 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.396 21:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:48.654 [2024-05-14 21:53:49.200517] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.654 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:48.654 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.654 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.654 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:11:49.222 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:11:49.222 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.222 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:49.222 [2024-05-14 21:53:49.774520] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:49.222 [2024-05-14 21:53:49.774548] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e6eb300 name Existed_Raid, state offline 00:11:49.222 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.222 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.222 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:11:49.222 21:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:11:49.789 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:11:49.789 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:11:49.789 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:11:49.789 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:11:49.789 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:49.789 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:50.048 BaseBdev2 00:11:50.048 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:11:50.048 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:11:50.048 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:50.048 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:50.048 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:50.048 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:50.048 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:50.306 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:50.565 [ 00:11:50.565 { 00:11:50.565 "name": "BaseBdev2", 00:11:50.565 "aliases": [ 00:11:50.565 "72b91d3b-123c-11ef-8c90-4585f0cfab08" 00:11:50.565 ], 00:11:50.565 "product_name": "Malloc disk", 00:11:50.565 "block_size": 512, 00:11:50.565 "num_blocks": 65536, 00:11:50.565 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:50.565 "assigned_rate_limits": { 00:11:50.565 "rw_ios_per_sec": 0, 00:11:50.565 "rw_mbytes_per_sec": 0, 00:11:50.565 "r_mbytes_per_sec": 0, 00:11:50.565 "w_mbytes_per_sec": 0 00:11:50.565 }, 00:11:50.565 "claimed": false, 00:11:50.565 "zoned": false, 00:11:50.565 "supported_io_types": { 00:11:50.565 "read": true, 00:11:50.565 "write": true, 00:11:50.565 "unmap": true, 00:11:50.565 "write_zeroes": true, 00:11:50.565 "flush": true, 00:11:50.565 "reset": true, 00:11:50.565 "compare": false, 00:11:50.565 "compare_and_write": false, 00:11:50.565 "abort": true, 00:11:50.565 "nvme_admin": false, 00:11:50.565 "nvme_io": false 00:11:50.565 }, 00:11:50.565 "memory_domains": [ 00:11:50.565 { 00:11:50.565 "dma_device_id": "system", 00:11:50.565 "dma_device_type": 1 00:11:50.565 }, 00:11:50.565 { 00:11:50.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.565 "dma_device_type": 2 00:11:50.565 } 00:11:50.565 ], 00:11:50.565 "driver_specific": {} 00:11:50.565 } 00:11:50.565 ] 00:11:50.565 21:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:50.565 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:11:50.565 21:53:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:50.565 21:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:50.838 BaseBdev3 00:11:50.838 21:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:11:50.838 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:11:50.838 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:50.838 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:50.838 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:50.838 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:50.838 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:51.121 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:51.379 [ 00:11:51.379 { 00:11:51.379 "name": "BaseBdev3", 00:11:51.379 "aliases": [ 00:11:51.379 "733f644c-123c-11ef-8c90-4585f0cfab08" 00:11:51.379 ], 00:11:51.379 "product_name": "Malloc disk", 00:11:51.379 "block_size": 512, 00:11:51.379 "num_blocks": 65536, 00:11:51.379 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:51.379 "assigned_rate_limits": { 00:11:51.379 "rw_ios_per_sec": 0, 00:11:51.379 "rw_mbytes_per_sec": 0, 00:11:51.379 "r_mbytes_per_sec": 0, 00:11:51.379 "w_mbytes_per_sec": 0 00:11:51.379 }, 00:11:51.379 "claimed": false, 00:11:51.379 "zoned": false, 00:11:51.379 "supported_io_types": { 00:11:51.379 "read": true, 00:11:51.379 "write": true, 00:11:51.379 "unmap": true, 00:11:51.379 "write_zeroes": true, 00:11:51.379 "flush": true, 00:11:51.379 "reset": true, 00:11:51.379 "compare": false, 00:11:51.379 "compare_and_write": false, 00:11:51.379 "abort": true, 00:11:51.379 "nvme_admin": false, 00:11:51.379 "nvme_io": false 00:11:51.379 }, 00:11:51.379 "memory_domains": [ 00:11:51.379 { 00:11:51.379 "dma_device_id": "system", 00:11:51.379 "dma_device_type": 1 00:11:51.379 }, 00:11:51.379 { 00:11:51.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.379 "dma_device_type": 2 00:11:51.379 } 00:11:51.379 ], 00:11:51.379 "driver_specific": {} 00:11:51.379 } 00:11:51.379 ] 00:11:51.379 21:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:51.379 21:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:11:51.379 21:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:11:51.379 21:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:51.638 [2024-05-14 21:53:52.012666] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.638 [2024-05-14 21:53:52.012722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.638 [2024-05-14 
21:53:52.012731] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.638 [2024-05-14 21:53:52.013292] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.638 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.896 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:51.896 "name": "Existed_Raid", 00:11:51.896 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:11:51.896 "strip_size_kb": 64, 00:11:51.896 "state": "configuring", 00:11:51.896 "raid_level": "concat", 00:11:51.896 "superblock": true, 00:11:51.896 "num_base_bdevs": 3, 00:11:51.896 "num_base_bdevs_discovered": 2, 00:11:51.896 "num_base_bdevs_operational": 3, 00:11:51.896 "base_bdevs_list": [ 00:11:51.896 { 00:11:51.896 "name": "BaseBdev1", 00:11:51.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.896 "is_configured": false, 00:11:51.896 "data_offset": 0, 00:11:51.896 "data_size": 0 00:11:51.896 }, 00:11:51.896 { 00:11:51.896 "name": "BaseBdev2", 00:11:51.896 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:51.896 "is_configured": true, 00:11:51.896 "data_offset": 2048, 00:11:51.896 "data_size": 63488 00:11:51.896 }, 00:11:51.896 { 00:11:51.896 "name": "BaseBdev3", 00:11:51.896 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:51.896 "is_configured": true, 00:11:51.896 "data_offset": 2048, 00:11:51.896 "data_size": 63488 00:11:51.896 } 00:11:51.896 ] 00:11:51.896 }' 00:11:51.896 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:51.896 21:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.154 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:52.412 [2024-05-14 21:53:52.880652] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.412 21:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.670 21:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:52.670 "name": "Existed_Raid", 00:11:52.670 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:11:52.670 "strip_size_kb": 64, 00:11:52.670 "state": "configuring", 00:11:52.670 "raid_level": "concat", 00:11:52.670 "superblock": true, 00:11:52.670 "num_base_bdevs": 3, 00:11:52.670 "num_base_bdevs_discovered": 1, 00:11:52.670 "num_base_bdevs_operational": 3, 00:11:52.670 "base_bdevs_list": [ 00:11:52.670 { 00:11:52.670 "name": "BaseBdev1", 00:11:52.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.670 "is_configured": false, 00:11:52.670 "data_offset": 0, 00:11:52.670 "data_size": 0 00:11:52.670 }, 00:11:52.670 { 00:11:52.670 "name": null, 00:11:52.670 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:52.670 "is_configured": false, 00:11:52.670 "data_offset": 2048, 00:11:52.670 "data_size": 63488 00:11:52.670 }, 00:11:52.670 { 00:11:52.670 "name": "BaseBdev3", 00:11:52.670 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:52.670 "is_configured": true, 00:11:52.670 "data_offset": 2048, 00:11:52.670 "data_size": 63488 00:11:52.670 } 00:11:52.670 ] 00:11:52.670 }' 00:11:52.670 21:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:52.670 21:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.930 21:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.930 21:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.496 21:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:11:53.496 21:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:53.496 [2024-05-14 21:53:54.072781] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.496 BaseBdev1 00:11:53.753 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:11:53.753 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:11:53.753 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:53.753 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:11:53.753 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:53.753 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:53.753 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.011 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.270 [ 00:11:54.270 { 00:11:54.270 "name": "BaseBdev1", 00:11:54.270 "aliases": [ 00:11:54.270 "74ebdb8a-123c-11ef-8c90-4585f0cfab08" 00:11:54.270 ], 00:11:54.270 "product_name": "Malloc disk", 00:11:54.270 "block_size": 512, 00:11:54.270 "num_blocks": 65536, 00:11:54.270 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:11:54.270 "assigned_rate_limits": { 00:11:54.270 "rw_ios_per_sec": 0, 00:11:54.270 "rw_mbytes_per_sec": 0, 00:11:54.270 "r_mbytes_per_sec": 0, 00:11:54.270 "w_mbytes_per_sec": 0 00:11:54.270 }, 00:11:54.270 "claimed": true, 00:11:54.270 "claim_type": "exclusive_write", 00:11:54.270 "zoned": false, 00:11:54.270 "supported_io_types": { 00:11:54.270 "read": true, 00:11:54.270 "write": true, 00:11:54.270 "unmap": true, 00:11:54.270 "write_zeroes": true, 00:11:54.270 "flush": true, 00:11:54.270 "reset": true, 00:11:54.270 "compare": false, 00:11:54.270 "compare_and_write": false, 00:11:54.270 "abort": true, 00:11:54.270 "nvme_admin": false, 00:11:54.270 "nvme_io": false 00:11:54.270 }, 00:11:54.270 "memory_domains": [ 00:11:54.270 { 00:11:54.270 "dma_device_id": "system", 00:11:54.270 "dma_device_type": 1 00:11:54.270 }, 00:11:54.270 { 00:11:54.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.270 "dma_device_type": 2 00:11:54.270 } 00:11:54.270 ], 00:11:54.270 "driver_specific": {} 00:11:54.270 } 00:11:54.270 ] 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:54.270 21:53:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.270 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.528 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:54.528 "name": "Existed_Raid", 00:11:54.528 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:11:54.528 "strip_size_kb": 64, 00:11:54.528 "state": "configuring", 00:11:54.528 "raid_level": "concat", 00:11:54.528 "superblock": true, 00:11:54.528 "num_base_bdevs": 3, 00:11:54.528 "num_base_bdevs_discovered": 2, 00:11:54.528 "num_base_bdevs_operational": 3, 00:11:54.528 "base_bdevs_list": [ 00:11:54.528 { 00:11:54.528 "name": "BaseBdev1", 00:11:54.528 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:11:54.528 "is_configured": true, 00:11:54.528 "data_offset": 2048, 00:11:54.528 "data_size": 63488 00:11:54.528 }, 00:11:54.528 { 00:11:54.528 "name": null, 00:11:54.528 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:54.528 "is_configured": false, 00:11:54.528 "data_offset": 2048, 00:11:54.528 "data_size": 63488 00:11:54.528 }, 00:11:54.528 { 00:11:54.528 "name": "BaseBdev3", 00:11:54.528 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:54.528 "is_configured": true, 00:11:54.528 "data_offset": 2048, 00:11:54.528 "data_size": 63488 00:11:54.528 } 00:11:54.528 ] 00:11:54.528 }' 00:11:54.528 21:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:54.528 21:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.786 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:55.044 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:55.044 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:55.302 [2024-05-14 21:53:55.732659] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.302 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.561 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:55.561 "name": "Existed_Raid", 00:11:55.561 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:11:55.561 "strip_size_kb": 64, 00:11:55.561 "state": "configuring", 00:11:55.561 "raid_level": "concat", 00:11:55.561 "superblock": true, 00:11:55.561 "num_base_bdevs": 3, 00:11:55.561 "num_base_bdevs_discovered": 1, 00:11:55.561 "num_base_bdevs_operational": 3, 00:11:55.561 "base_bdevs_list": [ 00:11:55.561 { 00:11:55.561 "name": "BaseBdev1", 00:11:55.561 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:11:55.561 "is_configured": true, 00:11:55.561 "data_offset": 2048, 00:11:55.561 "data_size": 63488 00:11:55.561 }, 00:11:55.561 { 00:11:55.561 "name": null, 00:11:55.561 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:55.561 "is_configured": false, 00:11:55.561 "data_offset": 2048, 00:11:55.561 "data_size": 63488 00:11:55.561 }, 00:11:55.561 { 00:11:55.561 "name": null, 00:11:55.562 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:55.562 "is_configured": false, 00:11:55.562 "data_offset": 2048, 00:11:55.562 "data_size": 63488 00:11:55.562 } 00:11:55.562 ] 00:11:55.562 }' 00:11:55.562 21:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:55.562 21:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.819 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.819 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:56.078 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:11:56.078 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:56.337 [2024-05-14 21:53:56.848667] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.337 21:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.595 21:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:56.595 "name": "Existed_Raid", 00:11:56.595 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:11:56.595 "strip_size_kb": 64, 00:11:56.595 "state": "configuring", 00:11:56.595 "raid_level": "concat", 00:11:56.595 "superblock": true, 00:11:56.595 "num_base_bdevs": 3, 00:11:56.595 "num_base_bdevs_discovered": 2, 00:11:56.595 "num_base_bdevs_operational": 3, 00:11:56.596 "base_bdevs_list": [ 00:11:56.596 { 00:11:56.596 "name": "BaseBdev1", 00:11:56.596 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:11:56.596 "is_configured": true, 00:11:56.596 "data_offset": 2048, 00:11:56.596 "data_size": 63488 00:11:56.596 }, 00:11:56.596 { 00:11:56.596 "name": null, 00:11:56.596 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:56.596 "is_configured": false, 00:11:56.596 "data_offset": 2048, 00:11:56.596 "data_size": 63488 00:11:56.596 }, 00:11:56.596 { 00:11:56.596 "name": "BaseBdev3", 00:11:56.596 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:56.596 "is_configured": true, 00:11:56.596 "data_offset": 2048, 00:11:56.596 "data_size": 63488 00:11:56.596 } 00:11:56.596 ] 00:11:56.596 }' 00:11:56.596 21:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:56.596 21:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.162 21:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.162 21:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.420 21:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:11:57.420 21:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:57.420 [2024-05-14 21:53:58.000670] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:57.679 21:53:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.679 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.937 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:57.937 "name": "Existed_Raid", 00:11:57.937 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:11:57.937 "strip_size_kb": 64, 00:11:57.937 "state": "configuring", 00:11:57.938 "raid_level": "concat", 00:11:57.938 "superblock": true, 00:11:57.938 "num_base_bdevs": 3, 00:11:57.938 "num_base_bdevs_discovered": 1, 00:11:57.938 "num_base_bdevs_operational": 3, 00:11:57.938 "base_bdevs_list": [ 00:11:57.938 { 00:11:57.938 "name": null, 00:11:57.938 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:11:57.938 "is_configured": false, 00:11:57.938 "data_offset": 2048, 00:11:57.938 "data_size": 63488 00:11:57.938 }, 00:11:57.938 { 00:11:57.938 "name": null, 00:11:57.938 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:57.938 "is_configured": false, 00:11:57.938 "data_offset": 2048, 00:11:57.938 "data_size": 63488 00:11:57.938 }, 00:11:57.938 { 00:11:57.938 "name": "BaseBdev3", 00:11:57.938 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:57.938 "is_configured": true, 00:11:57.938 "data_offset": 2048, 00:11:57.938 "data_size": 63488 00:11:57.938 } 00:11:57.938 ] 00:11:57.938 }' 00:11:57.938 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:57.938 21:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.196 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.196 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.455 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:11:58.455 21:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:58.455 [2024-05-14 21:53:59.030504] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.714 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.973 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:58.973 "name": "Existed_Raid", 00:11:58.973 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:11:58.973 "strip_size_kb": 64, 00:11:58.973 "state": "configuring", 00:11:58.973 "raid_level": "concat", 00:11:58.973 "superblock": true, 00:11:58.973 "num_base_bdevs": 3, 00:11:58.973 "num_base_bdevs_discovered": 2, 00:11:58.973 "num_base_bdevs_operational": 3, 00:11:58.973 "base_bdevs_list": [ 00:11:58.973 { 00:11:58.973 "name": null, 00:11:58.973 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:11:58.973 "is_configured": false, 00:11:58.973 "data_offset": 2048, 00:11:58.973 "data_size": 63488 00:11:58.973 }, 00:11:58.973 { 00:11:58.973 "name": "BaseBdev2", 00:11:58.973 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:11:58.973 "is_configured": true, 00:11:58.973 "data_offset": 2048, 00:11:58.973 "data_size": 63488 00:11:58.973 }, 00:11:58.973 { 00:11:58.973 "name": "BaseBdev3", 00:11:58.973 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:11:58.973 "is_configured": true, 00:11:58.973 "data_offset": 2048, 00:11:58.973 "data_size": 63488 00:11:58.973 } 00:11:58.973 ] 00:11:58.973 }' 00:11:58.973 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:58.973 21:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.232 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.232 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.490 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:11:59.490 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.490 21:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.749 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 74ebdb8a-123c-11ef-8c90-4585f0cfab08 00:12:00.008 [2024-05-14 21:54:00.378643] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:00.008 [2024-05-14 21:54:00.378701] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e6eb300 00:12:00.008 [2024-05-14 21:54:00.378706] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:00.008 [2024-05-14 21:54:00.378726] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e749e20 00:12:00.008 [2024-05-14 21:54:00.378773] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e6eb300 00:12:00.008 [2024-05-14 21:54:00.378778] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e6eb300 00:12:00.008 [2024-05-14 21:54:00.378798] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.008 NewBaseBdev 00:12:00.008 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:12:00.008 21:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:12:00.008 21:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:00.008 21:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:00.008 21:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:00.008 21:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:00.008 21:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:00.266 21:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:00.525 [ 00:12:00.525 { 00:12:00.525 "name": "NewBaseBdev", 00:12:00.525 "aliases": [ 00:12:00.525 "74ebdb8a-123c-11ef-8c90-4585f0cfab08" 00:12:00.525 ], 00:12:00.525 "product_name": "Malloc disk", 00:12:00.525 "block_size": 512, 00:12:00.525 "num_blocks": 65536, 00:12:00.525 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:12:00.525 "assigned_rate_limits": { 00:12:00.525 "rw_ios_per_sec": 0, 00:12:00.525 "rw_mbytes_per_sec": 0, 00:12:00.525 "r_mbytes_per_sec": 0, 00:12:00.525 "w_mbytes_per_sec": 0 00:12:00.525 }, 00:12:00.525 "claimed": true, 00:12:00.525 "claim_type": "exclusive_write", 00:12:00.525 "zoned": false, 00:12:00.525 "supported_io_types": { 00:12:00.525 "read": true, 00:12:00.525 "write": true, 00:12:00.525 "unmap": true, 00:12:00.525 "write_zeroes": true, 00:12:00.525 "flush": true, 00:12:00.525 "reset": true, 00:12:00.525 "compare": false, 00:12:00.525 "compare_and_write": false, 00:12:00.525 "abort": true, 00:12:00.525 "nvme_admin": false, 00:12:00.525 "nvme_io": false 00:12:00.525 }, 00:12:00.525 "memory_domains": [ 00:12:00.525 { 00:12:00.525 "dma_device_id": "system", 00:12:00.525 "dma_device_type": 1 00:12:00.525 }, 00:12:00.525 { 00:12:00.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.525 "dma_device_type": 2 00:12:00.525 } 00:12:00.525 ], 00:12:00.525 "driver_specific": {} 00:12:00.525 } 00:12:00.525 ] 00:12:00.525 21:54:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.525 21:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.785 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:00.785 "name": "Existed_Raid", 00:12:00.785 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:12:00.785 "strip_size_kb": 64, 00:12:00.785 "state": "online", 00:12:00.785 "raid_level": "concat", 00:12:00.785 "superblock": true, 00:12:00.785 "num_base_bdevs": 3, 00:12:00.785 "num_base_bdevs_discovered": 3, 00:12:00.785 "num_base_bdevs_operational": 3, 00:12:00.785 "base_bdevs_list": [ 00:12:00.785 { 00:12:00.785 "name": "NewBaseBdev", 00:12:00.785 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:12:00.785 "is_configured": true, 00:12:00.785 "data_offset": 2048, 00:12:00.785 "data_size": 63488 00:12:00.785 }, 00:12:00.785 { 00:12:00.785 "name": "BaseBdev2", 00:12:00.785 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:12:00.785 "is_configured": true, 00:12:00.785 "data_offset": 2048, 00:12:00.785 "data_size": 63488 00:12:00.785 }, 00:12:00.785 { 00:12:00.785 "name": "BaseBdev3", 00:12:00.785 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:12:00.785 "is_configured": true, 00:12:00.785 "data_offset": 2048, 00:12:00.785 "data_size": 63488 00:12:00.785 } 00:12:00.785 ] 00:12:00.785 }' 00:12:00.785 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:00.785 21:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:01.044 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:01.303 [2024-05-14 21:54:01.698572] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.303 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:01.303 "name": "Existed_Raid", 00:12:01.303 "aliases": [ 00:12:01.303 "73b1867e-123c-11ef-8c90-4585f0cfab08" 00:12:01.303 ], 00:12:01.303 "product_name": "Raid Volume", 00:12:01.303 "block_size": 512, 00:12:01.303 "num_blocks": 190464, 00:12:01.303 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:12:01.303 "assigned_rate_limits": { 00:12:01.303 "rw_ios_per_sec": 0, 00:12:01.303 "rw_mbytes_per_sec": 0, 00:12:01.303 "r_mbytes_per_sec": 0, 00:12:01.303 "w_mbytes_per_sec": 0 00:12:01.303 }, 00:12:01.303 "claimed": false, 00:12:01.303 "zoned": false, 00:12:01.303 "supported_io_types": { 00:12:01.303 "read": true, 00:12:01.303 "write": true, 00:12:01.303 "unmap": true, 00:12:01.303 "write_zeroes": true, 00:12:01.303 "flush": true, 00:12:01.303 "reset": true, 00:12:01.303 "compare": false, 00:12:01.303 "compare_and_write": false, 00:12:01.303 "abort": false, 00:12:01.303 "nvme_admin": false, 00:12:01.303 "nvme_io": false 00:12:01.303 }, 00:12:01.303 "memory_domains": [ 00:12:01.303 { 00:12:01.303 "dma_device_id": "system", 00:12:01.303 "dma_device_type": 1 00:12:01.303 }, 00:12:01.303 { 00:12:01.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.303 "dma_device_type": 2 00:12:01.303 }, 00:12:01.303 { 00:12:01.303 "dma_device_id": "system", 00:12:01.303 "dma_device_type": 1 00:12:01.303 }, 00:12:01.303 { 00:12:01.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.303 "dma_device_type": 2 00:12:01.303 }, 00:12:01.303 { 00:12:01.303 "dma_device_id": "system", 00:12:01.303 "dma_device_type": 1 00:12:01.303 }, 00:12:01.303 { 00:12:01.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.303 "dma_device_type": 2 00:12:01.303 } 00:12:01.303 ], 00:12:01.303 "driver_specific": { 00:12:01.303 "raid": { 00:12:01.303 "uuid": "73b1867e-123c-11ef-8c90-4585f0cfab08", 00:12:01.303 "strip_size_kb": 64, 00:12:01.303 "state": "online", 00:12:01.303 "raid_level": "concat", 00:12:01.303 "superblock": true, 00:12:01.303 "num_base_bdevs": 3, 00:12:01.303 "num_base_bdevs_discovered": 3, 00:12:01.303 "num_base_bdevs_operational": 3, 00:12:01.303 "base_bdevs_list": [ 00:12:01.303 { 00:12:01.303 "name": "NewBaseBdev", 00:12:01.303 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:12:01.303 "is_configured": true, 00:12:01.303 "data_offset": 2048, 00:12:01.303 "data_size": 63488 00:12:01.303 }, 00:12:01.303 { 00:12:01.303 "name": "BaseBdev2", 00:12:01.303 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:12:01.303 "is_configured": true, 00:12:01.303 "data_offset": 2048, 00:12:01.303 "data_size": 63488 00:12:01.303 }, 00:12:01.303 { 00:12:01.303 "name": "BaseBdev3", 00:12:01.303 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:12:01.303 "is_configured": true, 00:12:01.303 "data_offset": 2048, 00:12:01.303 "data_size": 63488 00:12:01.303 } 00:12:01.303 ] 00:12:01.303 } 00:12:01.303 } 00:12:01.303 }' 00:12:01.303 21:54:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.303 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:12:01.303 BaseBdev2 00:12:01.303 BaseBdev3' 00:12:01.303 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:01.303 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:01.303 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:01.562 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:01.562 "name": "NewBaseBdev", 00:12:01.562 "aliases": [ 00:12:01.562 "74ebdb8a-123c-11ef-8c90-4585f0cfab08" 00:12:01.562 ], 00:12:01.562 "product_name": "Malloc disk", 00:12:01.562 "block_size": 512, 00:12:01.562 "num_blocks": 65536, 00:12:01.562 "uuid": "74ebdb8a-123c-11ef-8c90-4585f0cfab08", 00:12:01.562 "assigned_rate_limits": { 00:12:01.562 "rw_ios_per_sec": 0, 00:12:01.562 "rw_mbytes_per_sec": 0, 00:12:01.562 "r_mbytes_per_sec": 0, 00:12:01.562 "w_mbytes_per_sec": 0 00:12:01.562 }, 00:12:01.562 "claimed": true, 00:12:01.562 "claim_type": "exclusive_write", 00:12:01.562 "zoned": false, 00:12:01.562 "supported_io_types": { 00:12:01.562 "read": true, 00:12:01.562 "write": true, 00:12:01.562 "unmap": true, 00:12:01.562 "write_zeroes": true, 00:12:01.562 "flush": true, 00:12:01.562 "reset": true, 00:12:01.562 "compare": false, 00:12:01.562 "compare_and_write": false, 00:12:01.562 "abort": true, 00:12:01.562 "nvme_admin": false, 00:12:01.562 "nvme_io": false 00:12:01.562 }, 00:12:01.562 "memory_domains": [ 00:12:01.562 { 00:12:01.562 "dma_device_id": "system", 00:12:01.562 "dma_device_type": 1 00:12:01.562 }, 00:12:01.563 { 00:12:01.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.563 "dma_device_type": 2 00:12:01.563 } 00:12:01.563 ], 00:12:01.563 "driver_specific": {} 00:12:01.563 }' 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.563 21:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.563 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.563 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:01.563 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
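(The property checks above, verify_raid_bdev_properties, walk each configured base bdev and probe a handful of jq fields; a minimal sketch of one iteration, assuming the app from this run is still listening on /var/tmp/spdk-raid.sock and reusing the NewBaseBdev name from the log:)
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# one iteration of the per-base-bdev probe: block size must be 512, and the
# malloc disks carry no metadata, so md_size/md_interleave/dif_type come back null
info=$($rpc bdev_get_bdevs -b NewBaseBdev | jq '.[]')
[[ $(jq .block_size <<< "$info") == 512 ]]
[[ $(jq .md_size <<< "$info") == null ]]
[[ $(jq .md_interleave <<< "$info") == null ]]
[[ $(jq .dif_type <<< "$info") == null ]]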
00:12:01.563 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:01.563 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:01.842 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:01.842 "name": "BaseBdev2", 00:12:01.842 "aliases": [ 00:12:01.842 "72b91d3b-123c-11ef-8c90-4585f0cfab08" 00:12:01.842 ], 00:12:01.842 "product_name": "Malloc disk", 00:12:01.842 "block_size": 512, 00:12:01.842 "num_blocks": 65536, 00:12:01.842 "uuid": "72b91d3b-123c-11ef-8c90-4585f0cfab08", 00:12:01.842 "assigned_rate_limits": { 00:12:01.842 "rw_ios_per_sec": 0, 00:12:01.842 "rw_mbytes_per_sec": 0, 00:12:01.842 "r_mbytes_per_sec": 0, 00:12:01.842 "w_mbytes_per_sec": 0 00:12:01.842 }, 00:12:01.842 "claimed": true, 00:12:01.842 "claim_type": "exclusive_write", 00:12:01.842 "zoned": false, 00:12:01.842 "supported_io_types": { 00:12:01.842 "read": true, 00:12:01.842 "write": true, 00:12:01.842 "unmap": true, 00:12:01.842 "write_zeroes": true, 00:12:01.842 "flush": true, 00:12:01.842 "reset": true, 00:12:01.842 "compare": false, 00:12:01.842 "compare_and_write": false, 00:12:01.842 "abort": true, 00:12:01.842 "nvme_admin": false, 00:12:01.842 "nvme_io": false 00:12:01.842 }, 00:12:01.842 "memory_domains": [ 00:12:01.842 { 00:12:01.843 "dma_device_id": "system", 00:12:01.843 "dma_device_type": 1 00:12:01.843 }, 00:12:01.843 { 00:12:01.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.843 "dma_device_type": 2 00:12:01.843 } 00:12:01.843 ], 00:12:01.843 "driver_specific": {} 00:12:01.843 }' 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:01.843 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:02.108 "name": "BaseBdev3", 00:12:02.108 
"aliases": [ 00:12:02.108 "733f644c-123c-11ef-8c90-4585f0cfab08" 00:12:02.108 ], 00:12:02.108 "product_name": "Malloc disk", 00:12:02.108 "block_size": 512, 00:12:02.108 "num_blocks": 65536, 00:12:02.108 "uuid": "733f644c-123c-11ef-8c90-4585f0cfab08", 00:12:02.108 "assigned_rate_limits": { 00:12:02.108 "rw_ios_per_sec": 0, 00:12:02.108 "rw_mbytes_per_sec": 0, 00:12:02.108 "r_mbytes_per_sec": 0, 00:12:02.108 "w_mbytes_per_sec": 0 00:12:02.108 }, 00:12:02.108 "claimed": true, 00:12:02.108 "claim_type": "exclusive_write", 00:12:02.108 "zoned": false, 00:12:02.108 "supported_io_types": { 00:12:02.108 "read": true, 00:12:02.108 "write": true, 00:12:02.108 "unmap": true, 00:12:02.108 "write_zeroes": true, 00:12:02.108 "flush": true, 00:12:02.108 "reset": true, 00:12:02.108 "compare": false, 00:12:02.108 "compare_and_write": false, 00:12:02.108 "abort": true, 00:12:02.108 "nvme_admin": false, 00:12:02.108 "nvme_io": false 00:12:02.108 }, 00:12:02.108 "memory_domains": [ 00:12:02.108 { 00:12:02.108 "dma_device_id": "system", 00:12:02.108 "dma_device_type": 1 00:12:02.108 }, 00:12:02.108 { 00:12:02.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.108 "dma_device_type": 2 00:12:02.108 } 00:12:02.108 ], 00:12:02.108 "driver_specific": {} 00:12:02.108 }' 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:02.108 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:02.367 [2024-05-14 21:54:02.858557] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.367 [2024-05-14 21:54:02.858599] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.367 [2024-05-14 21:54:02.858635] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.367 [2024-05-14 21:54:02.858655] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.367 [2024-05-14 21:54:02.858661] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e6eb300 name Existed_Raid, state offline 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 54056 00:12:02.367 21:54:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 54056 ']' 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 54056 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 54056 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:12:02.367 killing process with pid 54056 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54056' 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 54056 00:12:02.367 [2024-05-14 21:54:02.891319] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.367 21:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 54056 00:12:02.367 [2024-05-14 21:54:02.922646] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.627 21:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:12:02.627 ************************************ 00:12:02.627 00:12:02.627 real 0m24.664s 00:12:02.627 user 0m44.722s 00:12:02.627 sys 0m3.667s 00:12:02.627 21:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.627 21:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.627 END TEST raid_state_function_test_sb 00:12:02.627 ************************************ 00:12:02.885 21:54:03 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:02.885 21:54:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:02.885 21:54:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.885 21:54:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.885 ************************************ 00:12:02.885 START TEST raid_superblock_test 00:12:02.885 ************************************ 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:02.885 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 
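(Every verify_raid_bdev_state call in the test above reduces to one bdev_raid_get_bdevs round trip plus jq field comparisons; a rough sketch, reusing the socket path from this run — check_raid_state is an illustrative helper name, not part of the harness:)
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
check_raid_state() {
    # $1 = raid bdev name, $2 = expected state (configuring/online/offline)
    local info
    info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$1\")")
    [[ $(jq -r .state <<< "$info") == "$2" ]]
    [[ $(jq -r .raid_level <<< "$info") == concat ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 3 ]]
}
check_raid_state Existed_Raid online   # e.g. the final check before teardown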
00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=54784 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 54784 /var/tmp/spdk-raid.sock 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 54784 ']' 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.886 21:54:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.886 [2024-05-14 21:54:03.271586] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:12:02.886 [2024-05-14 21:54:03.271772] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:03.452 EAL: TSC is not safe to use in SMP mode 00:12:03.452 EAL: TSC is not invariant 00:12:03.452 [2024-05-14 21:54:03.823314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.452 [2024-05-14 21:54:03.916995] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
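(The raid_superblock_test run that begins here assembles its array from three malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, and then creates the concat raid bdev with a superblock; roughly the RPC sequence that appears further down in this log:)
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b malloc$i
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
# 64 KiB strip, concat level, superblock enabled (-s)
$rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s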
00:12:03.452 [2024-05-14 21:54:03.919339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.452 [2024-05-14 21:54:03.920152] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.452 [2024-05-14 21:54:03.920165] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:04.019 malloc1 00:12:04.019 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.277 [2024-05-14 21:54:04.820718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.277 [2024-05-14 21:54:04.820793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.277 [2024-05-14 21:54:04.821425] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc66780 00:12:04.277 [2024-05-14 21:54:04.821464] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.277 [2024-05-14 21:54:04.822396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.277 [2024-05-14 21:54:04.822427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.277 pt1 00:12:04.277 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.277 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.278 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:04.278 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:04.278 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:04.278 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.278 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.278 21:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.278 21:54:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:04.535 malloc2 00:12:04.535 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.794 [2024-05-14 21:54:05.308701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.794 [2024-05-14 21:54:05.308760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.794 [2024-05-14 21:54:05.308788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc66c80 00:12:04.794 [2024-05-14 21:54:05.308797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.794 [2024-05-14 21:54:05.309485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.794 [2024-05-14 21:54:05.309516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.794 pt2 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.794 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:05.053 malloc3 00:12:05.053 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.330 [2024-05-14 21:54:05.780704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.330 [2024-05-14 21:54:05.780780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.330 [2024-05-14 21:54:05.780810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc67180 00:12:05.330 [2024-05-14 21:54:05.780820] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.330 [2024-05-14 21:54:05.781519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.330 [2024-05-14 21:54:05.781552] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.330 pt3 00:12:05.330 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.330 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.330 21:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:05.621 [2024-05-14 21:54:06.004712] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:05.621 [2024-05-14 21:54:06.005322] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.621 [2024-05-14 21:54:06.005356] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.621 [2024-05-14 21:54:06.005409] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bc6b300 00:12:05.621 [2024-05-14 21:54:06.005416] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:05.621 [2024-05-14 21:54:06.005452] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bcc9e20 00:12:05.621 [2024-05-14 21:54:06.005527] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bc6b300 00:12:05.621 [2024-05-14 21:54:06.005532] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bc6b300 00:12:05.621 [2024-05-14 21:54:06.005560] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.621 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.880 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:05.880 "name": "raid_bdev1", 00:12:05.880 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:05.880 "strip_size_kb": 64, 00:12:05.880 "state": "online", 00:12:05.880 "raid_level": "concat", 00:12:05.880 "superblock": true, 00:12:05.880 "num_base_bdevs": 3, 00:12:05.880 "num_base_bdevs_discovered": 3, 00:12:05.880 "num_base_bdevs_operational": 3, 00:12:05.880 "base_bdevs_list": [ 00:12:05.880 { 00:12:05.880 "name": "pt1", 00:12:05.880 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:05.880 "is_configured": true, 00:12:05.880 "data_offset": 2048, 00:12:05.880 "data_size": 63488 00:12:05.880 }, 00:12:05.880 { 00:12:05.880 "name": "pt2", 00:12:05.880 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:05.880 "is_configured": true, 00:12:05.880 
"data_offset": 2048, 00:12:05.880 "data_size": 63488 00:12:05.880 }, 00:12:05.880 { 00:12:05.880 "name": "pt3", 00:12:05.880 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:05.880 "is_configured": true, 00:12:05.880 "data_offset": 2048, 00:12:05.880 "data_size": 63488 00:12:05.880 } 00:12:05.880 ] 00:12:05.880 }' 00:12:05.880 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:05.880 21:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:06.138 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:06.396 [2024-05-14 21:54:06.844810] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.396 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:06.396 "name": "raid_bdev1", 00:12:06.396 "aliases": [ 00:12:06.396 "7c088aa6-123c-11ef-8c90-4585f0cfab08" 00:12:06.396 ], 00:12:06.396 "product_name": "Raid Volume", 00:12:06.396 "block_size": 512, 00:12:06.396 "num_blocks": 190464, 00:12:06.396 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:06.396 "assigned_rate_limits": { 00:12:06.396 "rw_ios_per_sec": 0, 00:12:06.396 "rw_mbytes_per_sec": 0, 00:12:06.396 "r_mbytes_per_sec": 0, 00:12:06.396 "w_mbytes_per_sec": 0 00:12:06.396 }, 00:12:06.396 "claimed": false, 00:12:06.396 "zoned": false, 00:12:06.396 "supported_io_types": { 00:12:06.396 "read": true, 00:12:06.396 "write": true, 00:12:06.396 "unmap": true, 00:12:06.396 "write_zeroes": true, 00:12:06.396 "flush": true, 00:12:06.396 "reset": true, 00:12:06.396 "compare": false, 00:12:06.396 "compare_and_write": false, 00:12:06.396 "abort": false, 00:12:06.396 "nvme_admin": false, 00:12:06.396 "nvme_io": false 00:12:06.396 }, 00:12:06.396 "memory_domains": [ 00:12:06.396 { 00:12:06.396 "dma_device_id": "system", 00:12:06.396 "dma_device_type": 1 00:12:06.396 }, 00:12:06.396 { 00:12:06.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.396 "dma_device_type": 2 00:12:06.396 }, 00:12:06.396 { 00:12:06.396 "dma_device_id": "system", 00:12:06.396 "dma_device_type": 1 00:12:06.396 }, 00:12:06.396 { 00:12:06.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.396 "dma_device_type": 2 00:12:06.396 }, 00:12:06.396 { 00:12:06.396 "dma_device_id": "system", 00:12:06.396 "dma_device_type": 1 00:12:06.396 }, 00:12:06.396 { 00:12:06.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.396 "dma_device_type": 2 00:12:06.396 } 00:12:06.396 ], 00:12:06.396 "driver_specific": { 00:12:06.396 "raid": { 00:12:06.396 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:06.396 "strip_size_kb": 64, 00:12:06.396 "state": "online", 00:12:06.396 "raid_level": "concat", 
00:12:06.396 "superblock": true, 00:12:06.396 "num_base_bdevs": 3, 00:12:06.396 "num_base_bdevs_discovered": 3, 00:12:06.396 "num_base_bdevs_operational": 3, 00:12:06.396 "base_bdevs_list": [ 00:12:06.396 { 00:12:06.396 "name": "pt1", 00:12:06.396 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:06.396 "is_configured": true, 00:12:06.396 "data_offset": 2048, 00:12:06.396 "data_size": 63488 00:12:06.396 }, 00:12:06.396 { 00:12:06.396 "name": "pt2", 00:12:06.396 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:06.396 "is_configured": true, 00:12:06.396 "data_offset": 2048, 00:12:06.396 "data_size": 63488 00:12:06.396 }, 00:12:06.396 { 00:12:06.396 "name": "pt3", 00:12:06.396 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:06.396 "is_configured": true, 00:12:06.396 "data_offset": 2048, 00:12:06.396 "data_size": 63488 00:12:06.396 } 00:12:06.396 ] 00:12:06.396 } 00:12:06.396 } 00:12:06.396 }' 00:12:06.396 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.396 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:12:06.396 pt2 00:12:06.396 pt3' 00:12:06.396 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:06.396 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:06.396 21:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:06.654 "name": "pt1", 00:12:06.654 "aliases": [ 00:12:06.654 "8de4c567-964a-b55a-a5f0-3b1b6e60540c" 00:12:06.654 ], 00:12:06.654 "product_name": "passthru", 00:12:06.654 "block_size": 512, 00:12:06.654 "num_blocks": 65536, 00:12:06.654 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:06.654 "assigned_rate_limits": { 00:12:06.654 "rw_ios_per_sec": 0, 00:12:06.654 "rw_mbytes_per_sec": 0, 00:12:06.654 "r_mbytes_per_sec": 0, 00:12:06.654 "w_mbytes_per_sec": 0 00:12:06.654 }, 00:12:06.654 "claimed": true, 00:12:06.654 "claim_type": "exclusive_write", 00:12:06.654 "zoned": false, 00:12:06.654 "supported_io_types": { 00:12:06.654 "read": true, 00:12:06.654 "write": true, 00:12:06.654 "unmap": true, 00:12:06.654 "write_zeroes": true, 00:12:06.654 "flush": true, 00:12:06.654 "reset": true, 00:12:06.654 "compare": false, 00:12:06.654 "compare_and_write": false, 00:12:06.654 "abort": true, 00:12:06.654 "nvme_admin": false, 00:12:06.654 "nvme_io": false 00:12:06.654 }, 00:12:06.654 "memory_domains": [ 00:12:06.654 { 00:12:06.654 "dma_device_id": "system", 00:12:06.654 "dma_device_type": 1 00:12:06.654 }, 00:12:06.654 { 00:12:06.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.654 "dma_device_type": 2 00:12:06.654 } 00:12:06.654 ], 00:12:06.654 "driver_specific": { 00:12:06.654 "passthru": { 00:12:06.654 "name": "pt1", 00:12:06.654 "base_bdev_name": "malloc1" 00:12:06.654 } 00:12:06.654 } 00:12:06.654 }' 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:06.654 
21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:06.654 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:07.217 "name": "pt2", 00:12:07.217 "aliases": [ 00:12:07.217 "c6d1939c-f67c-4c52-813c-44d5eaf65d25" 00:12:07.217 ], 00:12:07.217 "product_name": "passthru", 00:12:07.217 "block_size": 512, 00:12:07.217 "num_blocks": 65536, 00:12:07.217 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:07.217 "assigned_rate_limits": { 00:12:07.217 "rw_ios_per_sec": 0, 00:12:07.217 "rw_mbytes_per_sec": 0, 00:12:07.217 "r_mbytes_per_sec": 0, 00:12:07.217 "w_mbytes_per_sec": 0 00:12:07.217 }, 00:12:07.217 "claimed": true, 00:12:07.217 "claim_type": "exclusive_write", 00:12:07.217 "zoned": false, 00:12:07.217 "supported_io_types": { 00:12:07.217 "read": true, 00:12:07.217 "write": true, 00:12:07.217 "unmap": true, 00:12:07.217 "write_zeroes": true, 00:12:07.217 "flush": true, 00:12:07.217 "reset": true, 00:12:07.217 "compare": false, 00:12:07.217 "compare_and_write": false, 00:12:07.217 "abort": true, 00:12:07.217 "nvme_admin": false, 00:12:07.217 "nvme_io": false 00:12:07.217 }, 00:12:07.217 "memory_domains": [ 00:12:07.217 { 00:12:07.217 "dma_device_id": "system", 00:12:07.217 "dma_device_type": 1 00:12:07.217 }, 00:12:07.217 { 00:12:07.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.217 "dma_device_type": 2 00:12:07.217 } 00:12:07.217 ], 00:12:07.217 "driver_specific": { 00:12:07.217 "passthru": { 00:12:07.217 "name": "pt2", 00:12:07.217 "base_bdev_name": "malloc2" 00:12:07.217 } 00:12:07.217 } 00:12:07.217 }' 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:07.217 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:07.475 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:07.475 "name": "pt3", 00:12:07.475 "aliases": [ 00:12:07.475 "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4" 00:12:07.475 ], 00:12:07.475 "product_name": "passthru", 00:12:07.475 "block_size": 512, 00:12:07.475 "num_blocks": 65536, 00:12:07.475 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:07.475 "assigned_rate_limits": { 00:12:07.475 "rw_ios_per_sec": 0, 00:12:07.475 "rw_mbytes_per_sec": 0, 00:12:07.475 "r_mbytes_per_sec": 0, 00:12:07.475 "w_mbytes_per_sec": 0 00:12:07.475 }, 00:12:07.475 "claimed": true, 00:12:07.475 "claim_type": "exclusive_write", 00:12:07.475 "zoned": false, 00:12:07.475 "supported_io_types": { 00:12:07.476 "read": true, 00:12:07.476 "write": true, 00:12:07.476 "unmap": true, 00:12:07.476 "write_zeroes": true, 00:12:07.476 "flush": true, 00:12:07.476 "reset": true, 00:12:07.476 "compare": false, 00:12:07.476 "compare_and_write": false, 00:12:07.476 "abort": true, 00:12:07.476 "nvme_admin": false, 00:12:07.476 "nvme_io": false 00:12:07.476 }, 00:12:07.476 "memory_domains": [ 00:12:07.476 { 00:12:07.476 "dma_device_id": "system", 00:12:07.476 "dma_device_type": 1 00:12:07.476 }, 00:12:07.476 { 00:12:07.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.476 "dma_device_type": 2 00:12:07.476 } 00:12:07.476 ], 00:12:07.476 "driver_specific": { 00:12:07.476 "passthru": { 00:12:07.476 "name": "pt3", 00:12:07.476 "base_bdev_name": "malloc3" 00:12:07.476 } 00:12:07.476 } 00:12:07.476 }' 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:07.476 21:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:07.734 [2024-05-14 21:54:08.152851] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.734 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7c088aa6-123c-11ef-8c90-4585f0cfab08 00:12:07.734 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7c088aa6-123c-11ef-8c90-4585f0cfab08 ']' 00:12:07.734 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:07.992 [2024-05-14 21:54:08.404798] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.992 [2024-05-14 21:54:08.404838] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.992 [2024-05-14 21:54:08.404868] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.992 [2024-05-14 21:54:08.404887] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.992 [2024-05-14 21:54:08.404892] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc6b300 name raid_bdev1, state offline 00:12:07.992 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.992 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:08.251 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:08.251 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:08.251 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.251 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:08.510 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.510 21:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:08.768 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.768 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:09.026 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:09.026 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 
-- # local es=0 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:09.284 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:09.542 [2024-05-14 21:54:09.884862] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:09.542 [2024-05-14 21:54:09.885671] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:09.542 [2024-05-14 21:54:09.885693] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:09.542 [2024-05-14 21:54:09.885711] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:09.542 [2024-05-14 21:54:09.885762] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:09.542 [2024-05-14 21:54:09.885775] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:09.542 [2024-05-14 21:54:09.885785] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.542 [2024-05-14 21:54:09.885790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc6b300 name raid_bdev1, state configuring 00:12:09.542 request: 00:12:09.542 { 00:12:09.542 "name": "raid_bdev1", 00:12:09.542 "raid_level": "concat", 00:12:09.542 "base_bdevs": [ 00:12:09.542 "malloc1", 00:12:09.542 "malloc2", 00:12:09.542 "malloc3" 00:12:09.542 ], 00:12:09.542 "superblock": false, 00:12:09.542 "strip_size_kb": 64, 00:12:09.542 "method": "bdev_raid_create", 00:12:09.542 "req_id": 1 00:12:09.542 } 00:12:09.542 Got JSON-RPC error response 00:12:09.542 response: 00:12:09.542 { 00:12:09.542 "code": -17, 00:12:09.542 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:09.542 } 00:12:09.542 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:09.542 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.542 21:54:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.542 21:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.542 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.542 21:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:09.801 [2024-05-14 21:54:10.344905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:09.801 [2024-05-14 21:54:10.344995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.801 [2024-05-14 21:54:10.345031] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc67180 00:12:09.801 [2024-05-14 21:54:10.345040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.801 [2024-05-14 21:54:10.345944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.801 [2024-05-14 21:54:10.345975] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:09.801 [2024-05-14 21:54:10.346018] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:09.801 [2024-05-14 21:54:10.346033] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:09.801 pt1 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:09.801 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:09.802 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:09.802 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.802 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.368 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:10.368 "name": "raid_bdev1", 00:12:10.368 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:10.368 "strip_size_kb": 64, 00:12:10.368 
"state": "configuring", 00:12:10.368 "raid_level": "concat", 00:12:10.368 "superblock": true, 00:12:10.368 "num_base_bdevs": 3, 00:12:10.368 "num_base_bdevs_discovered": 1, 00:12:10.368 "num_base_bdevs_operational": 3, 00:12:10.368 "base_bdevs_list": [ 00:12:10.368 { 00:12:10.368 "name": "pt1", 00:12:10.368 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:10.368 "is_configured": true, 00:12:10.368 "data_offset": 2048, 00:12:10.368 "data_size": 63488 00:12:10.368 }, 00:12:10.368 { 00:12:10.368 "name": null, 00:12:10.368 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:10.368 "is_configured": false, 00:12:10.368 "data_offset": 2048, 00:12:10.368 "data_size": 63488 00:12:10.368 }, 00:12:10.368 { 00:12:10.368 "name": null, 00:12:10.368 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:10.368 "is_configured": false, 00:12:10.368 "data_offset": 2048, 00:12:10.368 "data_size": 63488 00:12:10.368 } 00:12:10.368 ] 00:12:10.368 }' 00:12:10.368 21:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:10.368 21:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.626 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:10.626 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.884 [2024-05-14 21:54:11.292941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.884 [2024-05-14 21:54:11.293058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.884 [2024-05-14 21:54:11.293094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc66780 00:12:10.884 [2024-05-14 21:54:11.293104] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.884 [2024-05-14 21:54:11.293262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.884 [2024-05-14 21:54:11.293275] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.884 [2024-05-14 21:54:11.293305] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:10.884 [2024-05-14 21:54:11.293314] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.884 pt2 00:12:10.884 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:11.142 [2024-05-14 21:54:11.576955] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.142 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.400 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:11.400 "name": "raid_bdev1", 00:12:11.400 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:11.400 "strip_size_kb": 64, 00:12:11.400 "state": "configuring", 00:12:11.400 "raid_level": "concat", 00:12:11.400 "superblock": true, 00:12:11.400 "num_base_bdevs": 3, 00:12:11.400 "num_base_bdevs_discovered": 1, 00:12:11.400 "num_base_bdevs_operational": 3, 00:12:11.400 "base_bdevs_list": [ 00:12:11.400 { 00:12:11.400 "name": "pt1", 00:12:11.400 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:11.400 "is_configured": true, 00:12:11.400 "data_offset": 2048, 00:12:11.400 "data_size": 63488 00:12:11.400 }, 00:12:11.400 { 00:12:11.400 "name": null, 00:12:11.400 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:11.400 "is_configured": false, 00:12:11.400 "data_offset": 2048, 00:12:11.400 "data_size": 63488 00:12:11.400 }, 00:12:11.400 { 00:12:11.400 "name": null, 00:12:11.400 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:11.400 "is_configured": false, 00:12:11.400 "data_offset": 2048, 00:12:11.400 "data_size": 63488 00:12:11.400 } 00:12:11.400 ] 00:12:11.400 }' 00:12:11.400 21:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:11.400 21:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.658 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:11.658 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:11.658 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:11.916 [2024-05-14 21:54:12.436982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:11.916 [2024-05-14 21:54:12.437065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.916 [2024-05-14 21:54:12.437101] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc66780 00:12:11.916 [2024-05-14 21:54:12.437110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.916 [2024-05-14 21:54:12.437292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.916 [2024-05-14 21:54:12.437306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:11.916 [2024-05-14 21:54:12.437334] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:11.916 [2024-05-14 21:54:12.437344] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:11.916 pt2 00:12:11.916 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 
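The checks traced above and below all follow the same pattern: dump the raid bdev list with bdev_raid_get_bdevs, pick out raid_bdev1 with jq, and compare a few fields against what the current step expects. The fragment below is only a rough hand-written sketch of that verify_raid_bdev_state-style check, reusing the rpc.py path and RPC socket visible in the log; the real verify_raid_bdev_state in bdev_raid.sh is more thorough.

rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Fetch the JSON entry for raid_bdev1 out of the full raid bdev list.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# Expected values depend on the step: "configuring"/1 while only pt1 has been
# re-created, "online"/3 once all three passthru bdevs are back.
[[ $(jq -r '.state' <<< "$info") == configuring ]]
[[ $(jq -r '.raid_level' <<< "$info") == concat ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]]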
00:12:11.916 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:11.916 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:12.175 [2024-05-14 21:54:12.660993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:12.175 [2024-05-14 21:54:12.661077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.175 [2024-05-14 21:54:12.661115] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc67400 00:12:12.175 [2024-05-14 21:54:12.661124] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.175 [2024-05-14 21:54:12.661284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.175 [2024-05-14 21:54:12.661298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:12.175 [2024-05-14 21:54:12.661326] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:12:12.175 [2024-05-14 21:54:12.661335] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:12.175 [2024-05-14 21:54:12.661373] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bc6b300 00:12:12.175 [2024-05-14 21:54:12.661378] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:12.175 [2024-05-14 21:54:12.661401] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bcc9e20 00:12:12.175 [2024-05-14 21:54:12.661482] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bc6b300 00:12:12.175 [2024-05-14 21:54:12.661487] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bc6b300 00:12:12.175 [2024-05-14 21:54:12.661510] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.175 pt3 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.175 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.434 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:12.434 "name": "raid_bdev1", 00:12:12.434 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:12.434 "strip_size_kb": 64, 00:12:12.434 "state": "online", 00:12:12.434 "raid_level": "concat", 00:12:12.434 "superblock": true, 00:12:12.434 "num_base_bdevs": 3, 00:12:12.434 "num_base_bdevs_discovered": 3, 00:12:12.434 "num_base_bdevs_operational": 3, 00:12:12.434 "base_bdevs_list": [ 00:12:12.434 { 00:12:12.434 "name": "pt1", 00:12:12.434 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:12.434 "is_configured": true, 00:12:12.434 "data_offset": 2048, 00:12:12.434 "data_size": 63488 00:12:12.434 }, 00:12:12.434 { 00:12:12.434 "name": "pt2", 00:12:12.434 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:12.434 "is_configured": true, 00:12:12.434 "data_offset": 2048, 00:12:12.434 "data_size": 63488 00:12:12.434 }, 00:12:12.434 { 00:12:12.434 "name": "pt3", 00:12:12.434 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:12.434 "is_configured": true, 00:12:12.434 "data_offset": 2048, 00:12:12.434 "data_size": 63488 00:12:12.434 } 00:12:12.434 ] 00:12:12.434 }' 00:12:12.434 21:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:12.434 21:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:12.692 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:12.950 [2024-05-14 21:54:13.517065] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.950 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:12.950 "name": "raid_bdev1", 00:12:12.950 "aliases": [ 00:12:12.950 "7c088aa6-123c-11ef-8c90-4585f0cfab08" 00:12:12.950 ], 00:12:12.950 "product_name": "Raid Volume", 00:12:12.950 "block_size": 512, 00:12:12.950 "num_blocks": 190464, 00:12:12.950 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:12.950 "assigned_rate_limits": { 00:12:12.950 "rw_ios_per_sec": 0, 00:12:12.950 "rw_mbytes_per_sec": 0, 00:12:12.950 "r_mbytes_per_sec": 0, 00:12:12.950 "w_mbytes_per_sec": 0 00:12:12.950 }, 00:12:12.950 "claimed": false, 00:12:12.950 "zoned": false, 00:12:12.950 "supported_io_types": { 00:12:12.950 "read": true, 00:12:12.950 "write": true, 00:12:12.950 "unmap": true, 00:12:12.950 "write_zeroes": true, 00:12:12.950 "flush": true, 00:12:12.950 "reset": true, 00:12:12.950 "compare": false, 00:12:12.950 "compare_and_write": false, 
00:12:12.950 "abort": false, 00:12:12.950 "nvme_admin": false, 00:12:12.950 "nvme_io": false 00:12:12.950 }, 00:12:12.950 "memory_domains": [ 00:12:12.950 { 00:12:12.950 "dma_device_id": "system", 00:12:12.950 "dma_device_type": 1 00:12:12.950 }, 00:12:12.950 { 00:12:12.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.950 "dma_device_type": 2 00:12:12.950 }, 00:12:12.950 { 00:12:12.950 "dma_device_id": "system", 00:12:12.950 "dma_device_type": 1 00:12:12.950 }, 00:12:12.951 { 00:12:12.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.951 "dma_device_type": 2 00:12:12.951 }, 00:12:12.951 { 00:12:12.951 "dma_device_id": "system", 00:12:12.951 "dma_device_type": 1 00:12:12.951 }, 00:12:12.951 { 00:12:12.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.951 "dma_device_type": 2 00:12:12.951 } 00:12:12.951 ], 00:12:12.951 "driver_specific": { 00:12:12.951 "raid": { 00:12:12.951 "uuid": "7c088aa6-123c-11ef-8c90-4585f0cfab08", 00:12:12.951 "strip_size_kb": 64, 00:12:12.951 "state": "online", 00:12:12.951 "raid_level": "concat", 00:12:12.951 "superblock": true, 00:12:12.951 "num_base_bdevs": 3, 00:12:12.951 "num_base_bdevs_discovered": 3, 00:12:12.951 "num_base_bdevs_operational": 3, 00:12:12.951 "base_bdevs_list": [ 00:12:12.951 { 00:12:12.951 "name": "pt1", 00:12:12.951 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:12.951 "is_configured": true, 00:12:12.951 "data_offset": 2048, 00:12:12.951 "data_size": 63488 00:12:12.951 }, 00:12:12.951 { 00:12:12.951 "name": "pt2", 00:12:12.951 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:12.951 "is_configured": true, 00:12:12.951 "data_offset": 2048, 00:12:12.951 "data_size": 63488 00:12:12.951 }, 00:12:12.951 { 00:12:12.951 "name": "pt3", 00:12:12.951 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:12.951 "is_configured": true, 00:12:12.951 "data_offset": 2048, 00:12:12.951 "data_size": 63488 00:12:12.951 } 00:12:12.951 ] 00:12:12.951 } 00:12:12.951 } 00:12:12.951 }' 00:12:12.951 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.209 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:12:13.209 pt2 00:12:13.209 pt3' 00:12:13.209 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:13.209 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:13.209 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:13.468 "name": "pt1", 00:12:13.468 "aliases": [ 00:12:13.468 "8de4c567-964a-b55a-a5f0-3b1b6e60540c" 00:12:13.468 ], 00:12:13.468 "product_name": "passthru", 00:12:13.468 "block_size": 512, 00:12:13.468 "num_blocks": 65536, 00:12:13.468 "uuid": "8de4c567-964a-b55a-a5f0-3b1b6e60540c", 00:12:13.468 "assigned_rate_limits": { 00:12:13.468 "rw_ios_per_sec": 0, 00:12:13.468 "rw_mbytes_per_sec": 0, 00:12:13.468 "r_mbytes_per_sec": 0, 00:12:13.468 "w_mbytes_per_sec": 0 00:12:13.468 }, 00:12:13.468 "claimed": true, 00:12:13.468 "claim_type": "exclusive_write", 00:12:13.468 "zoned": false, 00:12:13.468 "supported_io_types": { 00:12:13.468 "read": true, 00:12:13.468 "write": true, 00:12:13.468 "unmap": true, 00:12:13.468 "write_zeroes": true, 00:12:13.468 "flush": true, 00:12:13.468 
"reset": true, 00:12:13.468 "compare": false, 00:12:13.468 "compare_and_write": false, 00:12:13.468 "abort": true, 00:12:13.468 "nvme_admin": false, 00:12:13.468 "nvme_io": false 00:12:13.468 }, 00:12:13.468 "memory_domains": [ 00:12:13.468 { 00:12:13.468 "dma_device_id": "system", 00:12:13.468 "dma_device_type": 1 00:12:13.468 }, 00:12:13.468 { 00:12:13.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.468 "dma_device_type": 2 00:12:13.468 } 00:12:13.468 ], 00:12:13.468 "driver_specific": { 00:12:13.468 "passthru": { 00:12:13.468 "name": "pt1", 00:12:13.468 "base_bdev_name": "malloc1" 00:12:13.468 } 00:12:13.468 } 00:12:13.468 }' 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:13.468 21:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:13.727 "name": "pt2", 00:12:13.727 "aliases": [ 00:12:13.727 "c6d1939c-f67c-4c52-813c-44d5eaf65d25" 00:12:13.727 ], 00:12:13.727 "product_name": "passthru", 00:12:13.727 "block_size": 512, 00:12:13.727 "num_blocks": 65536, 00:12:13.727 "uuid": "c6d1939c-f67c-4c52-813c-44d5eaf65d25", 00:12:13.727 "assigned_rate_limits": { 00:12:13.727 "rw_ios_per_sec": 0, 00:12:13.727 "rw_mbytes_per_sec": 0, 00:12:13.727 "r_mbytes_per_sec": 0, 00:12:13.727 "w_mbytes_per_sec": 0 00:12:13.727 }, 00:12:13.727 "claimed": true, 00:12:13.727 "claim_type": "exclusive_write", 00:12:13.727 "zoned": false, 00:12:13.727 "supported_io_types": { 00:12:13.727 "read": true, 00:12:13.727 "write": true, 00:12:13.727 "unmap": true, 00:12:13.727 "write_zeroes": true, 00:12:13.727 "flush": true, 00:12:13.727 "reset": true, 00:12:13.727 "compare": false, 00:12:13.727 "compare_and_write": false, 00:12:13.727 "abort": true, 00:12:13.727 "nvme_admin": false, 00:12:13.727 "nvme_io": false 00:12:13.727 }, 00:12:13.727 "memory_domains": [ 00:12:13.727 { 00:12:13.727 "dma_device_id": "system", 00:12:13.727 "dma_device_type": 1 00:12:13.727 }, 00:12:13.727 { 00:12:13.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.727 "dma_device_type": 2 00:12:13.727 } 
00:12:13.727 ], 00:12:13.727 "driver_specific": { 00:12:13.727 "passthru": { 00:12:13.727 "name": "pt2", 00:12:13.727 "base_bdev_name": "malloc2" 00:12:13.727 } 00:12:13.727 } 00:12:13.727 }' 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:13.727 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:13.986 "name": "pt3", 00:12:13.986 "aliases": [ 00:12:13.986 "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4" 00:12:13.986 ], 00:12:13.986 "product_name": "passthru", 00:12:13.986 "block_size": 512, 00:12:13.986 "num_blocks": 65536, 00:12:13.986 "uuid": "2732976b-1b1e-6a5d-9b87-ebbd8d866bd4", 00:12:13.986 "assigned_rate_limits": { 00:12:13.986 "rw_ios_per_sec": 0, 00:12:13.986 "rw_mbytes_per_sec": 0, 00:12:13.986 "r_mbytes_per_sec": 0, 00:12:13.986 "w_mbytes_per_sec": 0 00:12:13.986 }, 00:12:13.986 "claimed": true, 00:12:13.986 "claim_type": "exclusive_write", 00:12:13.986 "zoned": false, 00:12:13.986 "supported_io_types": { 00:12:13.986 "read": true, 00:12:13.986 "write": true, 00:12:13.986 "unmap": true, 00:12:13.986 "write_zeroes": true, 00:12:13.986 "flush": true, 00:12:13.986 "reset": true, 00:12:13.986 "compare": false, 00:12:13.986 "compare_and_write": false, 00:12:13.986 "abort": true, 00:12:13.986 "nvme_admin": false, 00:12:13.986 "nvme_io": false 00:12:13.986 }, 00:12:13.986 "memory_domains": [ 00:12:13.986 { 00:12:13.986 "dma_device_id": "system", 00:12:13.986 "dma_device_type": 1 00:12:13.986 }, 00:12:13.986 { 00:12:13.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.986 "dma_device_type": 2 00:12:13.986 } 00:12:13.986 ], 00:12:13.986 "driver_specific": { 00:12:13.986 "passthru": { 00:12:13.986 "name": "pt3", 00:12:13.986 "base_bdev_name": "malloc3" 00:12:13.986 } 00:12:13.986 } 00:12:13.986 }' 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.986 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:14.244 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:14.244 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.244 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:14.244 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:14.244 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:14.244 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:14.245 [2024-05-14 21:54:14.809146] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7c088aa6-123c-11ef-8c90-4585f0cfab08 '!=' 7c088aa6-123c-11ef-8c90-4585f0cfab08 ']' 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 54784 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 54784 ']' 00:12:14.245 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 54784 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 54784 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:12:14.503 killing process with pid 54784 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54784' 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 54784 00:12:14.503 [2024-05-14 21:54:14.845046] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.503 [2024-05-14 21:54:14.845072] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.503 [2024-05-14 21:54:14.845091] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.503 [2024-05-14 21:54:14.845095] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc6b300 name 
raid_bdev1, state offline 00:12:14.503 21:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 54784 00:12:14.503 [2024-05-14 21:54:14.870353] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.762 21:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:12:14.762 00:12:14.762 real 0m11.859s 00:12:14.762 user 0m20.980s 00:12:14.762 sys 0m1.873s 00:12:14.762 21:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:14.762 21:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.762 ************************************ 00:12:14.762 END TEST raid_superblock_test 00:12:14.762 ************************************ 00:12:14.762 21:54:15 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:12:14.762 21:54:15 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:12:14.762 21:54:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:14.762 21:54:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:14.762 21:54:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.762 ************************************ 00:12:14.762 START TEST raid_state_function_test 00:12:14.762 ************************************ 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # 
local strip_size 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=55137 00:12:14.762 Process raid pid: 55137 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55137' 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 55137 /var/tmp/spdk-raid.sock 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 55137 ']' 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:14.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:14.762 21:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.762 [2024-05-14 21:54:15.183712] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:12:14.762 [2024-05-14 21:54:15.183915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:15.329 EAL: TSC is not safe to use in SMP mode 00:12:15.329 EAL: TSC is not invariant 00:12:15.329 [2024-05-14 21:54:15.909113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.587 [2024-05-14 21:54:16.013038] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
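This test starts from a bare SPDK application: bdev_svc is launched with the bdev_raid debug log component enabled, and the script blocks until the app's JSON-RPC socket answers before issuing any bdev_raid_create calls; the EAL/TSC notices above and the reactor message below are that app booting on the single-core FreeBSD VM. A minimal sketch of that startup handshake, assuming the binary and socket paths shown in the log and using rpc_get_methods purely as a liveness probe (the actual waitforlisten helper in autotest_common.sh does more than this):

app=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Start the RPC target with the bdev_raid debug flag, as the test does.
"$app" -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
# Poll until the app responds to a trivial RPC on the socket.
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done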
00:12:15.587 [2024-05-14 21:54:16.015746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.587 [2024-05-14 21:54:16.016719] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.587 [2024-05-14 21:54:16.016730] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.845 21:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.845 21:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:12:15.845 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:16.104 [2024-05-14 21:54:16.486831] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.104 [2024-05-14 21:54:16.486893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.104 [2024-05-14 21:54:16.486899] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.104 [2024-05-14 21:54:16.486908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.104 [2024-05-14 21:54:16.486911] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.104 [2024-05-14 21:54:16.486918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.104 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:16.104 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:16.104 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.105 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.363 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:16.363 "name": "Existed_Raid", 00:12:16.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.363 "strip_size_kb": 0, 00:12:16.364 "state": "configuring", 00:12:16.364 "raid_level": "raid1", 00:12:16.364 "superblock": false, 00:12:16.364 "num_base_bdevs": 3, 00:12:16.364 "num_base_bdevs_discovered": 0, 00:12:16.364 "num_base_bdevs_operational": 3, 00:12:16.364 "base_bdevs_list": [ 
00:12:16.364 { 00:12:16.364 "name": "BaseBdev1", 00:12:16.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.364 "is_configured": false, 00:12:16.364 "data_offset": 0, 00:12:16.364 "data_size": 0 00:12:16.364 }, 00:12:16.364 { 00:12:16.364 "name": "BaseBdev2", 00:12:16.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.364 "is_configured": false, 00:12:16.364 "data_offset": 0, 00:12:16.364 "data_size": 0 00:12:16.364 }, 00:12:16.364 { 00:12:16.364 "name": "BaseBdev3", 00:12:16.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.364 "is_configured": false, 00:12:16.364 "data_offset": 0, 00:12:16.364 "data_size": 0 00:12:16.364 } 00:12:16.364 ] 00:12:16.364 }' 00:12:16.364 21:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:16.364 21:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.622 21:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:16.887 [2024-05-14 21:54:17.346817] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.887 [2024-05-14 21:54:17.346853] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e174300 name Existed_Raid, state configuring 00:12:16.887 21:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:17.153 [2024-05-14 21:54:17.694823] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.153 [2024-05-14 21:54:17.694875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.153 [2024-05-14 21:54:17.694881] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.153 [2024-05-14 21:54:17.694890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.153 [2024-05-14 21:54:17.694893] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.153 [2024-05-14 21:54:17.694900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.153 21:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.411 [2024-05-14 21:54:17.931839] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.411 BaseBdev1 00:12:17.411 21:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:17.411 21:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:17.411 21:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:17.411 21:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:17.411 21:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:17.411 21:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:17.411 21:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:17.669 21:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.928 [ 00:12:17.928 { 00:12:17.928 "name": "BaseBdev1", 00:12:17.928 "aliases": [ 00:12:17.928 "83245379-123c-11ef-8c90-4585f0cfab08" 00:12:17.928 ], 00:12:17.928 "product_name": "Malloc disk", 00:12:17.928 "block_size": 512, 00:12:17.928 "num_blocks": 65536, 00:12:17.928 "uuid": "83245379-123c-11ef-8c90-4585f0cfab08", 00:12:17.928 "assigned_rate_limits": { 00:12:17.928 "rw_ios_per_sec": 0, 00:12:17.928 "rw_mbytes_per_sec": 0, 00:12:17.928 "r_mbytes_per_sec": 0, 00:12:17.928 "w_mbytes_per_sec": 0 00:12:17.928 }, 00:12:17.928 "claimed": true, 00:12:17.928 "claim_type": "exclusive_write", 00:12:17.928 "zoned": false, 00:12:17.928 "supported_io_types": { 00:12:17.928 "read": true, 00:12:17.928 "write": true, 00:12:17.928 "unmap": true, 00:12:17.928 "write_zeroes": true, 00:12:17.928 "flush": true, 00:12:17.928 "reset": true, 00:12:17.928 "compare": false, 00:12:17.928 "compare_and_write": false, 00:12:17.928 "abort": true, 00:12:17.928 "nvme_admin": false, 00:12:17.928 "nvme_io": false 00:12:17.928 }, 00:12:17.928 "memory_domains": [ 00:12:17.928 { 00:12:17.928 "dma_device_id": "system", 00:12:17.928 "dma_device_type": 1 00:12:17.928 }, 00:12:17.928 { 00:12:17.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.928 "dma_device_type": 2 00:12:17.928 } 00:12:17.928 ], 00:12:17.928 "driver_specific": {} 00:12:17.928 } 00:12:17.928 ] 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.928 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.496 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:18.496 "name": "Existed_Raid", 00:12:18.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.496 "strip_size_kb": 0, 00:12:18.496 "state": "configuring", 00:12:18.496 "raid_level": "raid1", 00:12:18.496 "superblock": false, 00:12:18.496 
"num_base_bdevs": 3, 00:12:18.496 "num_base_bdevs_discovered": 1, 00:12:18.496 "num_base_bdevs_operational": 3, 00:12:18.496 "base_bdevs_list": [ 00:12:18.496 { 00:12:18.496 "name": "BaseBdev1", 00:12:18.496 "uuid": "83245379-123c-11ef-8c90-4585f0cfab08", 00:12:18.496 "is_configured": true, 00:12:18.496 "data_offset": 0, 00:12:18.496 "data_size": 65536 00:12:18.496 }, 00:12:18.496 { 00:12:18.496 "name": "BaseBdev2", 00:12:18.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.496 "is_configured": false, 00:12:18.496 "data_offset": 0, 00:12:18.496 "data_size": 0 00:12:18.496 }, 00:12:18.496 { 00:12:18.496 "name": "BaseBdev3", 00:12:18.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.496 "is_configured": false, 00:12:18.496 "data_offset": 0, 00:12:18.496 "data_size": 0 00:12:18.496 } 00:12:18.496 ] 00:12:18.496 }' 00:12:18.496 21:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:18.496 21:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.755 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:19.015 [2024-05-14 21:54:19.358824] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.015 [2024-05-14 21:54:19.358855] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e174300 name Existed_Raid, state configuring 00:12:19.015 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:19.015 [2024-05-14 21:54:19.590833] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.015 [2024-05-14 21:54:19.591649] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.015 [2024-05-14 21:54:19.591694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.015 [2024-05-14 21:54:19.591699] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:19.015 [2024-05-14 21:54:19.591708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.275 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.534 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:19.534 "name": "Existed_Raid", 00:12:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.534 "strip_size_kb": 0, 00:12:19.534 "state": "configuring", 00:12:19.534 "raid_level": "raid1", 00:12:19.534 "superblock": false, 00:12:19.534 "num_base_bdevs": 3, 00:12:19.534 "num_base_bdevs_discovered": 1, 00:12:19.534 "num_base_bdevs_operational": 3, 00:12:19.534 "base_bdevs_list": [ 00:12:19.534 { 00:12:19.534 "name": "BaseBdev1", 00:12:19.534 "uuid": "83245379-123c-11ef-8c90-4585f0cfab08", 00:12:19.534 "is_configured": true, 00:12:19.534 "data_offset": 0, 00:12:19.534 "data_size": 65536 00:12:19.534 }, 00:12:19.534 { 00:12:19.534 "name": "BaseBdev2", 00:12:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.534 "is_configured": false, 00:12:19.534 "data_offset": 0, 00:12:19.534 "data_size": 0 00:12:19.534 }, 00:12:19.534 { 00:12:19.534 "name": "BaseBdev3", 00:12:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.534 "is_configured": false, 00:12:19.534 "data_offset": 0, 00:12:19.534 "data_size": 0 00:12:19.534 } 00:12:19.534 ] 00:12:19.534 }' 00:12:19.534 21:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:19.534 21:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.802 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.061 [2024-05-14 21:54:20.426969] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.061 BaseBdev2 00:12:20.061 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:20.061 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:20.061 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:20.061 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:20.061 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:20.061 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:20.061 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:20.342 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.628 [ 00:12:20.628 { 00:12:20.628 "name": "BaseBdev2", 00:12:20.628 "aliases": [ 00:12:20.628 "84a12f58-123c-11ef-8c90-4585f0cfab08" 00:12:20.628 ], 00:12:20.628 "product_name": "Malloc 
disk", 00:12:20.628 "block_size": 512, 00:12:20.628 "num_blocks": 65536, 00:12:20.628 "uuid": "84a12f58-123c-11ef-8c90-4585f0cfab08", 00:12:20.628 "assigned_rate_limits": { 00:12:20.628 "rw_ios_per_sec": 0, 00:12:20.628 "rw_mbytes_per_sec": 0, 00:12:20.628 "r_mbytes_per_sec": 0, 00:12:20.628 "w_mbytes_per_sec": 0 00:12:20.628 }, 00:12:20.628 "claimed": true, 00:12:20.628 "claim_type": "exclusive_write", 00:12:20.628 "zoned": false, 00:12:20.628 "supported_io_types": { 00:12:20.628 "read": true, 00:12:20.628 "write": true, 00:12:20.628 "unmap": true, 00:12:20.628 "write_zeroes": true, 00:12:20.628 "flush": true, 00:12:20.628 "reset": true, 00:12:20.628 "compare": false, 00:12:20.628 "compare_and_write": false, 00:12:20.628 "abort": true, 00:12:20.628 "nvme_admin": false, 00:12:20.628 "nvme_io": false 00:12:20.628 }, 00:12:20.628 "memory_domains": [ 00:12:20.628 { 00:12:20.628 "dma_device_id": "system", 00:12:20.628 "dma_device_type": 1 00:12:20.628 }, 00:12:20.628 { 00:12:20.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.628 "dma_device_type": 2 00:12:20.628 } 00:12:20.628 ], 00:12:20.628 "driver_specific": {} 00:12:20.628 } 00:12:20.628 ] 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.628 21:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.628 21:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:20.628 "name": "Existed_Raid", 00:12:20.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.628 "strip_size_kb": 0, 00:12:20.628 "state": "configuring", 00:12:20.628 "raid_level": "raid1", 00:12:20.628 "superblock": false, 00:12:20.628 "num_base_bdevs": 3, 00:12:20.628 "num_base_bdevs_discovered": 2, 00:12:20.628 "num_base_bdevs_operational": 3, 00:12:20.628 "base_bdevs_list": [ 00:12:20.628 { 00:12:20.628 "name": "BaseBdev1", 00:12:20.628 "uuid": 
"83245379-123c-11ef-8c90-4585f0cfab08", 00:12:20.628 "is_configured": true, 00:12:20.628 "data_offset": 0, 00:12:20.628 "data_size": 65536 00:12:20.628 }, 00:12:20.628 { 00:12:20.628 "name": "BaseBdev2", 00:12:20.628 "uuid": "84a12f58-123c-11ef-8c90-4585f0cfab08", 00:12:20.628 "is_configured": true, 00:12:20.628 "data_offset": 0, 00:12:20.628 "data_size": 65536 00:12:20.628 }, 00:12:20.628 { 00:12:20.628 "name": "BaseBdev3", 00:12:20.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.628 "is_configured": false, 00:12:20.628 "data_offset": 0, 00:12:20.628 "data_size": 0 00:12:20.628 } 00:12:20.628 ] 00:12:20.628 }' 00:12:20.628 21:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:20.628 21:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:21.196 [2024-05-14 21:54:21.742960] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.196 [2024-05-14 21:54:21.742991] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e174300 00:12:21.196 [2024-05-14 21:54:21.742995] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:21.196 [2024-05-14 21:54:21.743017] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e1d2ec0 00:12:21.196 [2024-05-14 21:54:21.743116] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e174300 00:12:21.196 [2024-05-14 21:54:21.743121] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e174300 00:12:21.196 [2024-05-14 21:54:21.743155] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.196 BaseBdev3 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:21.196 21:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:21.454 21:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:21.712 [ 00:12:21.712 { 00:12:21.712 "name": "BaseBdev3", 00:12:21.712 "aliases": [ 00:12:21.712 "8569fdbc-123c-11ef-8c90-4585f0cfab08" 00:12:21.712 ], 00:12:21.712 "product_name": "Malloc disk", 00:12:21.712 "block_size": 512, 00:12:21.712 "num_blocks": 65536, 00:12:21.712 "uuid": "8569fdbc-123c-11ef-8c90-4585f0cfab08", 00:12:21.712 "assigned_rate_limits": { 00:12:21.712 "rw_ios_per_sec": 0, 00:12:21.712 "rw_mbytes_per_sec": 0, 00:12:21.712 "r_mbytes_per_sec": 0, 00:12:21.712 "w_mbytes_per_sec": 0 00:12:21.712 }, 00:12:21.712 
"claimed": true, 00:12:21.712 "claim_type": "exclusive_write", 00:12:21.712 "zoned": false, 00:12:21.712 "supported_io_types": { 00:12:21.712 "read": true, 00:12:21.712 "write": true, 00:12:21.712 "unmap": true, 00:12:21.712 "write_zeroes": true, 00:12:21.712 "flush": true, 00:12:21.712 "reset": true, 00:12:21.712 "compare": false, 00:12:21.712 "compare_and_write": false, 00:12:21.712 "abort": true, 00:12:21.712 "nvme_admin": false, 00:12:21.712 "nvme_io": false 00:12:21.713 }, 00:12:21.713 "memory_domains": [ 00:12:21.713 { 00:12:21.713 "dma_device_id": "system", 00:12:21.713 "dma_device_type": 1 00:12:21.713 }, 00:12:21.713 { 00:12:21.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.713 "dma_device_type": 2 00:12:21.713 } 00:12:21.713 ], 00:12:21.713 "driver_specific": {} 00:12:21.713 } 00:12:21.713 ] 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.713 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.971 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:21.971 "name": "Existed_Raid", 00:12:21.971 "uuid": "856a03f9-123c-11ef-8c90-4585f0cfab08", 00:12:21.971 "strip_size_kb": 0, 00:12:21.971 "state": "online", 00:12:21.971 "raid_level": "raid1", 00:12:21.971 "superblock": false, 00:12:21.971 "num_base_bdevs": 3, 00:12:21.971 "num_base_bdevs_discovered": 3, 00:12:21.971 "num_base_bdevs_operational": 3, 00:12:21.971 "base_bdevs_list": [ 00:12:21.971 { 00:12:21.971 "name": "BaseBdev1", 00:12:21.971 "uuid": "83245379-123c-11ef-8c90-4585f0cfab08", 00:12:21.971 "is_configured": true, 00:12:21.971 "data_offset": 0, 00:12:21.971 "data_size": 65536 00:12:21.971 }, 00:12:21.971 { 00:12:21.971 "name": "BaseBdev2", 00:12:21.971 "uuid": "84a12f58-123c-11ef-8c90-4585f0cfab08", 00:12:21.971 "is_configured": true, 00:12:21.971 "data_offset": 0, 00:12:21.971 "data_size": 65536 00:12:21.971 }, 
00:12:21.971 { 00:12:21.971 "name": "BaseBdev3", 00:12:21.971 "uuid": "8569fdbc-123c-11ef-8c90-4585f0cfab08", 00:12:21.971 "is_configured": true, 00:12:21.971 "data_offset": 0, 00:12:21.971 "data_size": 65536 00:12:21.971 } 00:12:21.971 ] 00:12:21.971 }' 00:12:21.971 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:21.972 21:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:22.230 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:22.489 [2024-05-14 21:54:22.962873] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.489 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:22.489 "name": "Existed_Raid", 00:12:22.489 "aliases": [ 00:12:22.489 "856a03f9-123c-11ef-8c90-4585f0cfab08" 00:12:22.489 ], 00:12:22.489 "product_name": "Raid Volume", 00:12:22.489 "block_size": 512, 00:12:22.489 "num_blocks": 65536, 00:12:22.489 "uuid": "856a03f9-123c-11ef-8c90-4585f0cfab08", 00:12:22.489 "assigned_rate_limits": { 00:12:22.489 "rw_ios_per_sec": 0, 00:12:22.489 "rw_mbytes_per_sec": 0, 00:12:22.489 "r_mbytes_per_sec": 0, 00:12:22.489 "w_mbytes_per_sec": 0 00:12:22.489 }, 00:12:22.489 "claimed": false, 00:12:22.489 "zoned": false, 00:12:22.489 "supported_io_types": { 00:12:22.489 "read": true, 00:12:22.489 "write": true, 00:12:22.489 "unmap": false, 00:12:22.489 "write_zeroes": true, 00:12:22.489 "flush": false, 00:12:22.489 "reset": true, 00:12:22.489 "compare": false, 00:12:22.489 "compare_and_write": false, 00:12:22.489 "abort": false, 00:12:22.489 "nvme_admin": false, 00:12:22.489 "nvme_io": false 00:12:22.489 }, 00:12:22.489 "memory_domains": [ 00:12:22.489 { 00:12:22.489 "dma_device_id": "system", 00:12:22.489 "dma_device_type": 1 00:12:22.489 }, 00:12:22.489 { 00:12:22.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.489 "dma_device_type": 2 00:12:22.489 }, 00:12:22.489 { 00:12:22.489 "dma_device_id": "system", 00:12:22.489 "dma_device_type": 1 00:12:22.489 }, 00:12:22.489 { 00:12:22.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.489 "dma_device_type": 2 00:12:22.489 }, 00:12:22.489 { 00:12:22.489 "dma_device_id": "system", 00:12:22.489 "dma_device_type": 1 00:12:22.489 }, 00:12:22.489 { 00:12:22.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.489 "dma_device_type": 2 00:12:22.489 } 00:12:22.489 ], 00:12:22.489 "driver_specific": { 00:12:22.489 "raid": { 00:12:22.489 "uuid": "856a03f9-123c-11ef-8c90-4585f0cfab08", 00:12:22.489 "strip_size_kb": 0, 00:12:22.489 "state": "online", 00:12:22.489 "raid_level": "raid1", 00:12:22.489 
"superblock": false, 00:12:22.489 "num_base_bdevs": 3, 00:12:22.489 "num_base_bdevs_discovered": 3, 00:12:22.489 "num_base_bdevs_operational": 3, 00:12:22.489 "base_bdevs_list": [ 00:12:22.489 { 00:12:22.489 "name": "BaseBdev1", 00:12:22.489 "uuid": "83245379-123c-11ef-8c90-4585f0cfab08", 00:12:22.489 "is_configured": true, 00:12:22.489 "data_offset": 0, 00:12:22.489 "data_size": 65536 00:12:22.489 }, 00:12:22.489 { 00:12:22.489 "name": "BaseBdev2", 00:12:22.489 "uuid": "84a12f58-123c-11ef-8c90-4585f0cfab08", 00:12:22.489 "is_configured": true, 00:12:22.489 "data_offset": 0, 00:12:22.489 "data_size": 65536 00:12:22.489 }, 00:12:22.489 { 00:12:22.489 "name": "BaseBdev3", 00:12:22.489 "uuid": "8569fdbc-123c-11ef-8c90-4585f0cfab08", 00:12:22.489 "is_configured": true, 00:12:22.489 "data_offset": 0, 00:12:22.489 "data_size": 65536 00:12:22.489 } 00:12:22.489 ] 00:12:22.489 } 00:12:22.489 } 00:12:22.489 }' 00:12:22.489 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.489 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:22.489 BaseBdev2 00:12:22.489 BaseBdev3' 00:12:22.489 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:22.489 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:22.489 21:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:22.748 "name": "BaseBdev1", 00:12:22.748 "aliases": [ 00:12:22.748 "83245379-123c-11ef-8c90-4585f0cfab08" 00:12:22.748 ], 00:12:22.748 "product_name": "Malloc disk", 00:12:22.748 "block_size": 512, 00:12:22.748 "num_blocks": 65536, 00:12:22.748 "uuid": "83245379-123c-11ef-8c90-4585f0cfab08", 00:12:22.748 "assigned_rate_limits": { 00:12:22.748 "rw_ios_per_sec": 0, 00:12:22.748 "rw_mbytes_per_sec": 0, 00:12:22.748 "r_mbytes_per_sec": 0, 00:12:22.748 "w_mbytes_per_sec": 0 00:12:22.748 }, 00:12:22.748 "claimed": true, 00:12:22.748 "claim_type": "exclusive_write", 00:12:22.748 "zoned": false, 00:12:22.748 "supported_io_types": { 00:12:22.748 "read": true, 00:12:22.748 "write": true, 00:12:22.748 "unmap": true, 00:12:22.748 "write_zeroes": true, 00:12:22.748 "flush": true, 00:12:22.748 "reset": true, 00:12:22.748 "compare": false, 00:12:22.748 "compare_and_write": false, 00:12:22.748 "abort": true, 00:12:22.748 "nvme_admin": false, 00:12:22.748 "nvme_io": false 00:12:22.748 }, 00:12:22.748 "memory_domains": [ 00:12:22.748 { 00:12:22.748 "dma_device_id": "system", 00:12:22.748 "dma_device_type": 1 00:12:22.748 }, 00:12:22.748 { 00:12:22.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.748 "dma_device_type": 2 00:12:22.748 } 00:12:22.748 ], 00:12:22.748 "driver_specific": {} 00:12:22.748 }' 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:22.748 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:23.318 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:23.318 "name": "BaseBdev2", 00:12:23.318 "aliases": [ 00:12:23.318 "84a12f58-123c-11ef-8c90-4585f0cfab08" 00:12:23.318 ], 00:12:23.318 "product_name": "Malloc disk", 00:12:23.318 "block_size": 512, 00:12:23.318 "num_blocks": 65536, 00:12:23.319 "uuid": "84a12f58-123c-11ef-8c90-4585f0cfab08", 00:12:23.319 "assigned_rate_limits": { 00:12:23.319 "rw_ios_per_sec": 0, 00:12:23.319 "rw_mbytes_per_sec": 0, 00:12:23.319 "r_mbytes_per_sec": 0, 00:12:23.319 "w_mbytes_per_sec": 0 00:12:23.319 }, 00:12:23.319 "claimed": true, 00:12:23.319 "claim_type": "exclusive_write", 00:12:23.319 "zoned": false, 00:12:23.319 "supported_io_types": { 00:12:23.319 "read": true, 00:12:23.319 "write": true, 00:12:23.319 "unmap": true, 00:12:23.319 "write_zeroes": true, 00:12:23.319 "flush": true, 00:12:23.319 "reset": true, 00:12:23.319 "compare": false, 00:12:23.319 "compare_and_write": false, 00:12:23.319 "abort": true, 00:12:23.319 "nvme_admin": false, 00:12:23.319 "nvme_io": false 00:12:23.319 }, 00:12:23.319 "memory_domains": [ 00:12:23.319 { 00:12:23.319 "dma_device_id": "system", 00:12:23.319 "dma_device_type": 1 00:12:23.319 }, 00:12:23.319 { 00:12:23.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.319 "dma_device_type": 2 00:12:23.319 } 00:12:23.319 ], 00:12:23.319 "driver_specific": {} 00:12:23.319 }' 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:23.319 21:54:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:23.319 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:23.577 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:23.577 "name": "BaseBdev3", 00:12:23.577 "aliases": [ 00:12:23.577 "8569fdbc-123c-11ef-8c90-4585f0cfab08" 00:12:23.577 ], 00:12:23.577 "product_name": "Malloc disk", 00:12:23.577 "block_size": 512, 00:12:23.577 "num_blocks": 65536, 00:12:23.577 "uuid": "8569fdbc-123c-11ef-8c90-4585f0cfab08", 00:12:23.578 "assigned_rate_limits": { 00:12:23.578 "rw_ios_per_sec": 0, 00:12:23.578 "rw_mbytes_per_sec": 0, 00:12:23.578 "r_mbytes_per_sec": 0, 00:12:23.578 "w_mbytes_per_sec": 0 00:12:23.578 }, 00:12:23.578 "claimed": true, 00:12:23.578 "claim_type": "exclusive_write", 00:12:23.578 "zoned": false, 00:12:23.578 "supported_io_types": { 00:12:23.578 "read": true, 00:12:23.578 "write": true, 00:12:23.578 "unmap": true, 00:12:23.578 "write_zeroes": true, 00:12:23.578 "flush": true, 00:12:23.578 "reset": true, 00:12:23.578 "compare": false, 00:12:23.578 "compare_and_write": false, 00:12:23.578 "abort": true, 00:12:23.578 "nvme_admin": false, 00:12:23.578 "nvme_io": false 00:12:23.578 }, 00:12:23.578 "memory_domains": [ 00:12:23.578 { 00:12:23.578 "dma_device_id": "system", 00:12:23.578 "dma_device_type": 1 00:12:23.578 }, 00:12:23.578 { 00:12:23.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.578 "dma_device_type": 2 00:12:23.578 } 00:12:23.578 ], 00:12:23.578 "driver_specific": {} 00:12:23.578 }' 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:23.578 21:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:23.836 
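The test now deletes BaseBdev1 out from under the online array. Since raid1 is redundant (has_redundancy returns 0 for it), the expected outcome, confirmed by the log lines that follow, is that Existed_Raid stays online with only two of its three members discovered. A condensed, illustrative form of that check, assuming the same rpc.py socket as above:

rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Losing one member of a redundant (raid1) array must not take it offline.
$rpc bdev_malloc_delete BaseBdev1

info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$info") == "online" ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 2 ]]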
[2024-05-14 21:54:24.254892] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.836 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.094 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:24.094 "name": "Existed_Raid", 00:12:24.094 "uuid": "856a03f9-123c-11ef-8c90-4585f0cfab08", 00:12:24.094 "strip_size_kb": 0, 00:12:24.094 "state": "online", 00:12:24.094 "raid_level": "raid1", 00:12:24.094 "superblock": false, 00:12:24.094 "num_base_bdevs": 3, 00:12:24.094 "num_base_bdevs_discovered": 2, 00:12:24.094 "num_base_bdevs_operational": 2, 00:12:24.094 "base_bdevs_list": [ 00:12:24.094 { 00:12:24.094 "name": null, 00:12:24.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.094 "is_configured": false, 00:12:24.094 "data_offset": 0, 00:12:24.094 "data_size": 65536 00:12:24.094 }, 00:12:24.094 { 00:12:24.094 "name": "BaseBdev2", 00:12:24.094 "uuid": "84a12f58-123c-11ef-8c90-4585f0cfab08", 00:12:24.094 "is_configured": true, 00:12:24.094 "data_offset": 0, 00:12:24.094 "data_size": 65536 00:12:24.094 }, 00:12:24.094 { 00:12:24.094 "name": "BaseBdev3", 00:12:24.094 "uuid": "8569fdbc-123c-11ef-8c90-4585f0cfab08", 00:12:24.094 "is_configured": true, 00:12:24.094 "data_offset": 0, 00:12:24.094 "data_size": 65536 00:12:24.094 } 00:12:24.094 ] 00:12:24.094 }' 00:12:24.094 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:24.094 21:54:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.352 21:54:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:24.352 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.352 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.352 21:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:24.611 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:24.611 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.611 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:24.871 [2024-05-14 21:54:25.389570] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.871 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.871 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.871 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.871 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:25.129 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:25.129 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:25.129 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:25.388 [2024-05-14 21:54:25.899418] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.388 [2024-05-14 21:54:25.899458] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.388 [2024-05-14 21:54:25.905340] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.388 [2024-05-14 21:54:25.905383] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.388 [2024-05-14 21:54:25.905389] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e174300 name Existed_Raid, state offline 00:12:25.388 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:25.388 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:25.388 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.388 21:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:25.647 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:25.647 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:25.647 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:12:25.647 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:12:25.647 21:54:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:25.647 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:25.905 BaseBdev2 00:12:25.905 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:12:25.905 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:25.905 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:25.905 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:25.905 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:25.905 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:25.905 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:26.164 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:26.423 [ 00:12:26.423 { 00:12:26.423 "name": "BaseBdev2", 00:12:26.423 "aliases": [ 00:12:26.423 "882b5571-123c-11ef-8c90-4585f0cfab08" 00:12:26.423 ], 00:12:26.423 "product_name": "Malloc disk", 00:12:26.423 "block_size": 512, 00:12:26.423 "num_blocks": 65536, 00:12:26.423 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:26.423 "assigned_rate_limits": { 00:12:26.423 "rw_ios_per_sec": 0, 00:12:26.423 "rw_mbytes_per_sec": 0, 00:12:26.423 "r_mbytes_per_sec": 0, 00:12:26.423 "w_mbytes_per_sec": 0 00:12:26.423 }, 00:12:26.423 "claimed": false, 00:12:26.423 "zoned": false, 00:12:26.423 "supported_io_types": { 00:12:26.423 "read": true, 00:12:26.423 "write": true, 00:12:26.423 "unmap": true, 00:12:26.423 "write_zeroes": true, 00:12:26.423 "flush": true, 00:12:26.423 "reset": true, 00:12:26.423 "compare": false, 00:12:26.423 "compare_and_write": false, 00:12:26.423 "abort": true, 00:12:26.423 "nvme_admin": false, 00:12:26.423 "nvme_io": false 00:12:26.423 }, 00:12:26.423 "memory_domains": [ 00:12:26.423 { 00:12:26.423 "dma_device_id": "system", 00:12:26.423 "dma_device_type": 1 00:12:26.423 }, 00:12:26.423 { 00:12:26.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.423 "dma_device_type": 2 00:12:26.423 } 00:12:26.423 ], 00:12:26.423 "driver_specific": {} 00:12:26.423 } 00:12:26.423 ] 00:12:26.423 21:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:26.423 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:12:26.423 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:26.423 21:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:26.681 BaseBdev3 00:12:26.681 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:12:26.681 21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:26.681 21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:26.681 
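Both malloc bdevs are recreated here with bdev_malloc_create 32 512 -b <name>, and the test blocks in waitforbdev until each one is actually registered. A simplified sketch of what that helper does, judging only from the autotest_common.sh calls traced in this log (the real helper is more general and parameterizes the rpc command; the 2000 ms default and the -t flag are taken from the trace):

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # the trace falls back to 2000 ms
    local rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Let pending examine callbacks finish, then fetch the bdev with a timeout
    # so the RPC itself waits for it to show up.
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
}

waitforbdev BaseBdev2
waitforbdev BaseBdev3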
21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:26.681 21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:26.681 21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:26.681 21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:26.940 21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:27.199 [ 00:12:27.199 { 00:12:27.199 "name": "BaseBdev3", 00:12:27.199 "aliases": [ 00:12:27.199 "88a7d7fe-123c-11ef-8c90-4585f0cfab08" 00:12:27.199 ], 00:12:27.199 "product_name": "Malloc disk", 00:12:27.199 "block_size": 512, 00:12:27.199 "num_blocks": 65536, 00:12:27.199 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:27.199 "assigned_rate_limits": { 00:12:27.199 "rw_ios_per_sec": 0, 00:12:27.199 "rw_mbytes_per_sec": 0, 00:12:27.199 "r_mbytes_per_sec": 0, 00:12:27.199 "w_mbytes_per_sec": 0 00:12:27.199 }, 00:12:27.199 "claimed": false, 00:12:27.199 "zoned": false, 00:12:27.199 "supported_io_types": { 00:12:27.199 "read": true, 00:12:27.199 "write": true, 00:12:27.199 "unmap": true, 00:12:27.199 "write_zeroes": true, 00:12:27.199 "flush": true, 00:12:27.199 "reset": true, 00:12:27.199 "compare": false, 00:12:27.199 "compare_and_write": false, 00:12:27.199 "abort": true, 00:12:27.199 "nvme_admin": false, 00:12:27.199 "nvme_io": false 00:12:27.199 }, 00:12:27.199 "memory_domains": [ 00:12:27.199 { 00:12:27.199 "dma_device_id": "system", 00:12:27.199 "dma_device_type": 1 00:12:27.199 }, 00:12:27.199 { 00:12:27.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.199 "dma_device_type": 2 00:12:27.199 } 00:12:27.199 ], 00:12:27.199 "driver_specific": {} 00:12:27.199 } 00:12:27.199 ] 00:12:27.199 21:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:27.199 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:12:27.199 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:27.199 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:27.457 [2024-05-14 21:54:27.969370] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.457 [2024-05-14 21:54:27.969442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.457 [2024-05-14 21:54:27.969453] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.457 [2024-05-14 21:54:27.970033] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 
-- # local raid_level=raid1 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.457 21:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.715 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:27.715 "name": "Existed_Raid", 00:12:27.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.715 "strip_size_kb": 0, 00:12:27.715 "state": "configuring", 00:12:27.715 "raid_level": "raid1", 00:12:27.715 "superblock": false, 00:12:27.715 "num_base_bdevs": 3, 00:12:27.715 "num_base_bdevs_discovered": 2, 00:12:27.715 "num_base_bdevs_operational": 3, 00:12:27.715 "base_bdevs_list": [ 00:12:27.715 { 00:12:27.715 "name": "BaseBdev1", 00:12:27.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.715 "is_configured": false, 00:12:27.715 "data_offset": 0, 00:12:27.715 "data_size": 0 00:12:27.715 }, 00:12:27.715 { 00:12:27.715 "name": "BaseBdev2", 00:12:27.715 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:27.715 "is_configured": true, 00:12:27.715 "data_offset": 0, 00:12:27.715 "data_size": 65536 00:12:27.715 }, 00:12:27.715 { 00:12:27.715 "name": "BaseBdev3", 00:12:27.715 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:27.715 "is_configured": true, 00:12:27.715 "data_offset": 0, 00:12:27.715 "data_size": 65536 00:12:27.715 } 00:12:27.715 ] 00:12:27.715 }' 00:12:27.715 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:27.715 21:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:28.281 [2024-05-14 21:54:28.833389] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
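With only BaseBdev2 and BaseBdev3 registered the array cannot leave "configuring", and the test then knocks BaseBdev2 back out with bdev_raid_remove_base_bdev, expecting the slot to be kept but left unconfigured. A rough sketch of that assertion, reusing the rpc.py socket and the field names printed in the raid_bdev_info dumps (index 1 is BaseBdev2's slot in those dumps):

rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_remove_base_bdev BaseBdev2

# The array still wants 3 operational members, but the removed member's slot
# must now report is_configured == false.
$rpc bdev_raid_get_bdevs all \
  | jq -e '.[] | select(.name == "Existed_Raid")
           | .num_base_bdevs_operational == 3 and .base_bdevs_list[1].is_configured == false'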
00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.281 21:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.564 21:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:28.564 "name": "Existed_Raid", 00:12:28.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.564 "strip_size_kb": 0, 00:12:28.564 "state": "configuring", 00:12:28.564 "raid_level": "raid1", 00:12:28.564 "superblock": false, 00:12:28.564 "num_base_bdevs": 3, 00:12:28.564 "num_base_bdevs_discovered": 1, 00:12:28.564 "num_base_bdevs_operational": 3, 00:12:28.564 "base_bdevs_list": [ 00:12:28.564 { 00:12:28.564 "name": "BaseBdev1", 00:12:28.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.564 "is_configured": false, 00:12:28.564 "data_offset": 0, 00:12:28.564 "data_size": 0 00:12:28.564 }, 00:12:28.564 { 00:12:28.564 "name": null, 00:12:28.564 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:28.564 "is_configured": false, 00:12:28.564 "data_offset": 0, 00:12:28.564 "data_size": 65536 00:12:28.564 }, 00:12:28.564 { 00:12:28.564 "name": "BaseBdev3", 00:12:28.564 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:28.564 "is_configured": true, 00:12:28.564 "data_offset": 0, 00:12:28.564 "data_size": 65536 00:12:28.564 } 00:12:28.564 ] 00:12:28.564 }' 00:12:28.564 21:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:28.564 21:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.844 21:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.844 21:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:29.102 21:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:12:29.102 21:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:29.361 [2024-05-14 21:54:29.853629] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.361 BaseBdev1 00:12:29.361 21:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:12:29.361 21:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:29.361 21:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:29.361 21:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:29.361 21:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:29.361 21:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:29.361 21:54:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:29.620 21:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.879 [ 00:12:29.879 { 00:12:29.879 "name": "BaseBdev1", 00:12:29.879 "aliases": [ 00:12:29.879 "8a3f946d-123c-11ef-8c90-4585f0cfab08" 00:12:29.879 ], 00:12:29.879 "product_name": "Malloc disk", 00:12:29.879 "block_size": 512, 00:12:29.879 "num_blocks": 65536, 00:12:29.879 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:29.879 "assigned_rate_limits": { 00:12:29.879 "rw_ios_per_sec": 0, 00:12:29.879 "rw_mbytes_per_sec": 0, 00:12:29.879 "r_mbytes_per_sec": 0, 00:12:29.879 "w_mbytes_per_sec": 0 00:12:29.879 }, 00:12:29.879 "claimed": true, 00:12:29.879 "claim_type": "exclusive_write", 00:12:29.879 "zoned": false, 00:12:29.879 "supported_io_types": { 00:12:29.879 "read": true, 00:12:29.879 "write": true, 00:12:29.879 "unmap": true, 00:12:29.879 "write_zeroes": true, 00:12:29.879 "flush": true, 00:12:29.879 "reset": true, 00:12:29.879 "compare": false, 00:12:29.879 "compare_and_write": false, 00:12:29.879 "abort": true, 00:12:29.879 "nvme_admin": false, 00:12:29.879 "nvme_io": false 00:12:29.879 }, 00:12:29.879 "memory_domains": [ 00:12:29.879 { 00:12:29.879 "dma_device_id": "system", 00:12:29.879 "dma_device_type": 1 00:12:29.879 }, 00:12:29.879 { 00:12:29.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.879 "dma_device_type": 2 00:12:29.879 } 00:12:29.879 ], 00:12:29.879 "driver_specific": {} 00:12:29.879 } 00:12:29.879 ] 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.879 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.138 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:30.138 "name": "Existed_Raid", 00:12:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.138 "strip_size_kb": 0, 00:12:30.138 "state": "configuring", 
00:12:30.138 "raid_level": "raid1", 00:12:30.138 "superblock": false, 00:12:30.138 "num_base_bdevs": 3, 00:12:30.138 "num_base_bdevs_discovered": 2, 00:12:30.138 "num_base_bdevs_operational": 3, 00:12:30.138 "base_bdevs_list": [ 00:12:30.138 { 00:12:30.138 "name": "BaseBdev1", 00:12:30.138 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:30.138 "is_configured": true, 00:12:30.138 "data_offset": 0, 00:12:30.138 "data_size": 65536 00:12:30.138 }, 00:12:30.138 { 00:12:30.138 "name": null, 00:12:30.138 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:30.138 "is_configured": false, 00:12:30.138 "data_offset": 0, 00:12:30.138 "data_size": 65536 00:12:30.138 }, 00:12:30.138 { 00:12:30.138 "name": "BaseBdev3", 00:12:30.138 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:30.138 "is_configured": true, 00:12:30.138 "data_offset": 0, 00:12:30.138 "data_size": 65536 00:12:30.138 } 00:12:30.138 ] 00:12:30.138 }' 00:12:30.138 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:30.138 21:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.706 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.706 21:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:30.706 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:30.706 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:30.964 [2024-05-14 21:54:31.517502] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.964 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.530 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:31.530 "name": "Existed_Raid", 00:12:31.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.530 
"strip_size_kb": 0, 00:12:31.530 "state": "configuring", 00:12:31.530 "raid_level": "raid1", 00:12:31.530 "superblock": false, 00:12:31.530 "num_base_bdevs": 3, 00:12:31.530 "num_base_bdevs_discovered": 1, 00:12:31.530 "num_base_bdevs_operational": 3, 00:12:31.530 "base_bdevs_list": [ 00:12:31.530 { 00:12:31.530 "name": "BaseBdev1", 00:12:31.530 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:31.530 "is_configured": true, 00:12:31.530 "data_offset": 0, 00:12:31.530 "data_size": 65536 00:12:31.530 }, 00:12:31.530 { 00:12:31.530 "name": null, 00:12:31.530 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:31.530 "is_configured": false, 00:12:31.530 "data_offset": 0, 00:12:31.530 "data_size": 65536 00:12:31.530 }, 00:12:31.530 { 00:12:31.530 "name": null, 00:12:31.530 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:31.530 "is_configured": false, 00:12:31.530 "data_offset": 0, 00:12:31.530 "data_size": 65536 00:12:31.530 } 00:12:31.530 ] 00:12:31.530 }' 00:12:31.530 21:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:31.530 21:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.788 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.788 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:32.046 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:12:32.046 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:32.305 [2024-05-14 21:54:32.649527] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.305 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.564 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:32.564 "name": 
"Existed_Raid", 00:12:32.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.564 "strip_size_kb": 0, 00:12:32.564 "state": "configuring", 00:12:32.564 "raid_level": "raid1", 00:12:32.564 "superblock": false, 00:12:32.564 "num_base_bdevs": 3, 00:12:32.564 "num_base_bdevs_discovered": 2, 00:12:32.564 "num_base_bdevs_operational": 3, 00:12:32.564 "base_bdevs_list": [ 00:12:32.564 { 00:12:32.564 "name": "BaseBdev1", 00:12:32.564 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:32.564 "is_configured": true, 00:12:32.564 "data_offset": 0, 00:12:32.564 "data_size": 65536 00:12:32.564 }, 00:12:32.564 { 00:12:32.564 "name": null, 00:12:32.564 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:32.564 "is_configured": false, 00:12:32.564 "data_offset": 0, 00:12:32.564 "data_size": 65536 00:12:32.564 }, 00:12:32.564 { 00:12:32.564 "name": "BaseBdev3", 00:12:32.564 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:32.564 "is_configured": true, 00:12:32.564 "data_offset": 0, 00:12:32.564 "data_size": 65536 00:12:32.564 } 00:12:32.564 ] 00:12:32.564 }' 00:12:32.564 21:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:32.564 21:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.823 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:33.080 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:12:33.080 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:33.338 [2024-05-14 21:54:33.757549] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.338 21:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.596 21:54:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:33.596 "name": "Existed_Raid", 00:12:33.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.596 "strip_size_kb": 0, 00:12:33.596 "state": "configuring", 00:12:33.596 "raid_level": "raid1", 00:12:33.596 "superblock": false, 00:12:33.596 "num_base_bdevs": 3, 00:12:33.596 "num_base_bdevs_discovered": 1, 00:12:33.596 "num_base_bdevs_operational": 3, 00:12:33.596 "base_bdevs_list": [ 00:12:33.596 { 00:12:33.596 "name": null, 00:12:33.596 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:33.596 "is_configured": false, 00:12:33.596 "data_offset": 0, 00:12:33.596 "data_size": 65536 00:12:33.596 }, 00:12:33.596 { 00:12:33.596 "name": null, 00:12:33.596 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:33.596 "is_configured": false, 00:12:33.596 "data_offset": 0, 00:12:33.596 "data_size": 65536 00:12:33.596 }, 00:12:33.596 { 00:12:33.596 "name": "BaseBdev3", 00:12:33.596 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:33.596 "is_configured": true, 00:12:33.596 "data_offset": 0, 00:12:33.596 "data_size": 65536 00:12:33.596 } 00:12:33.596 ] 00:12:33.596 }' 00:12:33.596 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:33.596 21:54:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.855 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.855 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:34.113 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:12:34.113 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:34.372 [2024-05-14 21:54:34.767459] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.372 21:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:34.630 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:34.630 "name": "Existed_Raid", 00:12:34.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.630 "strip_size_kb": 0, 00:12:34.630 "state": "configuring", 00:12:34.630 "raid_level": "raid1", 00:12:34.630 "superblock": false, 00:12:34.630 "num_base_bdevs": 3, 00:12:34.630 "num_base_bdevs_discovered": 2, 00:12:34.630 "num_base_bdevs_operational": 3, 00:12:34.630 "base_bdevs_list": [ 00:12:34.630 { 00:12:34.630 "name": null, 00:12:34.630 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:34.630 "is_configured": false, 00:12:34.630 "data_offset": 0, 00:12:34.630 "data_size": 65536 00:12:34.630 }, 00:12:34.630 { 00:12:34.630 "name": "BaseBdev2", 00:12:34.630 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:34.630 "is_configured": true, 00:12:34.630 "data_offset": 0, 00:12:34.630 "data_size": 65536 00:12:34.630 }, 00:12:34.630 { 00:12:34.630 "name": "BaseBdev3", 00:12:34.630 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:34.630 "is_configured": true, 00:12:34.630 "data_offset": 0, 00:12:34.630 "data_size": 65536 00:12:34.630 } 00:12:34.630 ] 00:12:34.630 }' 00:12:34.630 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:34.630 21:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.889 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:34.889 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.147 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:12:35.147 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.147 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:35.406 21:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8a3f946d-123c-11ef-8c90-4585f0cfab08 00:12:35.678 [2024-05-14 21:54:36.091601] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:35.678 [2024-05-14 21:54:36.091634] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e174300 00:12:35.678 [2024-05-14 21:54:36.091639] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:35.678 [2024-05-14 21:54:36.091671] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e1d2e20 00:12:35.678 [2024-05-14 21:54:36.091753] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e174300 00:12:35.678 [2024-05-14 21:54:36.091758] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e174300 00:12:35.678 [2024-05-14 21:54:36.091794] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.678 NewBaseBdev 00:12:35.678 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:12:35.678 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local 
bdev_name=NewBaseBdev 00:12:35.678 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:35.678 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:35.678 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:35.678 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:35.678 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.966 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:36.226 [ 00:12:36.226 { 00:12:36.226 "name": "NewBaseBdev", 00:12:36.226 "aliases": [ 00:12:36.226 "8a3f946d-123c-11ef-8c90-4585f0cfab08" 00:12:36.226 ], 00:12:36.226 "product_name": "Malloc disk", 00:12:36.226 "block_size": 512, 00:12:36.226 "num_blocks": 65536, 00:12:36.226 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:36.226 "assigned_rate_limits": { 00:12:36.226 "rw_ios_per_sec": 0, 00:12:36.226 "rw_mbytes_per_sec": 0, 00:12:36.226 "r_mbytes_per_sec": 0, 00:12:36.226 "w_mbytes_per_sec": 0 00:12:36.226 }, 00:12:36.226 "claimed": true, 00:12:36.226 "claim_type": "exclusive_write", 00:12:36.226 "zoned": false, 00:12:36.226 "supported_io_types": { 00:12:36.226 "read": true, 00:12:36.226 "write": true, 00:12:36.226 "unmap": true, 00:12:36.226 "write_zeroes": true, 00:12:36.226 "flush": true, 00:12:36.226 "reset": true, 00:12:36.226 "compare": false, 00:12:36.226 "compare_and_write": false, 00:12:36.226 "abort": true, 00:12:36.226 "nvme_admin": false, 00:12:36.226 "nvme_io": false 00:12:36.226 }, 00:12:36.226 "memory_domains": [ 00:12:36.226 { 00:12:36.226 "dma_device_id": "system", 00:12:36.226 "dma_device_type": 1 00:12:36.226 }, 00:12:36.226 { 00:12:36.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.226 "dma_device_type": 2 00:12:36.226 } 00:12:36.226 ], 00:12:36.226 "driver_specific": {} 00:12:36.226 } 00:12:36.226 ] 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.226 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.486 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:36.486 "name": "Existed_Raid", 00:12:36.486 "uuid": "8df771f1-123c-11ef-8c90-4585f0cfab08", 00:12:36.486 "strip_size_kb": 0, 00:12:36.486 "state": "online", 00:12:36.486 "raid_level": "raid1", 00:12:36.486 "superblock": false, 00:12:36.486 "num_base_bdevs": 3, 00:12:36.486 "num_base_bdevs_discovered": 3, 00:12:36.486 "num_base_bdevs_operational": 3, 00:12:36.486 "base_bdevs_list": [ 00:12:36.486 { 00:12:36.486 "name": "NewBaseBdev", 00:12:36.486 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:36.486 "is_configured": true, 00:12:36.486 "data_offset": 0, 00:12:36.486 "data_size": 65536 00:12:36.486 }, 00:12:36.486 { 00:12:36.486 "name": "BaseBdev2", 00:12:36.486 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:36.486 "is_configured": true, 00:12:36.486 "data_offset": 0, 00:12:36.486 "data_size": 65536 00:12:36.486 }, 00:12:36.486 { 00:12:36.486 "name": "BaseBdev3", 00:12:36.486 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:36.486 "is_configured": true, 00:12:36.486 "data_offset": 0, 00:12:36.486 "data_size": 65536 00:12:36.486 } 00:12:36.486 ] 00:12:36.486 }' 00:12:36.486 21:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:36.486 21:54:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:36.745 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:37.004 [2024-05-14 21:54:37.351510] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.004 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:37.004 "name": "Existed_Raid", 00:12:37.004 "aliases": [ 00:12:37.004 "8df771f1-123c-11ef-8c90-4585f0cfab08" 00:12:37.004 ], 00:12:37.004 "product_name": "Raid Volume", 00:12:37.004 "block_size": 512, 00:12:37.004 "num_blocks": 65536, 00:12:37.004 "uuid": "8df771f1-123c-11ef-8c90-4585f0cfab08", 00:12:37.004 "assigned_rate_limits": { 00:12:37.004 "rw_ios_per_sec": 0, 00:12:37.004 "rw_mbytes_per_sec": 0, 00:12:37.004 "r_mbytes_per_sec": 0, 00:12:37.004 "w_mbytes_per_sec": 0 00:12:37.004 }, 00:12:37.004 "claimed": false, 00:12:37.004 "zoned": false, 00:12:37.004 "supported_io_types": { 00:12:37.004 "read": true, 00:12:37.004 "write": true, 00:12:37.004 "unmap": false, 00:12:37.004 "write_zeroes": true, 
00:12:37.004 "flush": false, 00:12:37.004 "reset": true, 00:12:37.004 "compare": false, 00:12:37.004 "compare_and_write": false, 00:12:37.004 "abort": false, 00:12:37.004 "nvme_admin": false, 00:12:37.004 "nvme_io": false 00:12:37.004 }, 00:12:37.004 "memory_domains": [ 00:12:37.004 { 00:12:37.004 "dma_device_id": "system", 00:12:37.004 "dma_device_type": 1 00:12:37.004 }, 00:12:37.004 { 00:12:37.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.004 "dma_device_type": 2 00:12:37.004 }, 00:12:37.004 { 00:12:37.004 "dma_device_id": "system", 00:12:37.004 "dma_device_type": 1 00:12:37.004 }, 00:12:37.004 { 00:12:37.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.004 "dma_device_type": 2 00:12:37.004 }, 00:12:37.004 { 00:12:37.004 "dma_device_id": "system", 00:12:37.004 "dma_device_type": 1 00:12:37.004 }, 00:12:37.004 { 00:12:37.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.004 "dma_device_type": 2 00:12:37.004 } 00:12:37.004 ], 00:12:37.004 "driver_specific": { 00:12:37.004 "raid": { 00:12:37.004 "uuid": "8df771f1-123c-11ef-8c90-4585f0cfab08", 00:12:37.004 "strip_size_kb": 0, 00:12:37.004 "state": "online", 00:12:37.004 "raid_level": "raid1", 00:12:37.004 "superblock": false, 00:12:37.004 "num_base_bdevs": 3, 00:12:37.004 "num_base_bdevs_discovered": 3, 00:12:37.004 "num_base_bdevs_operational": 3, 00:12:37.004 "base_bdevs_list": [ 00:12:37.004 { 00:12:37.004 "name": "NewBaseBdev", 00:12:37.004 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:37.004 "is_configured": true, 00:12:37.004 "data_offset": 0, 00:12:37.004 "data_size": 65536 00:12:37.004 }, 00:12:37.004 { 00:12:37.004 "name": "BaseBdev2", 00:12:37.004 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:37.004 "is_configured": true, 00:12:37.004 "data_offset": 0, 00:12:37.004 "data_size": 65536 00:12:37.004 }, 00:12:37.004 { 00:12:37.004 "name": "BaseBdev3", 00:12:37.004 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:37.004 "is_configured": true, 00:12:37.004 "data_offset": 0, 00:12:37.004 "data_size": 65536 00:12:37.004 } 00:12:37.004 ] 00:12:37.004 } 00:12:37.004 } 00:12:37.004 }' 00:12:37.004 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.004 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:12:37.004 BaseBdev2 00:12:37.004 BaseBdev3' 00:12:37.004 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:37.004 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:37.004 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:37.264 "name": "NewBaseBdev", 00:12:37.264 "aliases": [ 00:12:37.264 "8a3f946d-123c-11ef-8c90-4585f0cfab08" 00:12:37.264 ], 00:12:37.264 "product_name": "Malloc disk", 00:12:37.264 "block_size": 512, 00:12:37.264 "num_blocks": 65536, 00:12:37.264 "uuid": "8a3f946d-123c-11ef-8c90-4585f0cfab08", 00:12:37.264 "assigned_rate_limits": { 00:12:37.264 "rw_ios_per_sec": 0, 00:12:37.264 "rw_mbytes_per_sec": 0, 00:12:37.264 "r_mbytes_per_sec": 0, 00:12:37.264 "w_mbytes_per_sec": 0 00:12:37.264 }, 00:12:37.264 "claimed": true, 00:12:37.264 "claim_type": "exclusive_write", 00:12:37.264 "zoned": 
false, 00:12:37.264 "supported_io_types": { 00:12:37.264 "read": true, 00:12:37.264 "write": true, 00:12:37.264 "unmap": true, 00:12:37.264 "write_zeroes": true, 00:12:37.264 "flush": true, 00:12:37.264 "reset": true, 00:12:37.264 "compare": false, 00:12:37.264 "compare_and_write": false, 00:12:37.264 "abort": true, 00:12:37.264 "nvme_admin": false, 00:12:37.264 "nvme_io": false 00:12:37.264 }, 00:12:37.264 "memory_domains": [ 00:12:37.264 { 00:12:37.264 "dma_device_id": "system", 00:12:37.264 "dma_device_type": 1 00:12:37.264 }, 00:12:37.264 { 00:12:37.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.264 "dma_device_type": 2 00:12:37.264 } 00:12:37.264 ], 00:12:37.264 "driver_specific": {} 00:12:37.264 }' 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:37.264 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:37.524 "name": "BaseBdev2", 00:12:37.524 "aliases": [ 00:12:37.524 "882b5571-123c-11ef-8c90-4585f0cfab08" 00:12:37.524 ], 00:12:37.524 "product_name": "Malloc disk", 00:12:37.524 "block_size": 512, 00:12:37.524 "num_blocks": 65536, 00:12:37.524 "uuid": "882b5571-123c-11ef-8c90-4585f0cfab08", 00:12:37.524 "assigned_rate_limits": { 00:12:37.524 "rw_ios_per_sec": 0, 00:12:37.524 "rw_mbytes_per_sec": 0, 00:12:37.524 "r_mbytes_per_sec": 0, 00:12:37.524 "w_mbytes_per_sec": 0 00:12:37.524 }, 00:12:37.524 "claimed": true, 00:12:37.524 "claim_type": "exclusive_write", 00:12:37.524 "zoned": false, 00:12:37.524 "supported_io_types": { 00:12:37.524 "read": true, 00:12:37.524 "write": true, 00:12:37.524 "unmap": true, 00:12:37.524 "write_zeroes": true, 00:12:37.524 "flush": true, 00:12:37.524 "reset": true, 00:12:37.524 "compare": false, 00:12:37.524 "compare_and_write": false, 00:12:37.524 "abort": true, 00:12:37.524 "nvme_admin": false, 00:12:37.524 "nvme_io": false 00:12:37.524 }, 00:12:37.524 "memory_domains": [ 00:12:37.524 { 00:12:37.524 "dma_device_id": "system", 
00:12:37.524 "dma_device_type": 1 00:12:37.524 }, 00:12:37.524 { 00:12:37.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.524 "dma_device_type": 2 00:12:37.524 } 00:12:37.524 ], 00:12:37.524 "driver_specific": {} 00:12:37.524 }' 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:37.524 21:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:37.524 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:37.524 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:37.524 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:37.524 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:37.783 "name": "BaseBdev3", 00:12:37.783 "aliases": [ 00:12:37.783 "88a7d7fe-123c-11ef-8c90-4585f0cfab08" 00:12:37.783 ], 00:12:37.783 "product_name": "Malloc disk", 00:12:37.783 "block_size": 512, 00:12:37.783 "num_blocks": 65536, 00:12:37.783 "uuid": "88a7d7fe-123c-11ef-8c90-4585f0cfab08", 00:12:37.783 "assigned_rate_limits": { 00:12:37.783 "rw_ios_per_sec": 0, 00:12:37.783 "rw_mbytes_per_sec": 0, 00:12:37.783 "r_mbytes_per_sec": 0, 00:12:37.783 "w_mbytes_per_sec": 0 00:12:37.783 }, 00:12:37.783 "claimed": true, 00:12:37.783 "claim_type": "exclusive_write", 00:12:37.783 "zoned": false, 00:12:37.783 "supported_io_types": { 00:12:37.783 "read": true, 00:12:37.783 "write": true, 00:12:37.783 "unmap": true, 00:12:37.783 "write_zeroes": true, 00:12:37.783 "flush": true, 00:12:37.783 "reset": true, 00:12:37.783 "compare": false, 00:12:37.783 "compare_and_write": false, 00:12:37.783 "abort": true, 00:12:37.783 "nvme_admin": false, 00:12:37.783 "nvme_io": false 00:12:37.783 }, 00:12:37.783 "memory_domains": [ 00:12:37.783 { 00:12:37.783 "dma_device_id": "system", 00:12:37.783 "dma_device_type": 1 00:12:37.783 }, 00:12:37.783 { 00:12:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.783 "dma_device_type": 2 00:12:37.783 } 00:12:37.783 ], 00:12:37.783 "driver_specific": {} 00:12:37.783 }' 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.783 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:38.067 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:38.067 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:38.067 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:38.325 [2024-05-14 21:54:38.699483] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.325 [2024-05-14 21:54:38.699514] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.325 [2024-05-14 21:54:38.699540] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.325 [2024-05-14 21:54:38.699608] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.325 [2024-05-14 21:54:38.699622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e174300 name Existed_Raid, state offline 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 55137 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 55137 ']' 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 55137 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 55137 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:12:38.325 killing process with pid 55137 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55137' 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 55137 00:12:38.325 [2024-05-14 21:54:38.731675] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.325 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 55137 00:12:38.325 [2024-05-14 21:54:38.749022] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:12:38.584 
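[editor's note] The last entries above are the fixture teardown: the raid bdev is deleted, the bdev_svc process behind the RPC socket (pid 55137 in this run) is killed, and the test body returns 0. A hedged sketch of that teardown using only the calls seen in the trace; the pid and socket path are the values from this particular run.

    # Delete the raid bdev first so its base bdevs are released cleanly.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid

    # Stop the bdev_svc app that served the RPC socket (pid from this run).
    # wait only succeeds because the harness launched bdev_svc from this same shell.
    if kill -0 55137 2>/dev/null; then
        kill 55137
        wait 55137
    fi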
00:12:38.584 real 0m23.767s 00:12:38.584 user 0m43.116s 00:12:38.584 sys 0m3.569s 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:38.584 ************************************ 00:12:38.584 END TEST raid_state_function_test 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.584 ************************************ 00:12:38.584 21:54:38 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:38.584 21:54:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:38.584 21:54:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:38.584 21:54:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.584 ************************************ 00:12:38.584 START TEST raid_state_function_test_sb 00:12:38.584 ************************************ 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 
'!=' raid1 ']' 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=55866 00:12:38.584 Process raid pid: 55866 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55866' 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 55866 /var/tmp/spdk-raid.sock 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 55866 ']' 00:12:38.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:38.584 21:54:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.584 [2024-05-14 21:54:38.995437] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:12:38.584 [2024-05-14 21:54:38.995718] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:39.150 EAL: TSC is not safe to use in SMP mode 00:12:39.150 EAL: TSC is not invariant 00:12:39.150 [2024-05-14 21:54:39.537309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.150 [2024-05-14 21:54:39.625408] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
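[editor's note] Above, the superblock variant of the test launches a fresh bdev_svc app with raid debug logging and waits for its RPC socket before issuing any commands. A rough sketch of that startup handshake follows; the polling loop is an illustrative stand-in for the waitforlisten helper rather than its actual implementation, and rpc_get_methods is used only as a cheap liveness probe.

    # Start the minimal bdev application with raid debug logs (assumed repo-relative paths).
    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!

    # Poll the RPC socket until the app answers; stand-in for waitforlisten.
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done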
00:12:39.150 [2024-05-14 21:54:39.627726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.150 [2024-05-14 21:54:39.628499] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.150 [2024-05-14 21:54:39.628514] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.716 21:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.716 21:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:12:39.716 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:39.974 [2024-05-14 21:54:40.320792] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:39.974 [2024-05-14 21:54:40.320848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:39.974 [2024-05-14 21:54:40.320853] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.974 [2024-05-14 21:54:40.320862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.974 [2024-05-14 21:54:40.320866] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:39.974 [2024-05-14 21:54:40.320873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.974 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.232 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:40.232 "name": "Existed_Raid", 00:12:40.232 "uuid": "907cc2af-123c-11ef-8c90-4585f0cfab08", 00:12:40.232 "strip_size_kb": 0, 00:12:40.232 "state": "configuring", 00:12:40.232 "raid_level": "raid1", 00:12:40.232 "superblock": true, 00:12:40.232 "num_base_bdevs": 3, 00:12:40.232 "num_base_bdevs_discovered": 0, 00:12:40.232 
"num_base_bdevs_operational": 3, 00:12:40.232 "base_bdevs_list": [ 00:12:40.232 { 00:12:40.232 "name": "BaseBdev1", 00:12:40.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.232 "is_configured": false, 00:12:40.232 "data_offset": 0, 00:12:40.232 "data_size": 0 00:12:40.232 }, 00:12:40.232 { 00:12:40.232 "name": "BaseBdev2", 00:12:40.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.232 "is_configured": false, 00:12:40.232 "data_offset": 0, 00:12:40.232 "data_size": 0 00:12:40.232 }, 00:12:40.232 { 00:12:40.232 "name": "BaseBdev3", 00:12:40.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.232 "is_configured": false, 00:12:40.232 "data_offset": 0, 00:12:40.232 "data_size": 0 00:12:40.232 } 00:12:40.232 ] 00:12:40.232 }' 00:12:40.232 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:40.232 21:54:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.490 21:54:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:40.749 [2024-05-14 21:54:41.196779] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.749 [2024-05-14 21:54:41.196811] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829466300 name Existed_Raid, state configuring 00:12:40.749 21:54:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:41.007 [2024-05-14 21:54:41.476782] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.007 [2024-05-14 21:54:41.476833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.007 [2024-05-14 21:54:41.476838] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.007 [2024-05-14 21:54:41.476863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.007 [2024-05-14 21:54:41.476867] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:41.007 [2024-05-14 21:54:41.476874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.007 21:54:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:41.264 [2024-05-14 21:54:41.741936] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.264 BaseBdev1 00:12:41.264 21:54:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:41.264 21:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:41.264 21:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:41.264 21:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:41.264 21:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:41.264 21:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:41.264 21:54:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:41.523 21:54:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:41.781 [ 00:12:41.781 { 00:12:41.781 "name": "BaseBdev1", 00:12:41.781 "aliases": [ 00:12:41.781 "91556ff6-123c-11ef-8c90-4585f0cfab08" 00:12:41.781 ], 00:12:41.781 "product_name": "Malloc disk", 00:12:41.781 "block_size": 512, 00:12:41.781 "num_blocks": 65536, 00:12:41.781 "uuid": "91556ff6-123c-11ef-8c90-4585f0cfab08", 00:12:41.781 "assigned_rate_limits": { 00:12:41.781 "rw_ios_per_sec": 0, 00:12:41.781 "rw_mbytes_per_sec": 0, 00:12:41.781 "r_mbytes_per_sec": 0, 00:12:41.781 "w_mbytes_per_sec": 0 00:12:41.781 }, 00:12:41.781 "claimed": true, 00:12:41.781 "claim_type": "exclusive_write", 00:12:41.781 "zoned": false, 00:12:41.781 "supported_io_types": { 00:12:41.781 "read": true, 00:12:41.781 "write": true, 00:12:41.781 "unmap": true, 00:12:41.781 "write_zeroes": true, 00:12:41.781 "flush": true, 00:12:41.781 "reset": true, 00:12:41.781 "compare": false, 00:12:41.781 "compare_and_write": false, 00:12:41.781 "abort": true, 00:12:41.781 "nvme_admin": false, 00:12:41.781 "nvme_io": false 00:12:41.781 }, 00:12:41.781 "memory_domains": [ 00:12:41.781 { 00:12:41.781 "dma_device_id": "system", 00:12:41.781 "dma_device_type": 1 00:12:41.781 }, 00:12:41.781 { 00:12:41.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.781 "dma_device_type": 2 00:12:41.782 } 00:12:41.782 ], 00:12:41.782 "driver_specific": {} 00:12:41.782 } 00:12:41.782 ] 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.782 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.039 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:42.039 "name": "Existed_Raid", 00:12:42.039 "uuid": 
"912d26a6-123c-11ef-8c90-4585f0cfab08", 00:12:42.039 "strip_size_kb": 0, 00:12:42.039 "state": "configuring", 00:12:42.039 "raid_level": "raid1", 00:12:42.039 "superblock": true, 00:12:42.039 "num_base_bdevs": 3, 00:12:42.039 "num_base_bdevs_discovered": 1, 00:12:42.039 "num_base_bdevs_operational": 3, 00:12:42.039 "base_bdevs_list": [ 00:12:42.039 { 00:12:42.039 "name": "BaseBdev1", 00:12:42.039 "uuid": "91556ff6-123c-11ef-8c90-4585f0cfab08", 00:12:42.039 "is_configured": true, 00:12:42.039 "data_offset": 2048, 00:12:42.039 "data_size": 63488 00:12:42.039 }, 00:12:42.039 { 00:12:42.039 "name": "BaseBdev2", 00:12:42.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.039 "is_configured": false, 00:12:42.039 "data_offset": 0, 00:12:42.039 "data_size": 0 00:12:42.039 }, 00:12:42.039 { 00:12:42.039 "name": "BaseBdev3", 00:12:42.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.039 "is_configured": false, 00:12:42.039 "data_offset": 0, 00:12:42.039 "data_size": 0 00:12:42.039 } 00:12:42.039 ] 00:12:42.039 }' 00:12:42.039 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:42.039 21:54:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.300 21:54:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:42.559 [2024-05-14 21:54:43.076794] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.559 [2024-05-14 21:54:43.076831] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829466300 name Existed_Raid, state configuring 00:12:42.559 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:42.816 [2024-05-14 21:54:43.340815] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.816 [2024-05-14 21:54:43.341648] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.816 [2024-05-14 21:54:43.341703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.816 [2024-05-14 21:54:43.341708] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.816 [2024-05-14 21:54:43.341717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.816 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.077 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:43.077 "name": "Existed_Raid", 00:12:43.077 "uuid": "9249942c-123c-11ef-8c90-4585f0cfab08", 00:12:43.077 "strip_size_kb": 0, 00:12:43.077 "state": "configuring", 00:12:43.077 "raid_level": "raid1", 00:12:43.077 "superblock": true, 00:12:43.077 "num_base_bdevs": 3, 00:12:43.077 "num_base_bdevs_discovered": 1, 00:12:43.077 "num_base_bdevs_operational": 3, 00:12:43.077 "base_bdevs_list": [ 00:12:43.077 { 00:12:43.077 "name": "BaseBdev1", 00:12:43.077 "uuid": "91556ff6-123c-11ef-8c90-4585f0cfab08", 00:12:43.077 "is_configured": true, 00:12:43.077 "data_offset": 2048, 00:12:43.077 "data_size": 63488 00:12:43.077 }, 00:12:43.077 { 00:12:43.077 "name": "BaseBdev2", 00:12:43.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.077 "is_configured": false, 00:12:43.077 "data_offset": 0, 00:12:43.077 "data_size": 0 00:12:43.077 }, 00:12:43.077 { 00:12:43.077 "name": "BaseBdev3", 00:12:43.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.077 "is_configured": false, 00:12:43.077 "data_offset": 0, 00:12:43.077 "data_size": 0 00:12:43.077 } 00:12:43.077 ] 00:12:43.077 }' 00:12:43.077 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:43.077 21:54:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.645 21:54:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.903 [2024-05-14 21:54:44.260999] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.903 BaseBdev2 00:12:43.903 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:43.903 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:43.903 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:43.903 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:43.903 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:43.903 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:43.903 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:44.161 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.420 [ 00:12:44.420 { 00:12:44.420 "name": "BaseBdev2", 00:12:44.420 "aliases": [ 00:12:44.420 "92d5f707-123c-11ef-8c90-4585f0cfab08" 00:12:44.420 ], 00:12:44.420 "product_name": "Malloc disk", 00:12:44.420 "block_size": 512, 00:12:44.420 "num_blocks": 65536, 00:12:44.420 "uuid": "92d5f707-123c-11ef-8c90-4585f0cfab08", 00:12:44.420 "assigned_rate_limits": { 00:12:44.420 "rw_ios_per_sec": 0, 00:12:44.420 "rw_mbytes_per_sec": 0, 00:12:44.420 "r_mbytes_per_sec": 0, 00:12:44.420 "w_mbytes_per_sec": 0 00:12:44.420 }, 00:12:44.420 "claimed": true, 00:12:44.420 "claim_type": "exclusive_write", 00:12:44.420 "zoned": false, 00:12:44.420 "supported_io_types": { 00:12:44.420 "read": true, 00:12:44.420 "write": true, 00:12:44.420 "unmap": true, 00:12:44.420 "write_zeroes": true, 00:12:44.420 "flush": true, 00:12:44.420 "reset": true, 00:12:44.420 "compare": false, 00:12:44.420 "compare_and_write": false, 00:12:44.420 "abort": true, 00:12:44.420 "nvme_admin": false, 00:12:44.420 "nvme_io": false 00:12:44.420 }, 00:12:44.420 "memory_domains": [ 00:12:44.420 { 00:12:44.420 "dma_device_id": "system", 00:12:44.420 "dma_device_type": 1 00:12:44.420 }, 00:12:44.420 { 00:12:44.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.420 "dma_device_type": 2 00:12:44.420 } 00:12:44.420 ], 00:12:44.420 "driver_specific": {} 00:12:44.420 } 00:12:44.420 ] 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.420 21:54:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.679 21:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:44.679 "name": "Existed_Raid", 00:12:44.679 "uuid": "9249942c-123c-11ef-8c90-4585f0cfab08", 00:12:44.679 "strip_size_kb": 0, 
00:12:44.679 "state": "configuring", 00:12:44.679 "raid_level": "raid1", 00:12:44.679 "superblock": true, 00:12:44.679 "num_base_bdevs": 3, 00:12:44.679 "num_base_bdevs_discovered": 2, 00:12:44.679 "num_base_bdevs_operational": 3, 00:12:44.679 "base_bdevs_list": [ 00:12:44.679 { 00:12:44.679 "name": "BaseBdev1", 00:12:44.679 "uuid": "91556ff6-123c-11ef-8c90-4585f0cfab08", 00:12:44.679 "is_configured": true, 00:12:44.679 "data_offset": 2048, 00:12:44.679 "data_size": 63488 00:12:44.679 }, 00:12:44.679 { 00:12:44.679 "name": "BaseBdev2", 00:12:44.679 "uuid": "92d5f707-123c-11ef-8c90-4585f0cfab08", 00:12:44.679 "is_configured": true, 00:12:44.679 "data_offset": 2048, 00:12:44.679 "data_size": 63488 00:12:44.679 }, 00:12:44.679 { 00:12:44.679 "name": "BaseBdev3", 00:12:44.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.679 "is_configured": false, 00:12:44.679 "data_offset": 0, 00:12:44.679 "data_size": 0 00:12:44.679 } 00:12:44.679 ] 00:12:44.679 }' 00:12:44.679 21:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:44.679 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.938 21:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:45.196 [2024-05-14 21:54:45.597101] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.196 [2024-05-14 21:54:45.597229] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x829466300 00:12:45.196 [2024-05-14 21:54:45.597237] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.196 [2024-05-14 21:54:45.597259] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8294c4ec0 00:12:45.196 [2024-05-14 21:54:45.597323] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829466300 00:12:45.196 [2024-05-14 21:54:45.597329] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x829466300 00:12:45.196 [2024-05-14 21:54:45.597351] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.196 BaseBdev3 00:12:45.196 21:54:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:12:45.196 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:45.196 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:45.196 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:45.196 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:45.196 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:45.196 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:45.454 21:54:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:45.713 [ 00:12:45.713 { 00:12:45.713 "name": "BaseBdev3", 00:12:45.713 "aliases": [ 00:12:45.713 "93a1d641-123c-11ef-8c90-4585f0cfab08" 00:12:45.713 ], 
00:12:45.713 "product_name": "Malloc disk", 00:12:45.713 "block_size": 512, 00:12:45.713 "num_blocks": 65536, 00:12:45.713 "uuid": "93a1d641-123c-11ef-8c90-4585f0cfab08", 00:12:45.713 "assigned_rate_limits": { 00:12:45.713 "rw_ios_per_sec": 0, 00:12:45.713 "rw_mbytes_per_sec": 0, 00:12:45.713 "r_mbytes_per_sec": 0, 00:12:45.713 "w_mbytes_per_sec": 0 00:12:45.713 }, 00:12:45.713 "claimed": true, 00:12:45.713 "claim_type": "exclusive_write", 00:12:45.713 "zoned": false, 00:12:45.713 "supported_io_types": { 00:12:45.713 "read": true, 00:12:45.713 "write": true, 00:12:45.713 "unmap": true, 00:12:45.713 "write_zeroes": true, 00:12:45.713 "flush": true, 00:12:45.713 "reset": true, 00:12:45.713 "compare": false, 00:12:45.713 "compare_and_write": false, 00:12:45.713 "abort": true, 00:12:45.713 "nvme_admin": false, 00:12:45.713 "nvme_io": false 00:12:45.713 }, 00:12:45.713 "memory_domains": [ 00:12:45.713 { 00:12:45.713 "dma_device_id": "system", 00:12:45.713 "dma_device_type": 1 00:12:45.713 }, 00:12:45.713 { 00:12:45.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.713 "dma_device_type": 2 00:12:45.713 } 00:12:45.713 ], 00:12:45.713 "driver_specific": {} 00:12:45.713 } 00:12:45.713 ] 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.713 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.971 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:45.971 "name": "Existed_Raid", 00:12:45.971 "uuid": "9249942c-123c-11ef-8c90-4585f0cfab08", 00:12:45.971 "strip_size_kb": 0, 00:12:45.971 "state": "online", 00:12:45.971 "raid_level": "raid1", 00:12:45.971 "superblock": true, 00:12:45.971 "num_base_bdevs": 3, 00:12:45.971 "num_base_bdevs_discovered": 3, 00:12:45.971 "num_base_bdevs_operational": 3, 00:12:45.971 "base_bdevs_list": [ 00:12:45.971 { 00:12:45.971 
"name": "BaseBdev1", 00:12:45.971 "uuid": "91556ff6-123c-11ef-8c90-4585f0cfab08", 00:12:45.971 "is_configured": true, 00:12:45.971 "data_offset": 2048, 00:12:45.971 "data_size": 63488 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "name": "BaseBdev2", 00:12:45.971 "uuid": "92d5f707-123c-11ef-8c90-4585f0cfab08", 00:12:45.971 "is_configured": true, 00:12:45.971 "data_offset": 2048, 00:12:45.971 "data_size": 63488 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "name": "BaseBdev3", 00:12:45.971 "uuid": "93a1d641-123c-11ef-8c90-4585f0cfab08", 00:12:45.971 "is_configured": true, 00:12:45.971 "data_offset": 2048, 00:12:45.971 "data_size": 63488 00:12:45.971 } 00:12:45.971 ] 00:12:45.971 }' 00:12:45.971 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:45.971 21:54:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:46.228 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:46.486 [2024-05-14 21:54:46.917104] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.486 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:46.486 "name": "Existed_Raid", 00:12:46.486 "aliases": [ 00:12:46.486 "9249942c-123c-11ef-8c90-4585f0cfab08" 00:12:46.486 ], 00:12:46.486 "product_name": "Raid Volume", 00:12:46.486 "block_size": 512, 00:12:46.486 "num_blocks": 63488, 00:12:46.486 "uuid": "9249942c-123c-11ef-8c90-4585f0cfab08", 00:12:46.486 "assigned_rate_limits": { 00:12:46.486 "rw_ios_per_sec": 0, 00:12:46.486 "rw_mbytes_per_sec": 0, 00:12:46.486 "r_mbytes_per_sec": 0, 00:12:46.486 "w_mbytes_per_sec": 0 00:12:46.486 }, 00:12:46.486 "claimed": false, 00:12:46.486 "zoned": false, 00:12:46.486 "supported_io_types": { 00:12:46.486 "read": true, 00:12:46.486 "write": true, 00:12:46.486 "unmap": false, 00:12:46.486 "write_zeroes": true, 00:12:46.486 "flush": false, 00:12:46.486 "reset": true, 00:12:46.486 "compare": false, 00:12:46.486 "compare_and_write": false, 00:12:46.486 "abort": false, 00:12:46.486 "nvme_admin": false, 00:12:46.486 "nvme_io": false 00:12:46.486 }, 00:12:46.486 "memory_domains": [ 00:12:46.486 { 00:12:46.486 "dma_device_id": "system", 00:12:46.486 "dma_device_type": 1 00:12:46.486 }, 00:12:46.486 { 00:12:46.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.486 "dma_device_type": 2 00:12:46.486 }, 00:12:46.486 { 00:12:46.486 "dma_device_id": "system", 00:12:46.486 "dma_device_type": 1 00:12:46.486 }, 00:12:46.486 { 00:12:46.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.486 "dma_device_type": 2 00:12:46.487 }, 
00:12:46.487 { 00:12:46.487 "dma_device_id": "system", 00:12:46.487 "dma_device_type": 1 00:12:46.487 }, 00:12:46.487 { 00:12:46.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.487 "dma_device_type": 2 00:12:46.487 } 00:12:46.487 ], 00:12:46.487 "driver_specific": { 00:12:46.487 "raid": { 00:12:46.487 "uuid": "9249942c-123c-11ef-8c90-4585f0cfab08", 00:12:46.487 "strip_size_kb": 0, 00:12:46.487 "state": "online", 00:12:46.487 "raid_level": "raid1", 00:12:46.487 "superblock": true, 00:12:46.487 "num_base_bdevs": 3, 00:12:46.487 "num_base_bdevs_discovered": 3, 00:12:46.487 "num_base_bdevs_operational": 3, 00:12:46.487 "base_bdevs_list": [ 00:12:46.487 { 00:12:46.487 "name": "BaseBdev1", 00:12:46.487 "uuid": "91556ff6-123c-11ef-8c90-4585f0cfab08", 00:12:46.487 "is_configured": true, 00:12:46.487 "data_offset": 2048, 00:12:46.487 "data_size": 63488 00:12:46.487 }, 00:12:46.487 { 00:12:46.487 "name": "BaseBdev2", 00:12:46.487 "uuid": "92d5f707-123c-11ef-8c90-4585f0cfab08", 00:12:46.487 "is_configured": true, 00:12:46.487 "data_offset": 2048, 00:12:46.487 "data_size": 63488 00:12:46.487 }, 00:12:46.487 { 00:12:46.487 "name": "BaseBdev3", 00:12:46.487 "uuid": "93a1d641-123c-11ef-8c90-4585f0cfab08", 00:12:46.487 "is_configured": true, 00:12:46.487 "data_offset": 2048, 00:12:46.487 "data_size": 63488 00:12:46.487 } 00:12:46.487 ] 00:12:46.487 } 00:12:46.487 } 00:12:46.487 }' 00:12:46.487 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.487 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:46.487 BaseBdev2 00:12:46.487 BaseBdev3' 00:12:46.487 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:46.487 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:46.487 21:54:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:46.745 "name": "BaseBdev1", 00:12:46.745 "aliases": [ 00:12:46.745 "91556ff6-123c-11ef-8c90-4585f0cfab08" 00:12:46.745 ], 00:12:46.745 "product_name": "Malloc disk", 00:12:46.745 "block_size": 512, 00:12:46.745 "num_blocks": 65536, 00:12:46.745 "uuid": "91556ff6-123c-11ef-8c90-4585f0cfab08", 00:12:46.745 "assigned_rate_limits": { 00:12:46.745 "rw_ios_per_sec": 0, 00:12:46.745 "rw_mbytes_per_sec": 0, 00:12:46.745 "r_mbytes_per_sec": 0, 00:12:46.745 "w_mbytes_per_sec": 0 00:12:46.745 }, 00:12:46.745 "claimed": true, 00:12:46.745 "claim_type": "exclusive_write", 00:12:46.745 "zoned": false, 00:12:46.745 "supported_io_types": { 00:12:46.745 "read": true, 00:12:46.745 "write": true, 00:12:46.745 "unmap": true, 00:12:46.745 "write_zeroes": true, 00:12:46.745 "flush": true, 00:12:46.745 "reset": true, 00:12:46.745 "compare": false, 00:12:46.745 "compare_and_write": false, 00:12:46.745 "abort": true, 00:12:46.745 "nvme_admin": false, 00:12:46.745 "nvme_io": false 00:12:46.745 }, 00:12:46.745 "memory_domains": [ 00:12:46.745 { 00:12:46.745 "dma_device_id": "system", 00:12:46.745 "dma_device_type": 1 00:12:46.745 }, 00:12:46.745 { 00:12:46.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.745 "dma_device_type": 2 00:12:46.745 } 00:12:46.745 ], 00:12:46.745 "driver_specific": {} 
00:12:46.745 }' 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:46.745 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:47.005 "name": "BaseBdev2", 00:12:47.005 "aliases": [ 00:12:47.005 "92d5f707-123c-11ef-8c90-4585f0cfab08" 00:12:47.005 ], 00:12:47.005 "product_name": "Malloc disk", 00:12:47.005 "block_size": 512, 00:12:47.005 "num_blocks": 65536, 00:12:47.005 "uuid": "92d5f707-123c-11ef-8c90-4585f0cfab08", 00:12:47.005 "assigned_rate_limits": { 00:12:47.005 "rw_ios_per_sec": 0, 00:12:47.005 "rw_mbytes_per_sec": 0, 00:12:47.005 "r_mbytes_per_sec": 0, 00:12:47.005 "w_mbytes_per_sec": 0 00:12:47.005 }, 00:12:47.005 "claimed": true, 00:12:47.005 "claim_type": "exclusive_write", 00:12:47.005 "zoned": false, 00:12:47.005 "supported_io_types": { 00:12:47.005 "read": true, 00:12:47.005 "write": true, 00:12:47.005 "unmap": true, 00:12:47.005 "write_zeroes": true, 00:12:47.005 "flush": true, 00:12:47.005 "reset": true, 00:12:47.005 "compare": false, 00:12:47.005 "compare_and_write": false, 00:12:47.005 "abort": true, 00:12:47.005 "nvme_admin": false, 00:12:47.005 "nvme_io": false 00:12:47.005 }, 00:12:47.005 "memory_domains": [ 00:12:47.005 { 00:12:47.005 "dma_device_id": "system", 00:12:47.005 "dma_device_type": 1 00:12:47.005 }, 00:12:47.005 { 00:12:47.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.005 "dma_device_type": 2 00:12:47.005 } 00:12:47.005 ], 00:12:47.005 "driver_specific": {} 00:12:47.005 }' 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:47.005 
21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:47.005 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:47.573 "name": "BaseBdev3", 00:12:47.573 "aliases": [ 00:12:47.573 "93a1d641-123c-11ef-8c90-4585f0cfab08" 00:12:47.573 ], 00:12:47.573 "product_name": "Malloc disk", 00:12:47.573 "block_size": 512, 00:12:47.573 "num_blocks": 65536, 00:12:47.573 "uuid": "93a1d641-123c-11ef-8c90-4585f0cfab08", 00:12:47.573 "assigned_rate_limits": { 00:12:47.573 "rw_ios_per_sec": 0, 00:12:47.573 "rw_mbytes_per_sec": 0, 00:12:47.573 "r_mbytes_per_sec": 0, 00:12:47.573 "w_mbytes_per_sec": 0 00:12:47.573 }, 00:12:47.573 "claimed": true, 00:12:47.573 "claim_type": "exclusive_write", 00:12:47.573 "zoned": false, 00:12:47.573 "supported_io_types": { 00:12:47.573 "read": true, 00:12:47.573 "write": true, 00:12:47.573 "unmap": true, 00:12:47.573 "write_zeroes": true, 00:12:47.573 "flush": true, 00:12:47.573 "reset": true, 00:12:47.573 "compare": false, 00:12:47.573 "compare_and_write": false, 00:12:47.573 "abort": true, 00:12:47.573 "nvme_admin": false, 00:12:47.573 "nvme_io": false 00:12:47.573 }, 00:12:47.573 "memory_domains": [ 00:12:47.573 { 00:12:47.573 "dma_device_id": "system", 00:12:47.573 "dma_device_type": 1 00:12:47.573 }, 00:12:47.573 { 00:12:47.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.573 "dma_device_type": 2 00:12:47.573 } 00:12:47.573 ], 00:12:47.573 "driver_specific": {} 00:12:47.573 }' 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:47.573 21:54:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:47.573 21:54:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:47.573 [2024-05-14 21:54:48.149025] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.831 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:47.831 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:47.832 "name": "Existed_Raid", 00:12:47.832 "uuid": "9249942c-123c-11ef-8c90-4585f0cfab08", 00:12:47.832 "strip_size_kb": 0, 00:12:47.832 "state": "online", 00:12:47.832 "raid_level": "raid1", 00:12:47.832 "superblock": true, 00:12:47.832 "num_base_bdevs": 3, 00:12:47.832 "num_base_bdevs_discovered": 2, 00:12:47.832 "num_base_bdevs_operational": 2, 00:12:47.832 "base_bdevs_list": [ 00:12:47.832 { 00:12:47.832 "name": null, 00:12:47.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.832 "is_configured": false, 00:12:47.832 "data_offset": 2048, 00:12:47.832 "data_size": 63488 00:12:47.832 }, 00:12:47.832 { 00:12:47.832 "name": "BaseBdev2", 00:12:47.832 "uuid": 
"92d5f707-123c-11ef-8c90-4585f0cfab08", 00:12:47.832 "is_configured": true, 00:12:47.832 "data_offset": 2048, 00:12:47.832 "data_size": 63488 00:12:47.832 }, 00:12:47.832 { 00:12:47.832 "name": "BaseBdev3", 00:12:47.832 "uuid": "93a1d641-123c-11ef-8c90-4585f0cfab08", 00:12:47.832 "is_configured": true, 00:12:47.832 "data_offset": 2048, 00:12:47.832 "data_size": 63488 00:12:47.832 } 00:12:47.832 ] 00:12:47.832 }' 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:47.832 21:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.399 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:48.399 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.399 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.399 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:48.399 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:48.399 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.399 21:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:48.658 [2024-05-14 21:54:49.226212] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:48.916 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:48.916 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:48.916 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.916 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:49.175 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:49.175 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.175 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:49.433 [2024-05-14 21:54:49.815548] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:49.433 [2024-05-14 21:54:49.815608] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.433 [2024-05-14 21:54:49.824637] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.433 [2024-05-14 21:54:49.824700] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.434 [2024-05-14 21:54:49.824705] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829466300 name Existed_Raid, state offline 00:12:49.434 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.434 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.434 21:54:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.434 21:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:49.693 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:49.693 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:49.693 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:12:49.693 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:12:49.693 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:49.693 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.952 BaseBdev2 00:12:49.952 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:12:49.952 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:49.952 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:49.952 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:49.952 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:49.952 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:49.952 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:50.211 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.470 [ 00:12:50.470 { 00:12:50.470 "name": "BaseBdev2", 00:12:50.470 "aliases": [ 00:12:50.470 "967c637e-123c-11ef-8c90-4585f0cfab08" 00:12:50.470 ], 00:12:50.470 "product_name": "Malloc disk", 00:12:50.470 "block_size": 512, 00:12:50.470 "num_blocks": 65536, 00:12:50.470 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:50.470 "assigned_rate_limits": { 00:12:50.470 "rw_ios_per_sec": 0, 00:12:50.470 "rw_mbytes_per_sec": 0, 00:12:50.470 "r_mbytes_per_sec": 0, 00:12:50.470 "w_mbytes_per_sec": 0 00:12:50.470 }, 00:12:50.470 "claimed": false, 00:12:50.470 "zoned": false, 00:12:50.470 "supported_io_types": { 00:12:50.470 "read": true, 00:12:50.470 "write": true, 00:12:50.470 "unmap": true, 00:12:50.470 "write_zeroes": true, 00:12:50.470 "flush": true, 00:12:50.470 "reset": true, 00:12:50.470 "compare": false, 00:12:50.470 "compare_and_write": false, 00:12:50.470 "abort": true, 00:12:50.470 "nvme_admin": false, 00:12:50.470 "nvme_io": false 00:12:50.470 }, 00:12:50.470 "memory_domains": [ 00:12:50.470 { 00:12:50.470 "dma_device_id": "system", 00:12:50.470 "dma_device_type": 1 00:12:50.470 }, 00:12:50.470 { 00:12:50.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.470 "dma_device_type": 2 00:12:50.470 } 00:12:50.470 ], 00:12:50.470 "driver_specific": {} 00:12:50.470 } 00:12:50.470 ] 00:12:50.470 21:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 
0 00:12:50.470 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:12:50.470 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:50.470 21:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.728 BaseBdev3 00:12:50.728 21:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:12:50.728 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:12:50.728 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:50.728 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:50.728 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:50.728 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:50.728 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:50.986 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:51.245 [ 00:12:51.245 { 00:12:51.245 "name": "BaseBdev3", 00:12:51.245 "aliases": [ 00:12:51.245 "96fabb8d-123c-11ef-8c90-4585f0cfab08" 00:12:51.245 ], 00:12:51.245 "product_name": "Malloc disk", 00:12:51.245 "block_size": 512, 00:12:51.245 "num_blocks": 65536, 00:12:51.245 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:51.245 "assigned_rate_limits": { 00:12:51.245 "rw_ios_per_sec": 0, 00:12:51.245 "rw_mbytes_per_sec": 0, 00:12:51.245 "r_mbytes_per_sec": 0, 00:12:51.245 "w_mbytes_per_sec": 0 00:12:51.245 }, 00:12:51.245 "claimed": false, 00:12:51.245 "zoned": false, 00:12:51.245 "supported_io_types": { 00:12:51.245 "read": true, 00:12:51.245 "write": true, 00:12:51.245 "unmap": true, 00:12:51.245 "write_zeroes": true, 00:12:51.245 "flush": true, 00:12:51.245 "reset": true, 00:12:51.245 "compare": false, 00:12:51.245 "compare_and_write": false, 00:12:51.245 "abort": true, 00:12:51.245 "nvme_admin": false, 00:12:51.245 "nvme_io": false 00:12:51.245 }, 00:12:51.245 "memory_domains": [ 00:12:51.245 { 00:12:51.245 "dma_device_id": "system", 00:12:51.245 "dma_device_type": 1 00:12:51.245 }, 00:12:51.245 { 00:12:51.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.245 "dma_device_type": 2 00:12:51.245 } 00:12:51.245 ], 00:12:51.245 "driver_specific": {} 00:12:51.245 } 00:12:51.245 ] 00:12:51.245 21:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:51.245 21:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:12:51.245 21:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:12:51.245 21:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:51.503 [2024-05-14 21:54:52.028721] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
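At this point the trace shows the test registering the raid1 volume with a superblock before all of its base bdevs exist, which is why the volume sits in the "configuring" state in the dumps that follow until BaseBdev1 is added. A minimal standalone sketch of the RPC sequence traced above (not the test script itself; it reuses the socket path, sizes and names from this run and assumes an SPDK target is already listening on that socket) would be roughly:

    # Sketch of the calls traced above, not part of the test output.
    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b BaseBdev2    # 32 MiB malloc disk, 512 B blocks
    $RPC bdev_malloc_create 32 512 -b BaseBdev3
    # -s writes a superblock; the raid stays "configuring" until BaseBdev1 also exists
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid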
00:12:51.503 [2024-05-14 21:54:52.028801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.503 [2024-05-14 21:54:52.028813] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.503 [2024-05-14 21:54:52.029556] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.503 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.762 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:51.762 "name": "Existed_Raid", 00:12:51.762 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:12:51.762 "strip_size_kb": 0, 00:12:51.762 "state": "configuring", 00:12:51.762 "raid_level": "raid1", 00:12:51.762 "superblock": true, 00:12:51.762 "num_base_bdevs": 3, 00:12:51.762 "num_base_bdevs_discovered": 2, 00:12:51.762 "num_base_bdevs_operational": 3, 00:12:51.762 "base_bdevs_list": [ 00:12:51.762 { 00:12:51.762 "name": "BaseBdev1", 00:12:51.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.762 "is_configured": false, 00:12:51.762 "data_offset": 0, 00:12:51.762 "data_size": 0 00:12:51.762 }, 00:12:51.762 { 00:12:51.762 "name": "BaseBdev2", 00:12:51.762 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:51.762 "is_configured": true, 00:12:51.762 "data_offset": 2048, 00:12:51.762 "data_size": 63488 00:12:51.762 }, 00:12:51.762 { 00:12:51.762 "name": "BaseBdev3", 00:12:51.762 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:51.762 "is_configured": true, 00:12:51.762 "data_offset": 2048, 00:12:51.762 "data_size": 63488 00:12:51.762 } 00:12:51.762 ] 00:12:51.762 }' 00:12:51.762 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:51.762 21:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:52.377 [2024-05-14 
21:54:52.932761] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.377 21:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.635 21:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:52.635 "name": "Existed_Raid", 00:12:52.635 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:12:52.635 "strip_size_kb": 0, 00:12:52.635 "state": "configuring", 00:12:52.635 "raid_level": "raid1", 00:12:52.635 "superblock": true, 00:12:52.635 "num_base_bdevs": 3, 00:12:52.635 "num_base_bdevs_discovered": 1, 00:12:52.635 "num_base_bdevs_operational": 3, 00:12:52.635 "base_bdevs_list": [ 00:12:52.635 { 00:12:52.635 "name": "BaseBdev1", 00:12:52.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.635 "is_configured": false, 00:12:52.635 "data_offset": 0, 00:12:52.635 "data_size": 0 00:12:52.635 }, 00:12:52.635 { 00:12:52.635 "name": null, 00:12:52.635 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:52.635 "is_configured": false, 00:12:52.635 "data_offset": 2048, 00:12:52.635 "data_size": 63488 00:12:52.635 }, 00:12:52.635 { 00:12:52.635 "name": "BaseBdev3", 00:12:52.635 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:52.635 "is_configured": true, 00:12:52.635 "data_offset": 2048, 00:12:52.635 "data_size": 63488 00:12:52.635 } 00:12:52.635 ] 00:12:52.635 }' 00:12:52.635 21:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:52.635 21:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.203 21:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.203 21:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.203 21:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:12:53.203 21:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.461 [2024-05-14 21:54:53.996948] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.461 BaseBdev1 00:12:53.461 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:12:53.461 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:53.461 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:53.461 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:53.461 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:53.461 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:53.461 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:53.719 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.978 [ 00:12:53.978 { 00:12:53.978 "name": "BaseBdev1", 00:12:53.978 "aliases": [ 00:12:53.978 "98a38d5f-123c-11ef-8c90-4585f0cfab08" 00:12:53.978 ], 00:12:53.978 "product_name": "Malloc disk", 00:12:53.978 "block_size": 512, 00:12:53.978 "num_blocks": 65536, 00:12:53.978 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:12:53.978 "assigned_rate_limits": { 00:12:53.978 "rw_ios_per_sec": 0, 00:12:53.978 "rw_mbytes_per_sec": 0, 00:12:53.978 "r_mbytes_per_sec": 0, 00:12:53.978 "w_mbytes_per_sec": 0 00:12:53.978 }, 00:12:53.978 "claimed": true, 00:12:53.978 "claim_type": "exclusive_write", 00:12:53.978 "zoned": false, 00:12:53.978 "supported_io_types": { 00:12:53.978 "read": true, 00:12:53.978 "write": true, 00:12:53.978 "unmap": true, 00:12:53.978 "write_zeroes": true, 00:12:53.978 "flush": true, 00:12:53.978 "reset": true, 00:12:53.978 "compare": false, 00:12:53.978 "compare_and_write": false, 00:12:53.978 "abort": true, 00:12:53.978 "nvme_admin": false, 00:12:53.978 "nvme_io": false 00:12:53.978 }, 00:12:53.978 "memory_domains": [ 00:12:53.978 { 00:12:53.978 "dma_device_id": "system", 00:12:53.978 "dma_device_type": 1 00:12:53.978 }, 00:12:53.978 { 00:12:53.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.978 "dma_device_type": 2 00:12:53.978 } 00:12:53.978 ], 00:12:53.978 "driver_specific": {} 00:12:53.978 } 00:12:53.978 ] 00:12:53.978 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:53.978 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:53.978 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:53.978 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.979 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.236 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:54.237 "name": "Existed_Raid", 00:12:54.237 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:12:54.237 "strip_size_kb": 0, 00:12:54.237 "state": "configuring", 00:12:54.237 "raid_level": "raid1", 00:12:54.237 "superblock": true, 00:12:54.237 "num_base_bdevs": 3, 00:12:54.237 "num_base_bdevs_discovered": 2, 00:12:54.237 "num_base_bdevs_operational": 3, 00:12:54.237 "base_bdevs_list": [ 00:12:54.237 { 00:12:54.237 "name": "BaseBdev1", 00:12:54.237 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:12:54.237 "is_configured": true, 00:12:54.237 "data_offset": 2048, 00:12:54.237 "data_size": 63488 00:12:54.237 }, 00:12:54.237 { 00:12:54.237 "name": null, 00:12:54.237 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:54.237 "is_configured": false, 00:12:54.237 "data_offset": 2048, 00:12:54.237 "data_size": 63488 00:12:54.237 }, 00:12:54.237 { 00:12:54.237 "name": "BaseBdev3", 00:12:54.237 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:54.237 "is_configured": true, 00:12:54.237 "data_offset": 2048, 00:12:54.237 "data_size": 63488 00:12:54.237 } 00:12:54.237 ] 00:12:54.237 }' 00:12:54.237 21:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:54.237 21:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.803 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.803 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:55.061 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:55.061 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:55.320 [2024-05-14 21:54:55.692838] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.320 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.578 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:55.578 "name": "Existed_Raid", 00:12:55.578 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:12:55.578 "strip_size_kb": 0, 00:12:55.578 "state": "configuring", 00:12:55.578 "raid_level": "raid1", 00:12:55.578 "superblock": true, 00:12:55.578 "num_base_bdevs": 3, 00:12:55.578 "num_base_bdevs_discovered": 1, 00:12:55.578 "num_base_bdevs_operational": 3, 00:12:55.578 "base_bdevs_list": [ 00:12:55.578 { 00:12:55.578 "name": "BaseBdev1", 00:12:55.578 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:12:55.578 "is_configured": true, 00:12:55.578 "data_offset": 2048, 00:12:55.578 "data_size": 63488 00:12:55.578 }, 00:12:55.578 { 00:12:55.578 "name": null, 00:12:55.578 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:55.578 "is_configured": false, 00:12:55.578 "data_offset": 2048, 00:12:55.578 "data_size": 63488 00:12:55.578 }, 00:12:55.578 { 00:12:55.578 "name": null, 00:12:55.578 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:55.578 "is_configured": false, 00:12:55.578 "data_offset": 2048, 00:12:55.578 "data_size": 63488 00:12:55.578 } 00:12:55.578 ] 00:12:55.578 }' 00:12:55.578 21:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:55.578 21:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.836 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:55.836 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.093 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:12:56.093 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:56.351 [2024-05-14 21:54:56.828924] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:56.351 21:54:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.351 21:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.608 21:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:56.608 "name": "Existed_Raid", 00:12:56.608 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:12:56.608 "strip_size_kb": 0, 00:12:56.608 "state": "configuring", 00:12:56.608 "raid_level": "raid1", 00:12:56.608 "superblock": true, 00:12:56.608 "num_base_bdevs": 3, 00:12:56.608 "num_base_bdevs_discovered": 2, 00:12:56.608 "num_base_bdevs_operational": 3, 00:12:56.608 "base_bdevs_list": [ 00:12:56.608 { 00:12:56.608 "name": "BaseBdev1", 00:12:56.608 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:12:56.608 "is_configured": true, 00:12:56.608 "data_offset": 2048, 00:12:56.608 "data_size": 63488 00:12:56.608 }, 00:12:56.608 { 00:12:56.608 "name": null, 00:12:56.608 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:56.608 "is_configured": false, 00:12:56.608 "data_offset": 2048, 00:12:56.608 "data_size": 63488 00:12:56.608 }, 00:12:56.608 { 00:12:56.608 "name": "BaseBdev3", 00:12:56.608 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:56.608 "is_configured": true, 00:12:56.608 "data_offset": 2048, 00:12:56.608 "data_size": 63488 00:12:56.608 } 00:12:56.608 ] 00:12:56.608 }' 00:12:56.608 21:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:56.608 21:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.173 21:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.173 21:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.173 21:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:12:57.173 21:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:57.431 [2024-05-14 21:54:57.980945] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.431 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:57.431 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:57.431 21:54:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:57.431 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.432 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.999 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:57.999 "name": "Existed_Raid", 00:12:57.999 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:12:57.999 "strip_size_kb": 0, 00:12:57.999 "state": "configuring", 00:12:57.999 "raid_level": "raid1", 00:12:57.999 "superblock": true, 00:12:57.999 "num_base_bdevs": 3, 00:12:57.999 "num_base_bdevs_discovered": 1, 00:12:57.999 "num_base_bdevs_operational": 3, 00:12:57.999 "base_bdevs_list": [ 00:12:57.999 { 00:12:57.999 "name": null, 00:12:57.999 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:12:57.999 "is_configured": false, 00:12:57.999 "data_offset": 2048, 00:12:57.999 "data_size": 63488 00:12:57.999 }, 00:12:57.999 { 00:12:57.999 "name": null, 00:12:57.999 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:57.999 "is_configured": false, 00:12:57.999 "data_offset": 2048, 00:12:57.999 "data_size": 63488 00:12:57.999 }, 00:12:57.999 { 00:12:57.999 "name": "BaseBdev3", 00:12:57.999 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:57.999 "is_configured": true, 00:12:57.999 "data_offset": 2048, 00:12:57.999 "data_size": 63488 00:12:57.999 } 00:12:57.999 ] 00:12:57.999 }' 00:12:57.999 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:57.999 21:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.257 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.257 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.515 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:12:58.515 21:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:58.773 [2024-05-14 21:54:59.253706] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.773 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
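The verify_raid_bdev_state expansion that follows boils down to dumping the raid bdev with bdev_raid_get_bdevs, filtering it by name with jq, and comparing the state, raid_level and base-bdev counts against the expected values. A simplified, hypothetical re-implementation of that check is sketched below; the helper name check_raid_state is illustrative only, while the socket path and JSON field names are taken from the dumps in this log:

    # Hedged sketch of the state check this trace performs; not the literal
    # bdev_raid.sh helper, just the same rpc.py + jq pattern.
    check_raid_state() {
        local name=$1 expected_state=$2 expected_level=$3
        local info
        info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
               | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r .raid_level <<< "$info") == "$expected_level" ]] || return 1
    }
    check_raid_state Existed_Raid configuring raid1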
00:12:58.773 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.774 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.032 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:59.032 "name": "Existed_Raid", 00:12:59.032 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:12:59.032 "strip_size_kb": 0, 00:12:59.032 "state": "configuring", 00:12:59.032 "raid_level": "raid1", 00:12:59.032 "superblock": true, 00:12:59.032 "num_base_bdevs": 3, 00:12:59.032 "num_base_bdevs_discovered": 2, 00:12:59.032 "num_base_bdevs_operational": 3, 00:12:59.032 "base_bdevs_list": [ 00:12:59.032 { 00:12:59.032 "name": null, 00:12:59.032 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:12:59.032 "is_configured": false, 00:12:59.032 "data_offset": 2048, 00:12:59.032 "data_size": 63488 00:12:59.032 }, 00:12:59.032 { 00:12:59.032 "name": "BaseBdev2", 00:12:59.032 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:12:59.032 "is_configured": true, 00:12:59.032 "data_offset": 2048, 00:12:59.032 "data_size": 63488 00:12:59.032 }, 00:12:59.032 { 00:12:59.032 "name": "BaseBdev3", 00:12:59.032 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:12:59.032 "is_configured": true, 00:12:59.032 "data_offset": 2048, 00:12:59.032 "data_size": 63488 00:12:59.032 } 00:12:59.032 ] 00:12:59.032 }' 00:12:59.032 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:59.032 21:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.598 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.598 21:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:59.856 21:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:12:59.856 21:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.856 21:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:00.114 
21:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 98a38d5f-123c-11ef-8c90-4585f0cfab08 00:13:00.373 [2024-05-14 21:55:00.797902] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:00.373 [2024-05-14 21:55:00.797968] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x829466300 00:13:00.373 [2024-05-14 21:55:00.797974] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.373 [2024-05-14 21:55:00.797996] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8294c4e20 00:13:00.373 [2024-05-14 21:55:00.798050] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829466300 00:13:00.373 [2024-05-14 21:55:00.798055] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x829466300 00:13:00.373 [2024-05-14 21:55:00.798077] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.373 NewBaseBdev 00:13:00.373 21:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:13:00.373 21:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:13:00.373 21:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:00.373 21:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:00.373 21:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:00.373 21:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:00.373 21:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:00.631 21:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:00.889 [ 00:13:00.889 { 00:13:00.889 "name": "NewBaseBdev", 00:13:00.889 "aliases": [ 00:13:00.889 "98a38d5f-123c-11ef-8c90-4585f0cfab08" 00:13:00.889 ], 00:13:00.889 "product_name": "Malloc disk", 00:13:00.889 "block_size": 512, 00:13:00.889 "num_blocks": 65536, 00:13:00.889 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:13:00.889 "assigned_rate_limits": { 00:13:00.889 "rw_ios_per_sec": 0, 00:13:00.889 "rw_mbytes_per_sec": 0, 00:13:00.889 "r_mbytes_per_sec": 0, 00:13:00.889 "w_mbytes_per_sec": 0 00:13:00.889 }, 00:13:00.889 "claimed": true, 00:13:00.889 "claim_type": "exclusive_write", 00:13:00.889 "zoned": false, 00:13:00.889 "supported_io_types": { 00:13:00.889 "read": true, 00:13:00.889 "write": true, 00:13:00.889 "unmap": true, 00:13:00.889 "write_zeroes": true, 00:13:00.889 "flush": true, 00:13:00.889 "reset": true, 00:13:00.889 "compare": false, 00:13:00.889 "compare_and_write": false, 00:13:00.889 "abort": true, 00:13:00.889 "nvme_admin": false, 00:13:00.889 "nvme_io": false 00:13:00.889 }, 00:13:00.889 "memory_domains": [ 00:13:00.889 { 00:13:00.889 "dma_device_id": "system", 00:13:00.889 "dma_device_type": 1 00:13:00.889 }, 00:13:00.889 { 00:13:00.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.889 "dma_device_type": 2 00:13:00.889 } 00:13:00.889 ], 
00:13:00.889 "driver_specific": {} 00:13:00.889 } 00:13:00.889 ] 00:13:00.889 21:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:00.889 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:00.889 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:00.889 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.890 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.148 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:01.148 "name": "Existed_Raid", 00:13:01.148 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:13:01.148 "strip_size_kb": 0, 00:13:01.148 "state": "online", 00:13:01.148 "raid_level": "raid1", 00:13:01.148 "superblock": true, 00:13:01.148 "num_base_bdevs": 3, 00:13:01.148 "num_base_bdevs_discovered": 3, 00:13:01.148 "num_base_bdevs_operational": 3, 00:13:01.148 "base_bdevs_list": [ 00:13:01.148 { 00:13:01.148 "name": "NewBaseBdev", 00:13:01.148 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:13:01.148 "is_configured": true, 00:13:01.148 "data_offset": 2048, 00:13:01.148 "data_size": 63488 00:13:01.148 }, 00:13:01.148 { 00:13:01.148 "name": "BaseBdev2", 00:13:01.148 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:13:01.148 "is_configured": true, 00:13:01.148 "data_offset": 2048, 00:13:01.148 "data_size": 63488 00:13:01.148 }, 00:13:01.148 { 00:13:01.148 "name": "BaseBdev3", 00:13:01.148 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:13:01.148 "is_configured": true, 00:13:01.148 "data_offset": 2048, 00:13:01.148 "data_size": 63488 00:13:01.148 } 00:13:01.148 ] 00:13:01.148 }' 00:13:01.148 21:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:01.148 21:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # 
local base_bdev_info 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:01.714 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:01.972 [2024-05-14 21:55:02.305795] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.972 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:01.972 "name": "Existed_Raid", 00:13:01.972 "aliases": [ 00:13:01.972 "97773f78-123c-11ef-8c90-4585f0cfab08" 00:13:01.972 ], 00:13:01.972 "product_name": "Raid Volume", 00:13:01.972 "block_size": 512, 00:13:01.972 "num_blocks": 63488, 00:13:01.972 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:13:01.972 "assigned_rate_limits": { 00:13:01.972 "rw_ios_per_sec": 0, 00:13:01.972 "rw_mbytes_per_sec": 0, 00:13:01.972 "r_mbytes_per_sec": 0, 00:13:01.972 "w_mbytes_per_sec": 0 00:13:01.972 }, 00:13:01.972 "claimed": false, 00:13:01.972 "zoned": false, 00:13:01.972 "supported_io_types": { 00:13:01.972 "read": true, 00:13:01.972 "write": true, 00:13:01.972 "unmap": false, 00:13:01.972 "write_zeroes": true, 00:13:01.972 "flush": false, 00:13:01.972 "reset": true, 00:13:01.972 "compare": false, 00:13:01.972 "compare_and_write": false, 00:13:01.972 "abort": false, 00:13:01.972 "nvme_admin": false, 00:13:01.972 "nvme_io": false 00:13:01.972 }, 00:13:01.972 "memory_domains": [ 00:13:01.972 { 00:13:01.972 "dma_device_id": "system", 00:13:01.972 "dma_device_type": 1 00:13:01.972 }, 00:13:01.972 { 00:13:01.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.972 "dma_device_type": 2 00:13:01.972 }, 00:13:01.972 { 00:13:01.972 "dma_device_id": "system", 00:13:01.972 "dma_device_type": 1 00:13:01.972 }, 00:13:01.972 { 00:13:01.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.972 "dma_device_type": 2 00:13:01.972 }, 00:13:01.972 { 00:13:01.972 "dma_device_id": "system", 00:13:01.972 "dma_device_type": 1 00:13:01.972 }, 00:13:01.972 { 00:13:01.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.972 "dma_device_type": 2 00:13:01.972 } 00:13:01.972 ], 00:13:01.972 "driver_specific": { 00:13:01.972 "raid": { 00:13:01.972 "uuid": "97773f78-123c-11ef-8c90-4585f0cfab08", 00:13:01.972 "strip_size_kb": 0, 00:13:01.972 "state": "online", 00:13:01.972 "raid_level": "raid1", 00:13:01.972 "superblock": true, 00:13:01.972 "num_base_bdevs": 3, 00:13:01.972 "num_base_bdevs_discovered": 3, 00:13:01.972 "num_base_bdevs_operational": 3, 00:13:01.972 "base_bdevs_list": [ 00:13:01.972 { 00:13:01.972 "name": "NewBaseBdev", 00:13:01.972 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:13:01.972 "is_configured": true, 00:13:01.972 "data_offset": 2048, 00:13:01.972 "data_size": 63488 00:13:01.972 }, 00:13:01.972 { 00:13:01.972 "name": "BaseBdev2", 00:13:01.972 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:13:01.972 "is_configured": true, 00:13:01.972 "data_offset": 2048, 00:13:01.972 "data_size": 63488 00:13:01.972 }, 00:13:01.972 { 00:13:01.972 "name": "BaseBdev3", 00:13:01.972 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:13:01.972 "is_configured": true, 00:13:01.972 "data_offset": 2048, 00:13:01.972 "data_size": 63488 00:13:01.972 } 00:13:01.972 ] 
00:13:01.972 } 00:13:01.972 } 00:13:01.972 }' 00:13:01.972 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.972 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:13:01.972 BaseBdev2 00:13:01.972 BaseBdev3' 00:13:01.972 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:01.972 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:01.972 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:02.231 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:02.231 "name": "NewBaseBdev", 00:13:02.231 "aliases": [ 00:13:02.231 "98a38d5f-123c-11ef-8c90-4585f0cfab08" 00:13:02.231 ], 00:13:02.231 "product_name": "Malloc disk", 00:13:02.231 "block_size": 512, 00:13:02.231 "num_blocks": 65536, 00:13:02.231 "uuid": "98a38d5f-123c-11ef-8c90-4585f0cfab08", 00:13:02.231 "assigned_rate_limits": { 00:13:02.231 "rw_ios_per_sec": 0, 00:13:02.231 "rw_mbytes_per_sec": 0, 00:13:02.231 "r_mbytes_per_sec": 0, 00:13:02.231 "w_mbytes_per_sec": 0 00:13:02.231 }, 00:13:02.231 "claimed": true, 00:13:02.231 "claim_type": "exclusive_write", 00:13:02.231 "zoned": false, 00:13:02.231 "supported_io_types": { 00:13:02.231 "read": true, 00:13:02.231 "write": true, 00:13:02.231 "unmap": true, 00:13:02.231 "write_zeroes": true, 00:13:02.232 "flush": true, 00:13:02.232 "reset": true, 00:13:02.232 "compare": false, 00:13:02.232 "compare_and_write": false, 00:13:02.232 "abort": true, 00:13:02.232 "nvme_admin": false, 00:13:02.232 "nvme_io": false 00:13:02.232 }, 00:13:02.232 "memory_domains": [ 00:13:02.232 { 00:13:02.232 "dma_device_id": "system", 00:13:02.232 "dma_device_type": 1 00:13:02.232 }, 00:13:02.232 { 00:13:02.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.232 "dma_device_type": 2 00:13:02.232 } 00:13:02.232 ], 00:13:02.232 "driver_specific": {} 00:13:02.232 }' 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:02.232 21:55:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:02.232 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:02.490 "name": "BaseBdev2", 00:13:02.490 "aliases": [ 00:13:02.490 "967c637e-123c-11ef-8c90-4585f0cfab08" 00:13:02.490 ], 00:13:02.490 "product_name": "Malloc disk", 00:13:02.490 "block_size": 512, 00:13:02.490 "num_blocks": 65536, 00:13:02.490 "uuid": "967c637e-123c-11ef-8c90-4585f0cfab08", 00:13:02.490 "assigned_rate_limits": { 00:13:02.490 "rw_ios_per_sec": 0, 00:13:02.490 "rw_mbytes_per_sec": 0, 00:13:02.490 "r_mbytes_per_sec": 0, 00:13:02.490 "w_mbytes_per_sec": 0 00:13:02.490 }, 00:13:02.490 "claimed": true, 00:13:02.490 "claim_type": "exclusive_write", 00:13:02.490 "zoned": false, 00:13:02.490 "supported_io_types": { 00:13:02.490 "read": true, 00:13:02.490 "write": true, 00:13:02.490 "unmap": true, 00:13:02.490 "write_zeroes": true, 00:13:02.490 "flush": true, 00:13:02.490 "reset": true, 00:13:02.490 "compare": false, 00:13:02.490 "compare_and_write": false, 00:13:02.490 "abort": true, 00:13:02.490 "nvme_admin": false, 00:13:02.490 "nvme_io": false 00:13:02.490 }, 00:13:02.490 "memory_domains": [ 00:13:02.490 { 00:13:02.490 "dma_device_id": "system", 00:13:02.490 "dma_device_type": 1 00:13:02.490 }, 00:13:02.490 { 00:13:02.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.490 "dma_device_type": 2 00:13:02.490 } 00:13:02.490 ], 00:13:02.490 "driver_specific": {} 00:13:02.490 }' 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:02.490 21:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:02.747 21:55:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:02.748 "name": "BaseBdev3", 00:13:02.748 "aliases": [ 00:13:02.748 "96fabb8d-123c-11ef-8c90-4585f0cfab08" 00:13:02.748 ], 00:13:02.748 "product_name": "Malloc disk", 00:13:02.748 "block_size": 512, 00:13:02.748 "num_blocks": 65536, 00:13:02.748 "uuid": "96fabb8d-123c-11ef-8c90-4585f0cfab08", 00:13:02.748 "assigned_rate_limits": { 00:13:02.748 "rw_ios_per_sec": 0, 00:13:02.748 "rw_mbytes_per_sec": 0, 00:13:02.748 "r_mbytes_per_sec": 0, 00:13:02.748 "w_mbytes_per_sec": 0 00:13:02.748 }, 00:13:02.748 "claimed": true, 00:13:02.748 "claim_type": "exclusive_write", 00:13:02.748 "zoned": false, 00:13:02.748 "supported_io_types": { 00:13:02.748 "read": true, 00:13:02.748 "write": true, 00:13:02.748 "unmap": true, 00:13:02.748 "write_zeroes": true, 00:13:02.748 "flush": true, 00:13:02.748 "reset": true, 00:13:02.748 "compare": false, 00:13:02.748 "compare_and_write": false, 00:13:02.748 "abort": true, 00:13:02.748 "nvme_admin": false, 00:13:02.748 "nvme_io": false 00:13:02.748 }, 00:13:02.748 "memory_domains": [ 00:13:02.748 { 00:13:02.748 "dma_device_id": "system", 00:13:02.748 "dma_device_type": 1 00:13:02.748 }, 00:13:02.748 { 00:13:02.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.748 "dma_device_type": 2 00:13:02.748 } 00:13:02.748 ], 00:13:02.748 "driver_specific": {} 00:13:02.748 }' 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:02.748 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:03.014 [2024-05-14 21:55:03.537758] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.014 [2024-05-14 21:55:03.537785] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.014 [2024-05-14 21:55:03.537816] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.014 [2024-05-14 21:55:03.537886] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.014 [2024-05-14 21:55:03.537900] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829466300 name Existed_Raid, state offline 00:13:03.014 21:55:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 55866 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 55866 ']' 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 55866 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 55866 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:03.014 killing process with pid 55866 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55866' 00:13:03.014 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 55866 00:13:03.014 [2024-05-14 21:55:03.565701] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.015 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 55866 00:13:03.015 [2024-05-14 21:55:03.582961] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.272 21:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:13:03.272 00:13:03.272 real 0m24.777s 00:13:03.272 user 0m45.392s 00:13:03.272 sys 0m3.323s 00:13:03.272 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:03.272 21:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.272 ************************************ 00:13:03.272 END TEST raid_state_function_test_sb 00:13:03.272 ************************************ 00:13:03.272 21:55:03 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:13:03.272 21:55:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:03.272 21:55:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:03.272 21:55:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.272 ************************************ 00:13:03.272 START TEST raid_superblock_test 00:13:03.272 ************************************ 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 
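Before the next test begins, the bdev_svc app used by the state-function test (pid 55866) is killed; raid_superblock_test then brings up its own instance on the same private RPC socket. A rough sketch of that harness pattern, assuming the binary and script paths shown in this log (the trap-based cleanup and polling loop are illustrative, not the exact autotest helpers):

    svc=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    sock=/var/tmp/spdk-raid.sock
    # Start the service with raid debug logging enabled and remember its pid
    "$svc" -r "$sock" -L bdev_raid &
    raid_pid=$!
    # Make sure the service goes away even if a later step fails
    trap 'kill $raid_pid 2>/dev/null; wait $raid_pid 2>/dev/null' EXIT
    # Poll until the RPC socket answers before issuing any bdev_* calls
    until /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done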
00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=56594 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 56594 /var/tmp/spdk-raid.sock 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 56594 ']' 00:13:03.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.272 21:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.272 [2024-05-14 21:55:03.807252] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:13:03.272 [2024-05-14 21:55:03.807489] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:03.837 EAL: TSC is not safe to use in SMP mode 00:13:03.837 EAL: TSC is not invariant 00:13:03.837 [2024-05-14 21:55:04.378589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.095 [2024-05-14 21:55:04.480815] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
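What follows in the trace is the volume build-up itself: three 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev carrying a fixed UUID, then assembled into a raid1 volume with an on-disk superblock (-s). Condensed into the underlying RPC calls visible below, it amounts to roughly:

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for i in 1 2 3; do
        # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, as the bdev dumps show)
        rpc bdev_malloc_create 32 512 -b "malloc$i"
        # Passthru wrapper so the test controls the base bdev's UUID
        rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # Assemble a 3-way raid1 volume; -s writes a superblock to the base bdevs
    rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s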
00:13:04.095 [2024-05-14 21:55:04.483427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.095 [2024-05-14 21:55:04.484367] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.095 [2024-05-14 21:55:04.484387] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.353 21:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:04.611 malloc1 00:13:04.611 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:04.869 [2024-05-14 21:55:05.398271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:04.869 [2024-05-14 21:55:05.398349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.869 [2024-05-14 21:55:05.398957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd54780 00:13:04.869 [2024-05-14 21:55:05.398989] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.869 [2024-05-14 21:55:05.399863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.869 [2024-05-14 21:55:05.399890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:04.869 pt1 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.869 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.869 21:55:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:05.126 malloc2 00:13:05.126 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:05.385 [2024-05-14 21:55:05.906286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:05.385 [2024-05-14 21:55:05.906367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.385 [2024-05-14 21:55:05.906413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd54c80 00:13:05.385 [2024-05-14 21:55:05.906422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.385 [2024-05-14 21:55:05.907065] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.385 [2024-05-14 21:55:05.907093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:05.385 pt2 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:05.385 21:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:05.643 malloc3 00:13:05.643 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:05.900 [2024-05-14 21:55:06.414309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:05.900 [2024-05-14 21:55:06.414391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.900 [2024-05-14 21:55:06.414436] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd55180 00:13:05.900 [2024-05-14 21:55:06.414445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.900 [2024-05-14 21:55:06.415102] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.900 [2024-05-14 21:55:06.415129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:05.900 pt3 00:13:05.900 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:05.900 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:05.900 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:06.159 [2024-05-14 21:55:06.646326] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:06.159 [2024-05-14 21:55:06.646942] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.159 [2024-05-14 21:55:06.646965] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.159 [2024-05-14 21:55:06.647019] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd59300 00:13:06.159 [2024-05-14 21:55:06.647026] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.159 [2024-05-14 21:55:06.647060] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cdb7e20 00:13:06.159 [2024-05-14 21:55:06.647135] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd59300 00:13:06.159 [2024-05-14 21:55:06.647140] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd59300 00:13:06.159 [2024-05-14 21:55:06.647167] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.159 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.429 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:06.429 "name": "raid_bdev1", 00:13:06.429 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:06.429 "strip_size_kb": 0, 00:13:06.429 "state": "online", 00:13:06.429 "raid_level": "raid1", 00:13:06.429 "superblock": true, 00:13:06.429 "num_base_bdevs": 3, 00:13:06.429 "num_base_bdevs_discovered": 3, 00:13:06.429 "num_base_bdevs_operational": 3, 00:13:06.429 "base_bdevs_list": [ 00:13:06.429 { 00:13:06.429 "name": "pt1", 00:13:06.429 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:06.429 "is_configured": true, 00:13:06.429 "data_offset": 2048, 00:13:06.429 "data_size": 63488 00:13:06.429 }, 00:13:06.429 { 00:13:06.429 "name": "pt2", 00:13:06.429 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:06.429 "is_configured": true, 00:13:06.429 "data_offset": 2048, 
00:13:06.429 "data_size": 63488 00:13:06.429 }, 00:13:06.429 { 00:13:06.429 "name": "pt3", 00:13:06.429 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:06.429 "is_configured": true, 00:13:06.429 "data_offset": 2048, 00:13:06.429 "data_size": 63488 00:13:06.429 } 00:13:06.429 ] 00:13:06.429 }' 00:13:06.429 21:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:06.429 21:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:06.705 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:06.963 [2024-05-14 21:55:07.514375] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.963 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:06.963 "name": "raid_bdev1", 00:13:06.963 "aliases": [ 00:13:06.963 "a02db7a6-123c-11ef-8c90-4585f0cfab08" 00:13:06.963 ], 00:13:06.963 "product_name": "Raid Volume", 00:13:06.963 "block_size": 512, 00:13:06.963 "num_blocks": 63488, 00:13:06.963 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:06.963 "assigned_rate_limits": { 00:13:06.963 "rw_ios_per_sec": 0, 00:13:06.963 "rw_mbytes_per_sec": 0, 00:13:06.963 "r_mbytes_per_sec": 0, 00:13:06.963 "w_mbytes_per_sec": 0 00:13:06.963 }, 00:13:06.963 "claimed": false, 00:13:06.963 "zoned": false, 00:13:06.963 "supported_io_types": { 00:13:06.963 "read": true, 00:13:06.963 "write": true, 00:13:06.963 "unmap": false, 00:13:06.963 "write_zeroes": true, 00:13:06.963 "flush": false, 00:13:06.963 "reset": true, 00:13:06.963 "compare": false, 00:13:06.963 "compare_and_write": false, 00:13:06.963 "abort": false, 00:13:06.963 "nvme_admin": false, 00:13:06.963 "nvme_io": false 00:13:06.963 }, 00:13:06.963 "memory_domains": [ 00:13:06.963 { 00:13:06.963 "dma_device_id": "system", 00:13:06.963 "dma_device_type": 1 00:13:06.963 }, 00:13:06.963 { 00:13:06.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.963 "dma_device_type": 2 00:13:06.963 }, 00:13:06.963 { 00:13:06.963 "dma_device_id": "system", 00:13:06.963 "dma_device_type": 1 00:13:06.963 }, 00:13:06.963 { 00:13:06.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.963 "dma_device_type": 2 00:13:06.963 }, 00:13:06.963 { 00:13:06.963 "dma_device_id": "system", 00:13:06.963 "dma_device_type": 1 00:13:06.963 }, 00:13:06.963 { 00:13:06.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.963 "dma_device_type": 2 00:13:06.963 } 00:13:06.963 ], 00:13:06.963 "driver_specific": { 00:13:06.963 "raid": { 00:13:06.963 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:06.963 "strip_size_kb": 0, 00:13:06.963 "state": "online", 00:13:06.963 "raid_level": "raid1", 00:13:06.963 
"superblock": true, 00:13:06.963 "num_base_bdevs": 3, 00:13:06.963 "num_base_bdevs_discovered": 3, 00:13:06.963 "num_base_bdevs_operational": 3, 00:13:06.963 "base_bdevs_list": [ 00:13:06.963 { 00:13:06.963 "name": "pt1", 00:13:06.963 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:06.963 "is_configured": true, 00:13:06.963 "data_offset": 2048, 00:13:06.963 "data_size": 63488 00:13:06.963 }, 00:13:06.963 { 00:13:06.963 "name": "pt2", 00:13:06.963 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:06.963 "is_configured": true, 00:13:06.963 "data_offset": 2048, 00:13:06.963 "data_size": 63488 00:13:06.963 }, 00:13:06.963 { 00:13:06.963 "name": "pt3", 00:13:06.963 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:06.963 "is_configured": true, 00:13:06.963 "data_offset": 2048, 00:13:06.963 "data_size": 63488 00:13:06.963 } 00:13:06.963 ] 00:13:06.963 } 00:13:06.963 } 00:13:06.963 }' 00:13:06.963 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.963 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:06.963 pt2 00:13:06.963 pt3' 00:13:06.963 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:06.963 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:06.963 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:07.222 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:07.222 "name": "pt1", 00:13:07.222 "aliases": [ 00:13:07.222 "0c646799-c261-f654-baf6-07394bb45f2d" 00:13:07.222 ], 00:13:07.222 "product_name": "passthru", 00:13:07.222 "block_size": 512, 00:13:07.222 "num_blocks": 65536, 00:13:07.222 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:07.222 "assigned_rate_limits": { 00:13:07.222 "rw_ios_per_sec": 0, 00:13:07.222 "rw_mbytes_per_sec": 0, 00:13:07.222 "r_mbytes_per_sec": 0, 00:13:07.222 "w_mbytes_per_sec": 0 00:13:07.222 }, 00:13:07.222 "claimed": true, 00:13:07.222 "claim_type": "exclusive_write", 00:13:07.222 "zoned": false, 00:13:07.222 "supported_io_types": { 00:13:07.222 "read": true, 00:13:07.222 "write": true, 00:13:07.222 "unmap": true, 00:13:07.222 "write_zeroes": true, 00:13:07.222 "flush": true, 00:13:07.222 "reset": true, 00:13:07.222 "compare": false, 00:13:07.222 "compare_and_write": false, 00:13:07.222 "abort": true, 00:13:07.222 "nvme_admin": false, 00:13:07.222 "nvme_io": false 00:13:07.222 }, 00:13:07.222 "memory_domains": [ 00:13:07.222 { 00:13:07.222 "dma_device_id": "system", 00:13:07.222 "dma_device_type": 1 00:13:07.222 }, 00:13:07.222 { 00:13:07.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.222 "dma_device_type": 2 00:13:07.222 } 00:13:07.222 ], 00:13:07.222 "driver_specific": { 00:13:07.222 "passthru": { 00:13:07.222 "name": "pt1", 00:13:07.223 "base_bdev_name": "malloc1" 00:13:07.223 } 00:13:07.223 } 00:13:07.223 }' 00:13:07.223 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.223 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.223 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:07.223 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.223 21:55:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.223 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.223 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:07.481 21:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:07.739 "name": "pt2", 00:13:07.739 "aliases": [ 00:13:07.739 "195a1937-4860-5c5c-85f1-feba42811e51" 00:13:07.739 ], 00:13:07.739 "product_name": "passthru", 00:13:07.739 "block_size": 512, 00:13:07.739 "num_blocks": 65536, 00:13:07.739 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:07.739 "assigned_rate_limits": { 00:13:07.739 "rw_ios_per_sec": 0, 00:13:07.739 "rw_mbytes_per_sec": 0, 00:13:07.739 "r_mbytes_per_sec": 0, 00:13:07.739 "w_mbytes_per_sec": 0 00:13:07.739 }, 00:13:07.739 "claimed": true, 00:13:07.739 "claim_type": "exclusive_write", 00:13:07.739 "zoned": false, 00:13:07.739 "supported_io_types": { 00:13:07.739 "read": true, 00:13:07.739 "write": true, 00:13:07.739 "unmap": true, 00:13:07.739 "write_zeroes": true, 00:13:07.739 "flush": true, 00:13:07.739 "reset": true, 00:13:07.739 "compare": false, 00:13:07.739 "compare_and_write": false, 00:13:07.739 "abort": true, 00:13:07.739 "nvme_admin": false, 00:13:07.739 "nvme_io": false 00:13:07.739 }, 00:13:07.739 "memory_domains": [ 00:13:07.739 { 00:13:07.739 "dma_device_id": "system", 00:13:07.739 "dma_device_type": 1 00:13:07.739 }, 00:13:07.739 { 00:13:07.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.739 "dma_device_type": 2 00:13:07.739 } 00:13:07.739 ], 00:13:07.739 "driver_specific": { 00:13:07.739 "passthru": { 00:13:07.739 "name": "pt2", 00:13:07.739 "base_bdev_name": "malloc2" 00:13:07.739 } 00:13:07.739 } 00:13:07.739 }' 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:07.739 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:07.998 "name": "pt3", 00:13:07.998 "aliases": [ 00:13:07.998 "bfae9780-2598-ee5f-87d6-57d7846fec29" 00:13:07.998 ], 00:13:07.998 "product_name": "passthru", 00:13:07.998 "block_size": 512, 00:13:07.998 "num_blocks": 65536, 00:13:07.998 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:07.998 "assigned_rate_limits": { 00:13:07.998 "rw_ios_per_sec": 0, 00:13:07.998 "rw_mbytes_per_sec": 0, 00:13:07.998 "r_mbytes_per_sec": 0, 00:13:07.998 "w_mbytes_per_sec": 0 00:13:07.998 }, 00:13:07.998 "claimed": true, 00:13:07.998 "claim_type": "exclusive_write", 00:13:07.998 "zoned": false, 00:13:07.998 "supported_io_types": { 00:13:07.998 "read": true, 00:13:07.998 "write": true, 00:13:07.998 "unmap": true, 00:13:07.998 "write_zeroes": true, 00:13:07.998 "flush": true, 00:13:07.998 "reset": true, 00:13:07.998 "compare": false, 00:13:07.998 "compare_and_write": false, 00:13:07.998 "abort": true, 00:13:07.998 "nvme_admin": false, 00:13:07.998 "nvme_io": false 00:13:07.998 }, 00:13:07.998 "memory_domains": [ 00:13:07.998 { 00:13:07.998 "dma_device_id": "system", 00:13:07.998 "dma_device_type": 1 00:13:07.998 }, 00:13:07.998 { 00:13:07.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.998 "dma_device_type": 2 00:13:07.998 } 00:13:07.998 ], 00:13:07.998 "driver_specific": { 00:13:07.998 "passthru": { 00:13:07.998 "name": "pt3", 00:13:07.998 "base_bdev_name": "malloc3" 00:13:07.998 } 00:13:07.998 } 00:13:07.998 }' 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:07.998 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:08.256 [2024-05-14 21:55:08.698385] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.256 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a02db7a6-123c-11ef-8c90-4585f0cfab08 00:13:08.256 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a02db7a6-123c-11ef-8c90-4585f0cfab08 ']' 00:13:08.256 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:08.515 [2024-05-14 21:55:08.974352] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.515 [2024-05-14 21:55:08.974387] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.515 [2024-05-14 21:55:08.974425] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.515 [2024-05-14 21:55:08.974443] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.515 [2024-05-14 21:55:08.974459] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd59300 name raid_bdev1, state offline 00:13:08.515 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:08.515 21:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.773 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:08.774 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:08.774 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:08.774 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:09.031 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:09.032 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:09.289 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:09.289 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:09.548 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:09.548 21:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # 
local es=0 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:09.806 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:10.064 [2024-05-14 21:55:10.398405] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:10.064 [2024-05-14 21:55:10.398981] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:10.064 [2024-05-14 21:55:10.399000] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:10.064 [2024-05-14 21:55:10.399015] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:10.064 [2024-05-14 21:55:10.399056] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:10.064 [2024-05-14 21:55:10.399068] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:10.064 [2024-05-14 21:55:10.399076] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.064 [2024-05-14 21:55:10.399081] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd59300 name raid_bdev1, state configuring 00:13:10.064 request: 00:13:10.064 { 00:13:10.064 "name": "raid_bdev1", 00:13:10.064 "raid_level": "raid1", 00:13:10.064 "base_bdevs": [ 00:13:10.064 "malloc1", 00:13:10.064 "malloc2", 00:13:10.064 "malloc3" 00:13:10.064 ], 00:13:10.064 "superblock": false, 00:13:10.064 "method": "bdev_raid_create", 00:13:10.064 "req_id": 1 00:13:10.064 } 00:13:10.064 Got JSON-RPC error response 00:13:10.064 response: 00:13:10.064 { 00:13:10.064 "code": -17, 00:13:10.064 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:10.064 } 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:10.064 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:10.321 [2024-05-14 21:55:10.858414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:10.321 [2024-05-14 21:55:10.858497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.321 [2024-05-14 21:55:10.858542] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd55180 00:13:10.321 [2024-05-14 21:55:10.858551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.321 [2024-05-14 21:55:10.859274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.321 [2024-05-14 21:55:10.859301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:10.321 [2024-05-14 21:55:10.859327] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:10.321 [2024-05-14 21:55:10.859339] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:10.321 pt1 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.321 21:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.597 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:10.597 "name": "raid_bdev1", 00:13:10.597 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:10.597 "strip_size_kb": 0, 00:13:10.597 "state": "configuring", 00:13:10.597 
"raid_level": "raid1", 00:13:10.597 "superblock": true, 00:13:10.597 "num_base_bdevs": 3, 00:13:10.597 "num_base_bdevs_discovered": 1, 00:13:10.597 "num_base_bdevs_operational": 3, 00:13:10.597 "base_bdevs_list": [ 00:13:10.597 { 00:13:10.597 "name": "pt1", 00:13:10.597 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:10.597 "is_configured": true, 00:13:10.597 "data_offset": 2048, 00:13:10.597 "data_size": 63488 00:13:10.597 }, 00:13:10.597 { 00:13:10.597 "name": null, 00:13:10.597 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:10.597 "is_configured": false, 00:13:10.597 "data_offset": 2048, 00:13:10.597 "data_size": 63488 00:13:10.597 }, 00:13:10.597 { 00:13:10.597 "name": null, 00:13:10.597 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:10.597 "is_configured": false, 00:13:10.597 "data_offset": 2048, 00:13:10.597 "data_size": 63488 00:13:10.597 } 00:13:10.597 ] 00:13:10.597 }' 00:13:10.597 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:10.597 21:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.165 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:11.165 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:11.165 [2024-05-14 21:55:11.662443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.165 [2024-05-14 21:55:11.662523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.165 [2024-05-14 21:55:11.662568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd54780 00:13:11.165 [2024-05-14 21:55:11.662576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.165 [2024-05-14 21:55:11.662710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.165 [2024-05-14 21:55:11.662729] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.165 [2024-05-14 21:55:11.662754] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:11.165 [2024-05-14 21:55:11.662764] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:11.165 pt2 00:13:11.165 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:11.424 [2024-05-14 21:55:11.922454] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:11.424 21:55:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.424 21:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.682 21:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:11.682 "name": "raid_bdev1", 00:13:11.682 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:11.682 "strip_size_kb": 0, 00:13:11.682 "state": "configuring", 00:13:11.682 "raid_level": "raid1", 00:13:11.682 "superblock": true, 00:13:11.682 "num_base_bdevs": 3, 00:13:11.682 "num_base_bdevs_discovered": 1, 00:13:11.682 "num_base_bdevs_operational": 3, 00:13:11.682 "base_bdevs_list": [ 00:13:11.682 { 00:13:11.682 "name": "pt1", 00:13:11.682 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:11.682 "is_configured": true, 00:13:11.682 "data_offset": 2048, 00:13:11.682 "data_size": 63488 00:13:11.682 }, 00:13:11.682 { 00:13:11.682 "name": null, 00:13:11.682 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:11.682 "is_configured": false, 00:13:11.682 "data_offset": 2048, 00:13:11.682 "data_size": 63488 00:13:11.682 }, 00:13:11.682 { 00:13:11.682 "name": null, 00:13:11.682 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:11.682 "is_configured": false, 00:13:11.682 "data_offset": 2048, 00:13:11.682 "data_size": 63488 00:13:11.682 } 00:13:11.682 ] 00:13:11.682 }' 00:13:11.682 21:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:11.682 21:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.939 21:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:11.939 21:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:11.939 21:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.196 [2024-05-14 21:55:12.706469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.196 [2024-05-14 21:55:12.706537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.197 [2024-05-14 21:55:12.706567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd54780 00:13:12.197 [2024-05-14 21:55:12.706576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.197 [2024-05-14 21:55:12.706693] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.197 [2024-05-14 21:55:12.706705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.197 [2024-05-14 21:55:12.706730] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:12.197 [2024-05-14 21:55:12.706739] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.197 pt2 00:13:12.197 21:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:12.197 21:55:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.197 21:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:12.455 [2024-05-14 21:55:12.986471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:12.455 [2024-05-14 21:55:12.986545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.455 [2024-05-14 21:55:12.986573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd55400 00:13:12.455 [2024-05-14 21:55:12.986591] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.455 [2024-05-14 21:55:12.986705] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.455 [2024-05-14 21:55:12.986717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:12.455 [2024-05-14 21:55:12.986739] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:12.455 [2024-05-14 21:55:12.986748] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.455 [2024-05-14 21:55:12.986778] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd59300 00:13:12.455 [2024-05-14 21:55:12.986783] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:12.455 [2024-05-14 21:55:12.986803] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cdb7e20 00:13:12.455 [2024-05-14 21:55:12.986858] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd59300 00:13:12.455 [2024-05-14 21:55:12.986875] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd59300 00:13:12.455 [2024-05-14 21:55:12.986897] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.455 pt3 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:12.455 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.714 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:12.714 "name": "raid_bdev1", 00:13:12.714 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:12.714 "strip_size_kb": 0, 00:13:12.714 "state": "online", 00:13:12.714 "raid_level": "raid1", 00:13:12.714 "superblock": true, 00:13:12.714 "num_base_bdevs": 3, 00:13:12.714 "num_base_bdevs_discovered": 3, 00:13:12.714 "num_base_bdevs_operational": 3, 00:13:12.714 "base_bdevs_list": [ 00:13:12.714 { 00:13:12.714 "name": "pt1", 00:13:12.714 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:12.714 "is_configured": true, 00:13:12.714 "data_offset": 2048, 00:13:12.714 "data_size": 63488 00:13:12.714 }, 00:13:12.714 { 00:13:12.714 "name": "pt2", 00:13:12.714 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:12.714 "is_configured": true, 00:13:12.714 "data_offset": 2048, 00:13:12.714 "data_size": 63488 00:13:12.714 }, 00:13:12.714 { 00:13:12.714 "name": "pt3", 00:13:12.714 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:12.714 "is_configured": true, 00:13:12.714 "data_offset": 2048, 00:13:12.714 "data_size": 63488 00:13:12.714 } 00:13:12.714 ] 00:13:12.714 }' 00:13:12.714 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:12.714 21:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:13.278 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:13.537 [2024-05-14 21:55:13.874531] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.537 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:13.537 "name": "raid_bdev1", 00:13:13.537 "aliases": [ 00:13:13.537 "a02db7a6-123c-11ef-8c90-4585f0cfab08" 00:13:13.537 ], 00:13:13.537 "product_name": "Raid Volume", 00:13:13.537 "block_size": 512, 00:13:13.537 "num_blocks": 63488, 00:13:13.537 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:13.537 "assigned_rate_limits": { 00:13:13.537 "rw_ios_per_sec": 0, 00:13:13.537 "rw_mbytes_per_sec": 0, 00:13:13.537 "r_mbytes_per_sec": 0, 00:13:13.537 "w_mbytes_per_sec": 0 00:13:13.537 }, 00:13:13.537 "claimed": false, 00:13:13.537 "zoned": false, 00:13:13.537 "supported_io_types": { 00:13:13.537 "read": true, 00:13:13.537 "write": true, 00:13:13.537 "unmap": false, 00:13:13.537 "write_zeroes": true, 00:13:13.537 "flush": false, 00:13:13.537 "reset": true, 00:13:13.537 "compare": false, 00:13:13.537 "compare_and_write": false, 00:13:13.537 "abort": false, 
00:13:13.537 "nvme_admin": false, 00:13:13.537 "nvme_io": false 00:13:13.537 }, 00:13:13.537 "memory_domains": [ 00:13:13.537 { 00:13:13.537 "dma_device_id": "system", 00:13:13.537 "dma_device_type": 1 00:13:13.537 }, 00:13:13.537 { 00:13:13.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.537 "dma_device_type": 2 00:13:13.537 }, 00:13:13.537 { 00:13:13.537 "dma_device_id": "system", 00:13:13.537 "dma_device_type": 1 00:13:13.537 }, 00:13:13.537 { 00:13:13.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.537 "dma_device_type": 2 00:13:13.537 }, 00:13:13.537 { 00:13:13.537 "dma_device_id": "system", 00:13:13.537 "dma_device_type": 1 00:13:13.537 }, 00:13:13.537 { 00:13:13.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.537 "dma_device_type": 2 00:13:13.537 } 00:13:13.537 ], 00:13:13.537 "driver_specific": { 00:13:13.537 "raid": { 00:13:13.537 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:13.537 "strip_size_kb": 0, 00:13:13.537 "state": "online", 00:13:13.537 "raid_level": "raid1", 00:13:13.537 "superblock": true, 00:13:13.537 "num_base_bdevs": 3, 00:13:13.537 "num_base_bdevs_discovered": 3, 00:13:13.537 "num_base_bdevs_operational": 3, 00:13:13.537 "base_bdevs_list": [ 00:13:13.537 { 00:13:13.537 "name": "pt1", 00:13:13.537 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:13.537 "is_configured": true, 00:13:13.538 "data_offset": 2048, 00:13:13.538 "data_size": 63488 00:13:13.538 }, 00:13:13.538 { 00:13:13.538 "name": "pt2", 00:13:13.538 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:13.538 "is_configured": true, 00:13:13.538 "data_offset": 2048, 00:13:13.538 "data_size": 63488 00:13:13.538 }, 00:13:13.538 { 00:13:13.538 "name": "pt3", 00:13:13.538 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:13.538 "is_configured": true, 00:13:13.538 "data_offset": 2048, 00:13:13.538 "data_size": 63488 00:13:13.538 } 00:13:13.538 ] 00:13:13.538 } 00:13:13.538 } 00:13:13.538 }' 00:13:13.538 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.538 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:13.538 pt2 00:13:13.538 pt3' 00:13:13.538 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:13.538 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:13.538 21:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:13.796 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:13.796 "name": "pt1", 00:13:13.796 "aliases": [ 00:13:13.796 "0c646799-c261-f654-baf6-07394bb45f2d" 00:13:13.796 ], 00:13:13.796 "product_name": "passthru", 00:13:13.796 "block_size": 512, 00:13:13.796 "num_blocks": 65536, 00:13:13.796 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:13.796 "assigned_rate_limits": { 00:13:13.796 "rw_ios_per_sec": 0, 00:13:13.796 "rw_mbytes_per_sec": 0, 00:13:13.796 "r_mbytes_per_sec": 0, 00:13:13.796 "w_mbytes_per_sec": 0 00:13:13.796 }, 00:13:13.796 "claimed": true, 00:13:13.796 "claim_type": "exclusive_write", 00:13:13.796 "zoned": false, 00:13:13.796 "supported_io_types": { 00:13:13.796 "read": true, 00:13:13.796 "write": true, 00:13:13.796 "unmap": true, 00:13:13.796 "write_zeroes": true, 00:13:13.796 "flush": true, 00:13:13.796 "reset": true, 00:13:13.796 
"compare": false, 00:13:13.796 "compare_and_write": false, 00:13:13.796 "abort": true, 00:13:13.796 "nvme_admin": false, 00:13:13.796 "nvme_io": false 00:13:13.796 }, 00:13:13.796 "memory_domains": [ 00:13:13.796 { 00:13:13.796 "dma_device_id": "system", 00:13:13.796 "dma_device_type": 1 00:13:13.796 }, 00:13:13.796 { 00:13:13.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.796 "dma_device_type": 2 00:13:13.796 } 00:13:13.796 ], 00:13:13.796 "driver_specific": { 00:13:13.796 "passthru": { 00:13:13.796 "name": "pt1", 00:13:13.796 "base_bdev_name": "malloc1" 00:13:13.797 } 00:13:13.797 } 00:13:13.797 }' 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:13.797 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:14.055 "name": "pt2", 00:13:14.055 "aliases": [ 00:13:14.055 "195a1937-4860-5c5c-85f1-feba42811e51" 00:13:14.055 ], 00:13:14.055 "product_name": "passthru", 00:13:14.055 "block_size": 512, 00:13:14.055 "num_blocks": 65536, 00:13:14.055 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:14.055 "assigned_rate_limits": { 00:13:14.055 "rw_ios_per_sec": 0, 00:13:14.055 "rw_mbytes_per_sec": 0, 00:13:14.055 "r_mbytes_per_sec": 0, 00:13:14.055 "w_mbytes_per_sec": 0 00:13:14.055 }, 00:13:14.055 "claimed": true, 00:13:14.055 "claim_type": "exclusive_write", 00:13:14.055 "zoned": false, 00:13:14.055 "supported_io_types": { 00:13:14.055 "read": true, 00:13:14.055 "write": true, 00:13:14.055 "unmap": true, 00:13:14.055 "write_zeroes": true, 00:13:14.055 "flush": true, 00:13:14.055 "reset": true, 00:13:14.055 "compare": false, 00:13:14.055 "compare_and_write": false, 00:13:14.055 "abort": true, 00:13:14.055 "nvme_admin": false, 00:13:14.055 "nvme_io": false 00:13:14.055 }, 00:13:14.055 "memory_domains": [ 00:13:14.055 { 00:13:14.055 "dma_device_id": "system", 00:13:14.055 "dma_device_type": 1 00:13:14.055 }, 00:13:14.055 { 00:13:14.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.055 "dma_device_type": 2 00:13:14.055 } 00:13:14.055 ], 
00:13:14.055 "driver_specific": { 00:13:14.055 "passthru": { 00:13:14.055 "name": "pt2", 00:13:14.055 "base_bdev_name": "malloc2" 00:13:14.055 } 00:13:14.055 } 00:13:14.055 }' 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:14.055 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:14.313 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:14.313 "name": "pt3", 00:13:14.313 "aliases": [ 00:13:14.313 "bfae9780-2598-ee5f-87d6-57d7846fec29" 00:13:14.313 ], 00:13:14.313 "product_name": "passthru", 00:13:14.313 "block_size": 512, 00:13:14.313 "num_blocks": 65536, 00:13:14.313 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:14.313 "assigned_rate_limits": { 00:13:14.313 "rw_ios_per_sec": 0, 00:13:14.313 "rw_mbytes_per_sec": 0, 00:13:14.313 "r_mbytes_per_sec": 0, 00:13:14.313 "w_mbytes_per_sec": 0 00:13:14.313 }, 00:13:14.313 "claimed": true, 00:13:14.313 "claim_type": "exclusive_write", 00:13:14.313 "zoned": false, 00:13:14.313 "supported_io_types": { 00:13:14.313 "read": true, 00:13:14.313 "write": true, 00:13:14.313 "unmap": true, 00:13:14.313 "write_zeroes": true, 00:13:14.313 "flush": true, 00:13:14.313 "reset": true, 00:13:14.313 "compare": false, 00:13:14.313 "compare_and_write": false, 00:13:14.313 "abort": true, 00:13:14.313 "nvme_admin": false, 00:13:14.313 "nvme_io": false 00:13:14.313 }, 00:13:14.313 "memory_domains": [ 00:13:14.313 { 00:13:14.313 "dma_device_id": "system", 00:13:14.313 "dma_device_type": 1 00:13:14.313 }, 00:13:14.313 { 00:13:14.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.313 "dma_device_type": 2 00:13:14.313 } 00:13:14.313 ], 00:13:14.313 "driver_specific": { 00:13:14.313 "passthru": { 00:13:14.313 "name": "pt3", 00:13:14.313 "base_bdev_name": "malloc3" 00:13:14.313 } 00:13:14.313 } 00:13:14.313 }' 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 
-- # [[ 512 == 512 ]] 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:14.314 21:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:14.572 [2024-05-14 21:55:15.078548] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.572 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a02db7a6-123c-11ef-8c90-4585f0cfab08 '!=' a02db7a6-123c-11ef-8c90-4585f0cfab08 ']' 00:13:14.572 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:14.572 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:14.572 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:13:14.572 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:14.831 [2024-05-14 21:55:15.354595] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.831 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.089 21:55:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:15.089 "name": "raid_bdev1", 00:13:15.089 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:15.089 "strip_size_kb": 0, 00:13:15.089 "state": "online", 00:13:15.089 "raid_level": "raid1", 00:13:15.089 "superblock": true, 00:13:15.089 "num_base_bdevs": 3, 00:13:15.089 "num_base_bdevs_discovered": 2, 00:13:15.089 "num_base_bdevs_operational": 2, 00:13:15.089 "base_bdevs_list": [ 00:13:15.089 { 00:13:15.089 "name": null, 00:13:15.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.089 "is_configured": false, 00:13:15.089 "data_offset": 2048, 00:13:15.089 "data_size": 63488 00:13:15.089 }, 00:13:15.089 { 00:13:15.089 "name": "pt2", 00:13:15.089 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:15.089 "is_configured": true, 00:13:15.089 "data_offset": 2048, 00:13:15.089 "data_size": 63488 00:13:15.089 }, 00:13:15.089 { 00:13:15.089 "name": "pt3", 00:13:15.089 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:15.089 "is_configured": true, 00:13:15.089 "data_offset": 2048, 00:13:15.089 "data_size": 63488 00:13:15.089 } 00:13:15.089 ] 00:13:15.089 }' 00:13:15.089 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:15.089 21:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.653 21:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:15.653 [2024-05-14 21:55:16.162596] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.653 [2024-05-14 21:55:16.162619] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.653 [2024-05-14 21:55:16.162641] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.653 [2024-05-14 21:55:16.162656] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.653 [2024-05-14 21:55:16.162661] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd59300 name raid_bdev1, state offline 00:13:15.653 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.653 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:15.911 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:15.911 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:15.911 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:15.911 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:15.911 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:16.170 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:16.170 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:16.170 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:16.427 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:16.427 21:55:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:16.427 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:16.427 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:16.427 21:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:16.694 [2024-05-14 21:55:17.162656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:16.694 [2024-05-14 21:55:17.162719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.694 [2024-05-14 21:55:17.162747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd55400 00:13:16.694 [2024-05-14 21:55:17.162756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.694 [2024-05-14 21:55:17.163393] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.694 [2024-05-14 21:55:17.163419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:16.694 [2024-05-14 21:55:17.163446] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:16.694 [2024-05-14 21:55:17.163458] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:16.694 pt2 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.694 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.951 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:16.951 "name": "raid_bdev1", 00:13:16.951 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:16.951 "strip_size_kb": 0, 00:13:16.951 "state": "configuring", 00:13:16.951 "raid_level": "raid1", 00:13:16.951 "superblock": true, 00:13:16.951 "num_base_bdevs": 3, 00:13:16.951 "num_base_bdevs_discovered": 1, 00:13:16.951 "num_base_bdevs_operational": 2, 00:13:16.951 "base_bdevs_list": [ 00:13:16.951 { 00:13:16.951 "name": null, 00:13:16.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.951 
"is_configured": false, 00:13:16.951 "data_offset": 2048, 00:13:16.951 "data_size": 63488 00:13:16.951 }, 00:13:16.951 { 00:13:16.951 "name": "pt2", 00:13:16.951 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:16.951 "is_configured": true, 00:13:16.951 "data_offset": 2048, 00:13:16.951 "data_size": 63488 00:13:16.951 }, 00:13:16.951 { 00:13:16.951 "name": null, 00:13:16.951 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:16.951 "is_configured": false, 00:13:16.951 "data_offset": 2048, 00:13:16.951 "data_size": 63488 00:13:16.951 } 00:13:16.951 ] 00:13:16.951 }' 00:13:16.951 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:16.952 21:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.210 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:17.210 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:17.210 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:17.210 21:55:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:17.467 [2024-05-14 21:55:18.042701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:17.467 [2024-05-14 21:55:18.042783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.467 [2024-05-14 21:55:18.042812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd54c80 00:13:17.467 [2024-05-14 21:55:18.042821] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.467 [2024-05-14 21:55:18.042949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.467 [2024-05-14 21:55:18.042969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:17.467 [2024-05-14 21:55:18.043003] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:17.467 [2024-05-14 21:55:18.043011] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:17.467 [2024-05-14 21:55:18.043049] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd59300 00:13:17.468 [2024-05-14 21:55:18.043053] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:17.468 [2024-05-14 21:55:18.043073] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cdb7e20 00:13:17.468 [2024-05-14 21:55:18.043119] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd59300 00:13:17.468 [2024-05-14 21:55:18.043123] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd59300 00:13:17.468 [2024-05-14 21:55:18.043145] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.468 pt3 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:17.725 21:55:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.725 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.983 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:17.983 "name": "raid_bdev1", 00:13:17.983 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:17.983 "strip_size_kb": 0, 00:13:17.983 "state": "online", 00:13:17.983 "raid_level": "raid1", 00:13:17.983 "superblock": true, 00:13:17.983 "num_base_bdevs": 3, 00:13:17.983 "num_base_bdevs_discovered": 2, 00:13:17.983 "num_base_bdevs_operational": 2, 00:13:17.983 "base_bdevs_list": [ 00:13:17.983 { 00:13:17.983 "name": null, 00:13:17.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.983 "is_configured": false, 00:13:17.983 "data_offset": 2048, 00:13:17.983 "data_size": 63488 00:13:17.983 }, 00:13:17.983 { 00:13:17.983 "name": "pt2", 00:13:17.983 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:17.983 "is_configured": true, 00:13:17.983 "data_offset": 2048, 00:13:17.983 "data_size": 63488 00:13:17.983 }, 00:13:17.983 { 00:13:17.983 "name": "pt3", 00:13:17.983 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:17.983 "is_configured": true, 00:13:17.983 "data_offset": 2048, 00:13:17.983 "data_size": 63488 00:13:17.983 } 00:13:17.983 ] 00:13:17.983 }' 00:13:17.983 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:17.983 21:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.240 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 3 -gt 2 ']' 00:13:18.240 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:18.498 [2024-05-14 21:55:18.938799] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.498 [2024-05-14 21:55:18.938827] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.498 [2024-05-14 21:55:18.938851] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.498 [2024-05-14 21:55:18.938865] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.498 [2024-05-14 21:55:18.938869] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd59300 name raid_bdev1, state offline 00:13:18.498 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.498 21:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:13:18.756 
21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:13:18.756 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:13:18.756 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:19.014 [2024-05-14 21:55:19.570902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:19.014 [2024-05-14 21:55:19.570970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.014 [2024-05-14 21:55:19.570999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd54780 00:13:19.014 [2024-05-14 21:55:19.571008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.014 [2024-05-14 21:55:19.571647] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.014 [2024-05-14 21:55:19.571673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:19.014 [2024-05-14 21:55:19.571699] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:19.014 [2024-05-14 21:55:19.571716] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:19.014 pt1 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.014 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.282 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:19.282 "name": "raid_bdev1", 00:13:19.282 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:19.282 "strip_size_kb": 0, 00:13:19.282 "state": "configuring", 00:13:19.282 "raid_level": "raid1", 00:13:19.282 "superblock": true, 00:13:19.282 "num_base_bdevs": 3, 00:13:19.282 "num_base_bdevs_discovered": 1, 00:13:19.282 "num_base_bdevs_operational": 3, 00:13:19.282 "base_bdevs_list": [ 00:13:19.282 { 00:13:19.282 "name": "pt1", 00:13:19.282 "uuid": "0c646799-c261-f654-baf6-07394bb45f2d", 00:13:19.282 "is_configured": true, 00:13:19.282 "data_offset": 2048, 00:13:19.282 "data_size": 63488 00:13:19.282 }, 00:13:19.282 
{ 00:13:19.282 "name": null, 00:13:19.283 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:19.283 "is_configured": false, 00:13:19.283 "data_offset": 2048, 00:13:19.283 "data_size": 63488 00:13:19.283 }, 00:13:19.283 { 00:13:19.283 "name": null, 00:13:19.283 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:19.283 "is_configured": false, 00:13:19.283 "data_offset": 2048, 00:13:19.283 "data_size": 63488 00:13:19.283 } 00:13:19.283 ] 00:13:19.283 }' 00:13:19.283 21:55:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:19.283 21:55:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:13:19.847 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:13:19.847 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:19.847 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:13:19.847 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:13:19.847 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:20.106 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:13:20.106 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:13:20.106 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=2 00:13:20.106 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:20.364 [2024-05-14 21:55:20.894971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:20.364 [2024-05-14 21:55:20.895045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.364 [2024-05-14 21:55:20.895073] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd54c80 00:13:20.364 [2024-05-14 21:55:20.895081] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.364 [2024-05-14 21:55:20.895219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.364 [2024-05-14 21:55:20.895231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:20.364 [2024-05-14 21:55:20.895255] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:20.364 [2024-05-14 21:55:20.895261] bdev_raid.c:3398:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:20.364 [2024-05-14 21:55:20.895264] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.364 [2024-05-14 21:55:20.895270] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd59300 name raid_bdev1, state configuring 00:13:20.364 [2024-05-14 21:55:20.895284] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:20.364 pt3 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:20.364 21:55:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.364 21:55:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.622 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.622 "name": "raid_bdev1", 00:13:20.622 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:20.622 "strip_size_kb": 0, 00:13:20.622 "state": "configuring", 00:13:20.622 "raid_level": "raid1", 00:13:20.622 "superblock": true, 00:13:20.622 "num_base_bdevs": 3, 00:13:20.622 "num_base_bdevs_discovered": 1, 00:13:20.622 "num_base_bdevs_operational": 2, 00:13:20.622 "base_bdevs_list": [ 00:13:20.622 { 00:13:20.622 "name": null, 00:13:20.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.622 "is_configured": false, 00:13:20.622 "data_offset": 2048, 00:13:20.622 "data_size": 63488 00:13:20.622 }, 00:13:20.622 { 00:13:20.622 "name": null, 00:13:20.622 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:20.622 "is_configured": false, 00:13:20.622 "data_offset": 2048, 00:13:20.622 "data_size": 63488 00:13:20.622 }, 00:13:20.622 { 00:13:20.622 "name": "pt3", 00:13:20.622 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:20.622 "is_configured": true, 00:13:20.622 "data_offset": 2048, 00:13:20.622 "data_size": 63488 00:13:20.622 } 00:13:20.622 ] 00:13:20.622 }' 00:13:20.622 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.622 21:55:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:21.189 [2024-05-14 21:55:21.715097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:21.189 [2024-05-14 21:55:21.715175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.189 [2024-05-14 21:55:21.715218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd55400 00:13:21.189 [2024-05-14 21:55:21.715227] vbdev_passthru.c: 691:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:21.189 [2024-05-14 21:55:21.715351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.189 [2024-05-14 21:55:21.715364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:21.189 [2024-05-14 21:55:21.715388] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:21.189 [2024-05-14 21:55:21.715396] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:21.189 [2024-05-14 21:55:21.715424] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd59300 00:13:21.189 [2024-05-14 21:55:21.715428] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:21.189 [2024-05-14 21:55:21.715448] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cdb7e20 00:13:21.189 [2024-05-14 21:55:21.715494] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd59300 00:13:21.189 [2024-05-14 21:55:21.715499] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd59300 00:13:21.189 [2024-05-14 21:55:21.715519] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.189 pt2 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.189 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.447 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:21.447 "name": "raid_bdev1", 00:13:21.447 "uuid": "a02db7a6-123c-11ef-8c90-4585f0cfab08", 00:13:21.447 "strip_size_kb": 0, 00:13:21.447 "state": "online", 00:13:21.447 "raid_level": "raid1", 00:13:21.447 "superblock": true, 00:13:21.447 "num_base_bdevs": 3, 00:13:21.447 "num_base_bdevs_discovered": 2, 00:13:21.447 "num_base_bdevs_operational": 2, 00:13:21.447 "base_bdevs_list": [ 00:13:21.447 { 00:13:21.447 "name": null, 00:13:21.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.447 "is_configured": false, 
00:13:21.447 "data_offset": 2048, 00:13:21.447 "data_size": 63488 00:13:21.447 }, 00:13:21.447 { 00:13:21.447 "name": "pt2", 00:13:21.447 "uuid": "195a1937-4860-5c5c-85f1-feba42811e51", 00:13:21.447 "is_configured": true, 00:13:21.447 "data_offset": 2048, 00:13:21.447 "data_size": 63488 00:13:21.447 }, 00:13:21.447 { 00:13:21.447 "name": "pt3", 00:13:21.447 "uuid": "bfae9780-2598-ee5f-87d6-57d7846fec29", 00:13:21.447 "is_configured": true, 00:13:21.447 "data_offset": 2048, 00:13:21.447 "data_size": 63488 00:13:21.447 } 00:13:21.447 ] 00:13:21.447 }' 00:13:21.447 21:55:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:21.447 21:55:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:13:22.020 [2024-05-14 21:55:22.575165] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' a02db7a6-123c-11ef-8c90-4585f0cfab08 '!=' a02db7a6-123c-11ef-8c90-4585f0cfab08 ']' 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 56594 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 56594 ']' 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 56594 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 56594 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56594' 00:13:22.020 killing process with pid 56594 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 56594 00:13:22.020 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 56594 00:13:22.020 [2024-05-14 21:55:22.607919] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.020 [2024-05-14 21:55:22.607974] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.020 [2024-05-14 21:55:22.608003] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.020 [2024-05-14 21:55:22.608012] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd59300 name raid_bdev1, state offline 00:13:22.277 [2024-05-14 21:55:22.628135] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.277 21:55:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:13:22.277 00:13:22.277 real 0m19.047s 00:13:22.277 user 0m34.600s 00:13:22.277 sys 0m2.661s 00:13:22.277 21:55:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.277 21:55:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.277 ************************************ 00:13:22.277 END TEST raid_superblock_test 00:13:22.277 ************************************ 00:13:22.536 21:55:22 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:13:22.536 21:55:22 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:13:22.536 21:55:22 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:22.536 21:55:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:22.536 21:55:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.536 21:55:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.536 ************************************ 00:13:22.536 START TEST raid_state_function_test 00:13:22.536 ************************************ 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:22.536 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:22.537 21:55:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=57172 00:13:22.537 Process raid pid: 57172 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 57172' 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 57172 /var/tmp/spdk-raid.sock 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 57172 ']' 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:22.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:22.537 21:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.537 [2024-05-14 21:55:22.907776] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:13:22.537 [2024-05-14 21:55:22.908018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:23.103 EAL: TSC is not safe to use in SMP mode 00:13:23.103 EAL: TSC is not invariant 00:13:23.103 [2024-05-14 21:55:23.469489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.103 [2024-05-14 21:55:23.562348] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:23.103 [2024-05-14 21:55:23.564620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.103 [2024-05-14 21:55:23.565460] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.103 [2024-05-14 21:55:23.565475] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.669 21:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:23.669 21:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:23.669 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:23.928 [2024-05-14 21:55:24.282060] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.928 [2024-05-14 21:55:24.282124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.928 [2024-05-14 21:55:24.282130] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.928 [2024-05-14 21:55:24.282140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.928 [2024-05-14 21:55:24.282143] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.928 [2024-05-14 21:55:24.282151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.928 [2024-05-14 21:55:24.282154] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:23.928 [2024-05-14 21:55:24.282162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.928 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.187 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:24.187 "name": "Existed_Raid", 00:13:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.187 
"strip_size_kb": 64, 00:13:24.187 "state": "configuring", 00:13:24.187 "raid_level": "raid0", 00:13:24.187 "superblock": false, 00:13:24.187 "num_base_bdevs": 4, 00:13:24.187 "num_base_bdevs_discovered": 0, 00:13:24.187 "num_base_bdevs_operational": 4, 00:13:24.187 "base_bdevs_list": [ 00:13:24.187 { 00:13:24.187 "name": "BaseBdev1", 00:13:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.187 "is_configured": false, 00:13:24.187 "data_offset": 0, 00:13:24.187 "data_size": 0 00:13:24.187 }, 00:13:24.187 { 00:13:24.187 "name": "BaseBdev2", 00:13:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.187 "is_configured": false, 00:13:24.187 "data_offset": 0, 00:13:24.187 "data_size": 0 00:13:24.187 }, 00:13:24.187 { 00:13:24.187 "name": "BaseBdev3", 00:13:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.187 "is_configured": false, 00:13:24.187 "data_offset": 0, 00:13:24.187 "data_size": 0 00:13:24.187 }, 00:13:24.187 { 00:13:24.187 "name": "BaseBdev4", 00:13:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.187 "is_configured": false, 00:13:24.187 "data_offset": 0, 00:13:24.187 "data_size": 0 00:13:24.187 } 00:13:24.187 ] 00:13:24.187 }' 00:13:24.187 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:24.187 21:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.445 21:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:24.703 [2024-05-14 21:55:25.146179] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:24.703 [2024-05-14 21:55:25.146211] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f6300 name Existed_Raid, state configuring 00:13:24.703 21:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:24.962 [2024-05-14 21:55:25.410201] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:24.962 [2024-05-14 21:55:25.410267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:24.962 [2024-05-14 21:55:25.410274] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.962 [2024-05-14 21:55:25.410283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.962 [2024-05-14 21:55:25.410287] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:24.962 [2024-05-14 21:55:25.410294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:24.962 [2024-05-14 21:55:25.410298] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:24.962 [2024-05-14 21:55:25.410305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:24.962 21:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:25.220 [2024-05-14 21:55:25.707263] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.220 BaseBdev1 00:13:25.220 21:55:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:25.220 21:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:25.220 21:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:25.220 21:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:25.220 21:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:25.220 21:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:25.220 21:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.478 21:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.044 [ 00:13:26.044 { 00:13:26.044 "name": "BaseBdev1", 00:13:26.044 "aliases": [ 00:13:26.044 "ab8a0761-123c-11ef-8c90-4585f0cfab08" 00:13:26.044 ], 00:13:26.044 "product_name": "Malloc disk", 00:13:26.044 "block_size": 512, 00:13:26.044 "num_blocks": 65536, 00:13:26.044 "uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:26.044 "assigned_rate_limits": { 00:13:26.044 "rw_ios_per_sec": 0, 00:13:26.045 "rw_mbytes_per_sec": 0, 00:13:26.045 "r_mbytes_per_sec": 0, 00:13:26.045 "w_mbytes_per_sec": 0 00:13:26.045 }, 00:13:26.045 "claimed": true, 00:13:26.045 "claim_type": "exclusive_write", 00:13:26.045 "zoned": false, 00:13:26.045 "supported_io_types": { 00:13:26.045 "read": true, 00:13:26.045 "write": true, 00:13:26.045 "unmap": true, 00:13:26.045 "write_zeroes": true, 00:13:26.045 "flush": true, 00:13:26.045 "reset": true, 00:13:26.045 "compare": false, 00:13:26.045 "compare_and_write": false, 00:13:26.045 "abort": true, 00:13:26.045 "nvme_admin": false, 00:13:26.045 "nvme_io": false 00:13:26.045 }, 00:13:26.045 "memory_domains": [ 00:13:26.045 { 00:13:26.045 "dma_device_id": "system", 00:13:26.045 "dma_device_type": 1 00:13:26.045 }, 00:13:26.045 { 00:13:26.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.045 "dma_device_type": 2 00:13:26.045 } 00:13:26.045 ], 00:13:26.045 "driver_specific": {} 00:13:26.045 } 00:13:26.045 ] 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.045 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.304 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:26.304 "name": "Existed_Raid", 00:13:26.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.304 "strip_size_kb": 64, 00:13:26.304 "state": "configuring", 00:13:26.304 "raid_level": "raid0", 00:13:26.304 "superblock": false, 00:13:26.304 "num_base_bdevs": 4, 00:13:26.304 "num_base_bdevs_discovered": 1, 00:13:26.304 "num_base_bdevs_operational": 4, 00:13:26.304 "base_bdevs_list": [ 00:13:26.304 { 00:13:26.304 "name": "BaseBdev1", 00:13:26.304 "uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:26.304 "is_configured": true, 00:13:26.304 "data_offset": 0, 00:13:26.304 "data_size": 65536 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "name": "BaseBdev2", 00:13:26.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.304 "is_configured": false, 00:13:26.304 "data_offset": 0, 00:13:26.304 "data_size": 0 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "name": "BaseBdev3", 00:13:26.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.304 "is_configured": false, 00:13:26.304 "data_offset": 0, 00:13:26.304 "data_size": 0 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "name": "BaseBdev4", 00:13:26.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.304 "is_configured": false, 00:13:26.304 "data_offset": 0, 00:13:26.304 "data_size": 0 00:13:26.304 } 00:13:26.304 ] 00:13:26.304 }' 00:13:26.304 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:26.304 21:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.562 21:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:26.821 [2024-05-14 21:55:27.242275] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.821 [2024-05-14 21:55:27.242314] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f6300 name Existed_Raid, state configuring 00:13:26.821 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:27.079 [2024-05-14 21:55:27.482297] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.079 [2024-05-14 21:55:27.483100] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.079 [2024-05-14 21:55:27.483144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.079 [2024-05-14 21:55:27.483149] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:27.079 [2024-05-14 21:55:27.483158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:27.079 [2024-05-14 21:55:27.483161] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev4 00:13:27.079 [2024-05-14 21:55:27.483169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.079 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.339 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:27.339 "name": "Existed_Raid", 00:13:27.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.339 "strip_size_kb": 64, 00:13:27.339 "state": "configuring", 00:13:27.339 "raid_level": "raid0", 00:13:27.339 "superblock": false, 00:13:27.339 "num_base_bdevs": 4, 00:13:27.339 "num_base_bdevs_discovered": 1, 00:13:27.339 "num_base_bdevs_operational": 4, 00:13:27.339 "base_bdevs_list": [ 00:13:27.339 { 00:13:27.339 "name": "BaseBdev1", 00:13:27.339 "uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:27.339 "is_configured": true, 00:13:27.339 "data_offset": 0, 00:13:27.339 "data_size": 65536 00:13:27.339 }, 00:13:27.339 { 00:13:27.339 "name": "BaseBdev2", 00:13:27.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.339 "is_configured": false, 00:13:27.339 "data_offset": 0, 00:13:27.339 "data_size": 0 00:13:27.339 }, 00:13:27.339 { 00:13:27.339 "name": "BaseBdev3", 00:13:27.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.339 "is_configured": false, 00:13:27.339 "data_offset": 0, 00:13:27.339 "data_size": 0 00:13:27.339 }, 00:13:27.339 { 00:13:27.339 "name": "BaseBdev4", 00:13:27.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.339 "is_configured": false, 00:13:27.339 "data_offset": 0, 00:13:27.339 "data_size": 0 00:13:27.339 } 00:13:27.339 ] 00:13:27.339 }' 00:13:27.339 21:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:27.339 21:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.597 21:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.857 [2024-05-14 21:55:28.402466] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.857 BaseBdev2 00:13:27.857 21:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:27.857 21:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:27.857 21:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:27.857 21:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:27.857 21:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:27.857 21:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:27.857 21:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:28.424 21:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:28.424 [ 00:13:28.424 { 00:13:28.424 "name": "BaseBdev2", 00:13:28.424 "aliases": [ 00:13:28.424 "ad256ba3-123c-11ef-8c90-4585f0cfab08" 00:13:28.424 ], 00:13:28.424 "product_name": "Malloc disk", 00:13:28.424 "block_size": 512, 00:13:28.424 "num_blocks": 65536, 00:13:28.424 "uuid": "ad256ba3-123c-11ef-8c90-4585f0cfab08", 00:13:28.424 "assigned_rate_limits": { 00:13:28.424 "rw_ios_per_sec": 0, 00:13:28.424 "rw_mbytes_per_sec": 0, 00:13:28.424 "r_mbytes_per_sec": 0, 00:13:28.424 "w_mbytes_per_sec": 0 00:13:28.424 }, 00:13:28.424 "claimed": true, 00:13:28.424 "claim_type": "exclusive_write", 00:13:28.424 "zoned": false, 00:13:28.424 "supported_io_types": { 00:13:28.424 "read": true, 00:13:28.424 "write": true, 00:13:28.424 "unmap": true, 00:13:28.424 "write_zeroes": true, 00:13:28.424 "flush": true, 00:13:28.424 "reset": true, 00:13:28.424 "compare": false, 00:13:28.425 "compare_and_write": false, 00:13:28.425 "abort": true, 00:13:28.425 "nvme_admin": false, 00:13:28.425 "nvme_io": false 00:13:28.425 }, 00:13:28.425 "memory_domains": [ 00:13:28.425 { 00:13:28.425 "dma_device_id": "system", 00:13:28.425 "dma_device_type": 1 00:13:28.425 }, 00:13:28.425 { 00:13:28.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.425 "dma_device_type": 2 00:13:28.425 } 00:13:28.425 ], 00:13:28.425 "driver_specific": {} 00:13:28.425 } 00:13:28.425 ] 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.425 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.992 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:28.992 "name": "Existed_Raid", 00:13:28.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.992 "strip_size_kb": 64, 00:13:28.992 "state": "configuring", 00:13:28.992 "raid_level": "raid0", 00:13:28.992 "superblock": false, 00:13:28.992 "num_base_bdevs": 4, 00:13:28.992 "num_base_bdevs_discovered": 2, 00:13:28.992 "num_base_bdevs_operational": 4, 00:13:28.992 "base_bdevs_list": [ 00:13:28.992 { 00:13:28.992 "name": "BaseBdev1", 00:13:28.992 "uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:28.992 "is_configured": true, 00:13:28.992 "data_offset": 0, 00:13:28.992 "data_size": 65536 00:13:28.992 }, 00:13:28.992 { 00:13:28.992 "name": "BaseBdev2", 00:13:28.992 "uuid": "ad256ba3-123c-11ef-8c90-4585f0cfab08", 00:13:28.992 "is_configured": true, 00:13:28.992 "data_offset": 0, 00:13:28.992 "data_size": 65536 00:13:28.992 }, 00:13:28.992 { 00:13:28.992 "name": "BaseBdev3", 00:13:28.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.992 "is_configured": false, 00:13:28.992 "data_offset": 0, 00:13:28.992 "data_size": 0 00:13:28.992 }, 00:13:28.992 { 00:13:28.992 "name": "BaseBdev4", 00:13:28.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.992 "is_configured": false, 00:13:28.992 "data_offset": 0, 00:13:28.992 "data_size": 0 00:13:28.992 } 00:13:28.992 ] 00:13:28.992 }' 00:13:28.992 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:28.992 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.250 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:29.509 [2024-05-14 21:55:29.918464] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.509 BaseBdev3 00:13:29.509 21:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:13:29.509 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:29.509 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:29.509 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:29.509 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:29.509 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # 
bdev_timeout=2000 00:13:29.509 21:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:29.767 21:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.026 [ 00:13:30.026 { 00:13:30.026 "name": "BaseBdev3", 00:13:30.026 "aliases": [ 00:13:30.026 "ae0cbf31-123c-11ef-8c90-4585f0cfab08" 00:13:30.026 ], 00:13:30.026 "product_name": "Malloc disk", 00:13:30.026 "block_size": 512, 00:13:30.026 "num_blocks": 65536, 00:13:30.026 "uuid": "ae0cbf31-123c-11ef-8c90-4585f0cfab08", 00:13:30.026 "assigned_rate_limits": { 00:13:30.026 "rw_ios_per_sec": 0, 00:13:30.026 "rw_mbytes_per_sec": 0, 00:13:30.026 "r_mbytes_per_sec": 0, 00:13:30.026 "w_mbytes_per_sec": 0 00:13:30.026 }, 00:13:30.026 "claimed": true, 00:13:30.026 "claim_type": "exclusive_write", 00:13:30.026 "zoned": false, 00:13:30.026 "supported_io_types": { 00:13:30.026 "read": true, 00:13:30.026 "write": true, 00:13:30.026 "unmap": true, 00:13:30.026 "write_zeroes": true, 00:13:30.026 "flush": true, 00:13:30.026 "reset": true, 00:13:30.026 "compare": false, 00:13:30.026 "compare_and_write": false, 00:13:30.026 "abort": true, 00:13:30.026 "nvme_admin": false, 00:13:30.026 "nvme_io": false 00:13:30.026 }, 00:13:30.026 "memory_domains": [ 00:13:30.026 { 00:13:30.026 "dma_device_id": "system", 00:13:30.026 "dma_device_type": 1 00:13:30.026 }, 00:13:30.026 { 00:13:30.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.026 "dma_device_type": 2 00:13:30.026 } 00:13:30.026 ], 00:13:30.026 "driver_specific": {} 00:13:30.026 } 00:13:30.026 ] 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.026 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:30.284 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:30.284 "name": "Existed_Raid", 00:13:30.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.284 "strip_size_kb": 64, 00:13:30.284 "state": "configuring", 00:13:30.284 "raid_level": "raid0", 00:13:30.284 "superblock": false, 00:13:30.284 "num_base_bdevs": 4, 00:13:30.284 "num_base_bdevs_discovered": 3, 00:13:30.284 "num_base_bdevs_operational": 4, 00:13:30.284 "base_bdevs_list": [ 00:13:30.284 { 00:13:30.284 "name": "BaseBdev1", 00:13:30.284 "uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:30.284 "is_configured": true, 00:13:30.284 "data_offset": 0, 00:13:30.284 "data_size": 65536 00:13:30.285 }, 00:13:30.285 { 00:13:30.285 "name": "BaseBdev2", 00:13:30.285 "uuid": "ad256ba3-123c-11ef-8c90-4585f0cfab08", 00:13:30.285 "is_configured": true, 00:13:30.285 "data_offset": 0, 00:13:30.285 "data_size": 65536 00:13:30.285 }, 00:13:30.285 { 00:13:30.285 "name": "BaseBdev3", 00:13:30.285 "uuid": "ae0cbf31-123c-11ef-8c90-4585f0cfab08", 00:13:30.285 "is_configured": true, 00:13:30.285 "data_offset": 0, 00:13:30.285 "data_size": 65536 00:13:30.285 }, 00:13:30.285 { 00:13:30.285 "name": "BaseBdev4", 00:13:30.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.285 "is_configured": false, 00:13:30.285 "data_offset": 0, 00:13:30.285 "data_size": 0 00:13:30.285 } 00:13:30.285 ] 00:13:30.285 }' 00:13:30.285 21:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:30.285 21:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.543 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:30.802 [2024-05-14 21:55:31.386505] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.802 [2024-05-14 21:55:31.386537] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5f6300 00:13:30.802 [2024-05-14 21:55:31.386541] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:30.802 [2024-05-14 21:55:31.386571] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b654ec0 00:13:30.802 [2024-05-14 21:55:31.386663] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5f6300 00:13:30.802 [2024-05-14 21:55:31.386668] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5f6300 00:13:30.802 [2024-05-14 21:55:31.386702] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.802 BaseBdev4 00:13:31.060 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:13:31.060 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:31.060 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:31.060 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:31.060 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:31.060 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:31.060 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:31.319 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:31.577 [ 00:13:31.577 { 00:13:31.577 "name": "BaseBdev4", 00:13:31.577 "aliases": [ 00:13:31.577 "aeecc0a5-123c-11ef-8c90-4585f0cfab08" 00:13:31.577 ], 00:13:31.577 "product_name": "Malloc disk", 00:13:31.577 "block_size": 512, 00:13:31.577 "num_blocks": 65536, 00:13:31.577 "uuid": "aeecc0a5-123c-11ef-8c90-4585f0cfab08", 00:13:31.577 "assigned_rate_limits": { 00:13:31.577 "rw_ios_per_sec": 0, 00:13:31.577 "rw_mbytes_per_sec": 0, 00:13:31.577 "r_mbytes_per_sec": 0, 00:13:31.577 "w_mbytes_per_sec": 0 00:13:31.577 }, 00:13:31.577 "claimed": true, 00:13:31.577 "claim_type": "exclusive_write", 00:13:31.577 "zoned": false, 00:13:31.577 "supported_io_types": { 00:13:31.577 "read": true, 00:13:31.577 "write": true, 00:13:31.577 "unmap": true, 00:13:31.577 "write_zeroes": true, 00:13:31.577 "flush": true, 00:13:31.577 "reset": true, 00:13:31.577 "compare": false, 00:13:31.577 "compare_and_write": false, 00:13:31.577 "abort": true, 00:13:31.577 "nvme_admin": false, 00:13:31.577 "nvme_io": false 00:13:31.578 }, 00:13:31.578 "memory_domains": [ 00:13:31.578 { 00:13:31.578 "dma_device_id": "system", 00:13:31.578 "dma_device_type": 1 00:13:31.578 }, 00:13:31.578 { 00:13:31.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.578 "dma_device_type": 2 00:13:31.578 } 00:13:31.578 ], 00:13:31.578 "driver_specific": {} 00:13:31.578 } 00:13:31.578 ] 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.578 21:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.837 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:13:31.837 "name": "Existed_Raid", 00:13:31.837 "uuid": "aeecc73b-123c-11ef-8c90-4585f0cfab08", 00:13:31.837 "strip_size_kb": 64, 00:13:31.837 "state": "online", 00:13:31.837 "raid_level": "raid0", 00:13:31.837 "superblock": false, 00:13:31.837 "num_base_bdevs": 4, 00:13:31.837 "num_base_bdevs_discovered": 4, 00:13:31.837 "num_base_bdevs_operational": 4, 00:13:31.837 "base_bdevs_list": [ 00:13:31.837 { 00:13:31.837 "name": "BaseBdev1", 00:13:31.837 "uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:31.837 "is_configured": true, 00:13:31.837 "data_offset": 0, 00:13:31.837 "data_size": 65536 00:13:31.837 }, 00:13:31.837 { 00:13:31.837 "name": "BaseBdev2", 00:13:31.837 "uuid": "ad256ba3-123c-11ef-8c90-4585f0cfab08", 00:13:31.837 "is_configured": true, 00:13:31.837 "data_offset": 0, 00:13:31.837 "data_size": 65536 00:13:31.837 }, 00:13:31.837 { 00:13:31.837 "name": "BaseBdev3", 00:13:31.837 "uuid": "ae0cbf31-123c-11ef-8c90-4585f0cfab08", 00:13:31.837 "is_configured": true, 00:13:31.837 "data_offset": 0, 00:13:31.837 "data_size": 65536 00:13:31.837 }, 00:13:31.837 { 00:13:31.837 "name": "BaseBdev4", 00:13:31.837 "uuid": "aeecc0a5-123c-11ef-8c90-4585f0cfab08", 00:13:31.837 "is_configured": true, 00:13:31.837 "data_offset": 0, 00:13:31.837 "data_size": 65536 00:13:31.837 } 00:13:31.837 ] 00:13:31.837 }' 00:13:31.837 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:31.837 21:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:32.097 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:32.354 [2024-05-14 21:55:32.830556] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.354 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:32.354 "name": "Existed_Raid", 00:13:32.354 "aliases": [ 00:13:32.354 "aeecc73b-123c-11ef-8c90-4585f0cfab08" 00:13:32.354 ], 00:13:32.354 "product_name": "Raid Volume", 00:13:32.354 "block_size": 512, 00:13:32.354 "num_blocks": 262144, 00:13:32.354 "uuid": "aeecc73b-123c-11ef-8c90-4585f0cfab08", 00:13:32.354 "assigned_rate_limits": { 00:13:32.354 "rw_ios_per_sec": 0, 00:13:32.354 "rw_mbytes_per_sec": 0, 00:13:32.354 "r_mbytes_per_sec": 0, 00:13:32.354 "w_mbytes_per_sec": 0 00:13:32.354 }, 00:13:32.354 "claimed": false, 00:13:32.354 "zoned": false, 00:13:32.354 "supported_io_types": { 00:13:32.354 "read": true, 00:13:32.354 "write": true, 00:13:32.354 "unmap": true, 00:13:32.354 "write_zeroes": true, 00:13:32.354 "flush": true, 00:13:32.354 "reset": true, 00:13:32.354 "compare": false, 00:13:32.354 "compare_and_write": 
false, 00:13:32.354 "abort": false, 00:13:32.354 "nvme_admin": false, 00:13:32.354 "nvme_io": false 00:13:32.354 }, 00:13:32.354 "memory_domains": [ 00:13:32.354 { 00:13:32.354 "dma_device_id": "system", 00:13:32.354 "dma_device_type": 1 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.354 "dma_device_type": 2 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "dma_device_id": "system", 00:13:32.354 "dma_device_type": 1 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.354 "dma_device_type": 2 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "dma_device_id": "system", 00:13:32.354 "dma_device_type": 1 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.354 "dma_device_type": 2 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "dma_device_id": "system", 00:13:32.354 "dma_device_type": 1 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.354 "dma_device_type": 2 00:13:32.354 } 00:13:32.354 ], 00:13:32.354 "driver_specific": { 00:13:32.354 "raid": { 00:13:32.354 "uuid": "aeecc73b-123c-11ef-8c90-4585f0cfab08", 00:13:32.354 "strip_size_kb": 64, 00:13:32.354 "state": "online", 00:13:32.354 "raid_level": "raid0", 00:13:32.354 "superblock": false, 00:13:32.354 "num_base_bdevs": 4, 00:13:32.354 "num_base_bdevs_discovered": 4, 00:13:32.354 "num_base_bdevs_operational": 4, 00:13:32.354 "base_bdevs_list": [ 00:13:32.354 { 00:13:32.354 "name": "BaseBdev1", 00:13:32.354 "uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:32.354 "is_configured": true, 00:13:32.354 "data_offset": 0, 00:13:32.354 "data_size": 65536 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "name": "BaseBdev2", 00:13:32.354 "uuid": "ad256ba3-123c-11ef-8c90-4585f0cfab08", 00:13:32.354 "is_configured": true, 00:13:32.354 "data_offset": 0, 00:13:32.354 "data_size": 65536 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "name": "BaseBdev3", 00:13:32.354 "uuid": "ae0cbf31-123c-11ef-8c90-4585f0cfab08", 00:13:32.354 "is_configured": true, 00:13:32.354 "data_offset": 0, 00:13:32.354 "data_size": 65536 00:13:32.354 }, 00:13:32.354 { 00:13:32.354 "name": "BaseBdev4", 00:13:32.354 "uuid": "aeecc0a5-123c-11ef-8c90-4585f0cfab08", 00:13:32.354 "is_configured": true, 00:13:32.354 "data_offset": 0, 00:13:32.354 "data_size": 65536 00:13:32.354 } 00:13:32.354 ] 00:13:32.354 } 00:13:32.354 } 00:13:32.354 }' 00:13:32.354 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:32.354 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:32.354 BaseBdev2 00:13:32.354 BaseBdev3 00:13:32.354 BaseBdev4' 00:13:32.354 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:32.354 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:32.354 21:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:32.612 "name": "BaseBdev1", 00:13:32.612 "aliases": [ 00:13:32.612 "ab8a0761-123c-11ef-8c90-4585f0cfab08" 00:13:32.612 ], 00:13:32.612 "product_name": "Malloc disk", 00:13:32.612 "block_size": 512, 00:13:32.612 "num_blocks": 65536, 00:13:32.612 
"uuid": "ab8a0761-123c-11ef-8c90-4585f0cfab08", 00:13:32.612 "assigned_rate_limits": { 00:13:32.612 "rw_ios_per_sec": 0, 00:13:32.612 "rw_mbytes_per_sec": 0, 00:13:32.612 "r_mbytes_per_sec": 0, 00:13:32.612 "w_mbytes_per_sec": 0 00:13:32.612 }, 00:13:32.612 "claimed": true, 00:13:32.612 "claim_type": "exclusive_write", 00:13:32.612 "zoned": false, 00:13:32.612 "supported_io_types": { 00:13:32.612 "read": true, 00:13:32.612 "write": true, 00:13:32.612 "unmap": true, 00:13:32.612 "write_zeroes": true, 00:13:32.612 "flush": true, 00:13:32.612 "reset": true, 00:13:32.612 "compare": false, 00:13:32.612 "compare_and_write": false, 00:13:32.612 "abort": true, 00:13:32.612 "nvme_admin": false, 00:13:32.612 "nvme_io": false 00:13:32.612 }, 00:13:32.612 "memory_domains": [ 00:13:32.612 { 00:13:32.612 "dma_device_id": "system", 00:13:32.612 "dma_device_type": 1 00:13:32.612 }, 00:13:32.612 { 00:13:32.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.612 "dma_device_type": 2 00:13:32.612 } 00:13:32.612 ], 00:13:32.612 "driver_specific": {} 00:13:32.612 }' 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:32.612 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:32.870 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:33.129 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:33.129 "name": "BaseBdev2", 00:13:33.129 "aliases": [ 00:13:33.129 "ad256ba3-123c-11ef-8c90-4585f0cfab08" 00:13:33.129 ], 00:13:33.129 "product_name": "Malloc disk", 00:13:33.129 "block_size": 512, 00:13:33.129 "num_blocks": 65536, 00:13:33.129 "uuid": "ad256ba3-123c-11ef-8c90-4585f0cfab08", 00:13:33.129 "assigned_rate_limits": { 00:13:33.129 "rw_ios_per_sec": 0, 00:13:33.129 "rw_mbytes_per_sec": 0, 00:13:33.129 "r_mbytes_per_sec": 0, 00:13:33.129 "w_mbytes_per_sec": 0 00:13:33.129 }, 00:13:33.129 "claimed": true, 00:13:33.129 "claim_type": "exclusive_write", 00:13:33.129 "zoned": false, 00:13:33.129 "supported_io_types": { 00:13:33.129 "read": true, 00:13:33.129 "write": true, 00:13:33.129 "unmap": true, 00:13:33.129 
"write_zeroes": true, 00:13:33.129 "flush": true, 00:13:33.129 "reset": true, 00:13:33.129 "compare": false, 00:13:33.129 "compare_and_write": false, 00:13:33.129 "abort": true, 00:13:33.129 "nvme_admin": false, 00:13:33.129 "nvme_io": false 00:13:33.129 }, 00:13:33.129 "memory_domains": [ 00:13:33.129 { 00:13:33.129 "dma_device_id": "system", 00:13:33.129 "dma_device_type": 1 00:13:33.129 }, 00:13:33.129 { 00:13:33.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.129 "dma_device_type": 2 00:13:33.129 } 00:13:33.129 ], 00:13:33.129 "driver_specific": {} 00:13:33.129 }' 00:13:33.129 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:33.129 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:33.130 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:33.388 "name": "BaseBdev3", 00:13:33.388 "aliases": [ 00:13:33.388 "ae0cbf31-123c-11ef-8c90-4585f0cfab08" 00:13:33.388 ], 00:13:33.388 "product_name": "Malloc disk", 00:13:33.388 "block_size": 512, 00:13:33.388 "num_blocks": 65536, 00:13:33.388 "uuid": "ae0cbf31-123c-11ef-8c90-4585f0cfab08", 00:13:33.388 "assigned_rate_limits": { 00:13:33.388 "rw_ios_per_sec": 0, 00:13:33.388 "rw_mbytes_per_sec": 0, 00:13:33.388 "r_mbytes_per_sec": 0, 00:13:33.388 "w_mbytes_per_sec": 0 00:13:33.388 }, 00:13:33.388 "claimed": true, 00:13:33.388 "claim_type": "exclusive_write", 00:13:33.388 "zoned": false, 00:13:33.388 "supported_io_types": { 00:13:33.388 "read": true, 00:13:33.388 "write": true, 00:13:33.388 "unmap": true, 00:13:33.388 "write_zeroes": true, 00:13:33.388 "flush": true, 00:13:33.388 "reset": true, 00:13:33.388 "compare": false, 00:13:33.388 "compare_and_write": false, 00:13:33.388 "abort": true, 00:13:33.388 "nvme_admin": false, 00:13:33.388 "nvme_io": false 00:13:33.388 }, 00:13:33.388 "memory_domains": [ 00:13:33.388 { 00:13:33.388 "dma_device_id": "system", 00:13:33.388 "dma_device_type": 1 00:13:33.388 }, 00:13:33.388 { 00:13:33.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.388 "dma_device_type": 
2 00:13:33.388 } 00:13:33.388 ], 00:13:33.388 "driver_specific": {} 00:13:33.388 }' 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:33.388 21:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:33.647 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:33.647 "name": "BaseBdev4", 00:13:33.647 "aliases": [ 00:13:33.647 "aeecc0a5-123c-11ef-8c90-4585f0cfab08" 00:13:33.647 ], 00:13:33.647 "product_name": "Malloc disk", 00:13:33.647 "block_size": 512, 00:13:33.647 "num_blocks": 65536, 00:13:33.647 "uuid": "aeecc0a5-123c-11ef-8c90-4585f0cfab08", 00:13:33.647 "assigned_rate_limits": { 00:13:33.647 "rw_ios_per_sec": 0, 00:13:33.647 "rw_mbytes_per_sec": 0, 00:13:33.647 "r_mbytes_per_sec": 0, 00:13:33.647 "w_mbytes_per_sec": 0 00:13:33.647 }, 00:13:33.647 "claimed": true, 00:13:33.647 "claim_type": "exclusive_write", 00:13:33.647 "zoned": false, 00:13:33.647 "supported_io_types": { 00:13:33.647 "read": true, 00:13:33.647 "write": true, 00:13:33.648 "unmap": true, 00:13:33.648 "write_zeroes": true, 00:13:33.648 "flush": true, 00:13:33.648 "reset": true, 00:13:33.648 "compare": false, 00:13:33.648 "compare_and_write": false, 00:13:33.648 "abort": true, 00:13:33.648 "nvme_admin": false, 00:13:33.648 "nvme_io": false 00:13:33.648 }, 00:13:33.648 "memory_domains": [ 00:13:33.648 { 00:13:33.648 "dma_device_id": "system", 00:13:33.648 "dma_device_type": 1 00:13:33.648 }, 00:13:33.648 { 00:13:33.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.648 "dma_device_type": 2 00:13:33.648 } 00:13:33.648 ], 00:13:33.648 "driver_specific": {} 00:13:33.648 }' 00:13:33.648 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:33.905 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 
00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:33.906 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:34.163 [2024-05-14 21:55:34.618637] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.163 [2024-05-14 21:55:34.618675] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.163 [2024-05-14 21:55:34.618696] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.163 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.420 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:34.420 "name": "Existed_Raid", 00:13:34.420 "uuid": 
"aeecc73b-123c-11ef-8c90-4585f0cfab08", 00:13:34.420 "strip_size_kb": 64, 00:13:34.420 "state": "offline", 00:13:34.420 "raid_level": "raid0", 00:13:34.420 "superblock": false, 00:13:34.420 "num_base_bdevs": 4, 00:13:34.420 "num_base_bdevs_discovered": 3, 00:13:34.420 "num_base_bdevs_operational": 3, 00:13:34.420 "base_bdevs_list": [ 00:13:34.420 { 00:13:34.420 "name": null, 00:13:34.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.420 "is_configured": false, 00:13:34.420 "data_offset": 0, 00:13:34.420 "data_size": 65536 00:13:34.420 }, 00:13:34.420 { 00:13:34.420 "name": "BaseBdev2", 00:13:34.420 "uuid": "ad256ba3-123c-11ef-8c90-4585f0cfab08", 00:13:34.420 "is_configured": true, 00:13:34.420 "data_offset": 0, 00:13:34.420 "data_size": 65536 00:13:34.420 }, 00:13:34.420 { 00:13:34.420 "name": "BaseBdev3", 00:13:34.420 "uuid": "ae0cbf31-123c-11ef-8c90-4585f0cfab08", 00:13:34.420 "is_configured": true, 00:13:34.420 "data_offset": 0, 00:13:34.420 "data_size": 65536 00:13:34.420 }, 00:13:34.420 { 00:13:34.420 "name": "BaseBdev4", 00:13:34.420 "uuid": "aeecc0a5-123c-11ef-8c90-4585f0cfab08", 00:13:34.420 "is_configured": true, 00:13:34.420 "data_offset": 0, 00:13:34.420 "data_size": 65536 00:13:34.420 } 00:13:34.420 ] 00:13:34.420 }' 00:13:34.420 21:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:34.420 21:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:34.987 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.987 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.987 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:35.245 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:35.245 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.245 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:35.503 [2024-05-14 21:55:35.848547] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:35.503 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.503 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.503 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.503 21:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:35.762 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:35.762 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.762 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:35.762 [2024-05-14 21:55:36.343290] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:36.020 
21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:36.020 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:36.020 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.020 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:36.020 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:36.020 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:36.020 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:36.278 [2024-05-14 21:55:36.821761] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:36.278 [2024-05-14 21:55:36.821811] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f6300 name Existed_Raid, state offline 00:13:36.278 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:36.278 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:36.278 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.278 21:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:36.537 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:36.537 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:36.537 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:13:36.537 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:13:36.537 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:36.537 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:36.796 BaseBdev2 00:13:36.796 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:13:36.796 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:36.796 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:36.796 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:36.796 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:36.796 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:36.796 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:37.054 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:37.313 [ 00:13:37.313 { 00:13:37.313 "name": 
"BaseBdev2", 00:13:37.313 "aliases": [ 00:13:37.313 "b27152b8-123c-11ef-8c90-4585f0cfab08" 00:13:37.313 ], 00:13:37.313 "product_name": "Malloc disk", 00:13:37.313 "block_size": 512, 00:13:37.313 "num_blocks": 65536, 00:13:37.313 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:37.313 "assigned_rate_limits": { 00:13:37.313 "rw_ios_per_sec": 0, 00:13:37.313 "rw_mbytes_per_sec": 0, 00:13:37.313 "r_mbytes_per_sec": 0, 00:13:37.313 "w_mbytes_per_sec": 0 00:13:37.313 }, 00:13:37.313 "claimed": false, 00:13:37.313 "zoned": false, 00:13:37.313 "supported_io_types": { 00:13:37.313 "read": true, 00:13:37.313 "write": true, 00:13:37.313 "unmap": true, 00:13:37.313 "write_zeroes": true, 00:13:37.313 "flush": true, 00:13:37.313 "reset": true, 00:13:37.313 "compare": false, 00:13:37.313 "compare_and_write": false, 00:13:37.313 "abort": true, 00:13:37.313 "nvme_admin": false, 00:13:37.313 "nvme_io": false 00:13:37.313 }, 00:13:37.313 "memory_domains": [ 00:13:37.313 { 00:13:37.313 "dma_device_id": "system", 00:13:37.313 "dma_device_type": 1 00:13:37.313 }, 00:13:37.313 { 00:13:37.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.313 "dma_device_type": 2 00:13:37.313 } 00:13:37.313 ], 00:13:37.313 "driver_specific": {} 00:13:37.313 } 00:13:37.313 ] 00:13:37.313 21:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:37.313 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:13:37.313 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:37.313 21:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:37.572 BaseBdev3 00:13:37.572 21:55:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:13:37.572 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:37.572 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:37.572 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:37.572 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:37.572 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:37.572 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:37.831 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.089 [ 00:13:38.089 { 00:13:38.089 "name": "BaseBdev3", 00:13:38.089 "aliases": [ 00:13:38.089 "b2ec0119-123c-11ef-8c90-4585f0cfab08" 00:13:38.089 ], 00:13:38.090 "product_name": "Malloc disk", 00:13:38.090 "block_size": 512, 00:13:38.090 "num_blocks": 65536, 00:13:38.090 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:38.090 "assigned_rate_limits": { 00:13:38.090 "rw_ios_per_sec": 0, 00:13:38.090 "rw_mbytes_per_sec": 0, 00:13:38.090 "r_mbytes_per_sec": 0, 00:13:38.090 "w_mbytes_per_sec": 0 00:13:38.090 }, 00:13:38.090 "claimed": false, 00:13:38.090 "zoned": false, 00:13:38.090 "supported_io_types": { 00:13:38.090 "read": true, 00:13:38.090 "write": true, 
00:13:38.090 "unmap": true, 00:13:38.090 "write_zeroes": true, 00:13:38.090 "flush": true, 00:13:38.090 "reset": true, 00:13:38.090 "compare": false, 00:13:38.090 "compare_and_write": false, 00:13:38.090 "abort": true, 00:13:38.090 "nvme_admin": false, 00:13:38.090 "nvme_io": false 00:13:38.090 }, 00:13:38.090 "memory_domains": [ 00:13:38.090 { 00:13:38.090 "dma_device_id": "system", 00:13:38.090 "dma_device_type": 1 00:13:38.090 }, 00:13:38.090 { 00:13:38.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.090 "dma_device_type": 2 00:13:38.090 } 00:13:38.090 ], 00:13:38.090 "driver_specific": {} 00:13:38.090 } 00:13:38.090 ] 00:13:38.090 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:38.090 21:55:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:13:38.090 21:55:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:38.090 21:55:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:38.348 BaseBdev4 00:13:38.348 21:55:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:13:38.348 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:38.348 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:38.348 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:38.348 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:38.348 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:38.348 21:55:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:38.607 21:55:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:38.864 [ 00:13:38.864 { 00:13:38.864 "name": "BaseBdev4", 00:13:38.864 "aliases": [ 00:13:38.864 "b35f5d87-123c-11ef-8c90-4585f0cfab08" 00:13:38.864 ], 00:13:38.864 "product_name": "Malloc disk", 00:13:38.864 "block_size": 512, 00:13:38.864 "num_blocks": 65536, 00:13:38.864 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:38.864 "assigned_rate_limits": { 00:13:38.864 "rw_ios_per_sec": 0, 00:13:38.864 "rw_mbytes_per_sec": 0, 00:13:38.864 "r_mbytes_per_sec": 0, 00:13:38.864 "w_mbytes_per_sec": 0 00:13:38.864 }, 00:13:38.864 "claimed": false, 00:13:38.864 "zoned": false, 00:13:38.864 "supported_io_types": { 00:13:38.864 "read": true, 00:13:38.864 "write": true, 00:13:38.864 "unmap": true, 00:13:38.864 "write_zeroes": true, 00:13:38.864 "flush": true, 00:13:38.864 "reset": true, 00:13:38.864 "compare": false, 00:13:38.864 "compare_and_write": false, 00:13:38.864 "abort": true, 00:13:38.864 "nvme_admin": false, 00:13:38.864 "nvme_io": false 00:13:38.864 }, 00:13:38.864 "memory_domains": [ 00:13:38.864 { 00:13:38.864 "dma_device_id": "system", 00:13:38.864 "dma_device_type": 1 00:13:38.864 }, 00:13:38.864 { 00:13:38.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.864 "dma_device_type": 2 00:13:38.864 } 00:13:38.864 ], 00:13:38.864 "driver_specific": {} 00:13:38.864 } 00:13:38.864 ] 
00:13:38.864 21:55:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:38.864 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:13:38.864 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:13:38.864 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:39.124 [2024-05-14 21:55:39.588427] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.124 [2024-05-14 21:55:39.588499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.124 [2024-05-14 21:55:39.588540] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.124 [2024-05-14 21:55:39.589169] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:39.124 [2024-05-14 21:55:39.589190] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.124 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.383 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:39.383 "name": "Existed_Raid", 00:13:39.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.383 "strip_size_kb": 64, 00:13:39.383 "state": "configuring", 00:13:39.383 "raid_level": "raid0", 00:13:39.383 "superblock": false, 00:13:39.383 "num_base_bdevs": 4, 00:13:39.383 "num_base_bdevs_discovered": 3, 00:13:39.383 "num_base_bdevs_operational": 4, 00:13:39.383 "base_bdevs_list": [ 00:13:39.383 { 00:13:39.383 "name": "BaseBdev1", 00:13:39.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.383 "is_configured": false, 00:13:39.383 "data_offset": 0, 00:13:39.383 "data_size": 0 00:13:39.383 }, 00:13:39.383 { 00:13:39.383 "name": "BaseBdev2", 00:13:39.383 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:39.383 "is_configured": true, 
00:13:39.383 "data_offset": 0, 00:13:39.383 "data_size": 65536 00:13:39.383 }, 00:13:39.383 { 00:13:39.383 "name": "BaseBdev3", 00:13:39.384 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:39.384 "is_configured": true, 00:13:39.384 "data_offset": 0, 00:13:39.384 "data_size": 65536 00:13:39.384 }, 00:13:39.384 { 00:13:39.384 "name": "BaseBdev4", 00:13:39.384 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:39.384 "is_configured": true, 00:13:39.384 "data_offset": 0, 00:13:39.384 "data_size": 65536 00:13:39.384 } 00:13:39.384 ] 00:13:39.384 }' 00:13:39.384 21:55:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:39.384 21:55:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.645 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:39.915 [2024-05-14 21:55:40.432466] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.915 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.174 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:40.174 "name": "Existed_Raid", 00:13:40.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.174 "strip_size_kb": 64, 00:13:40.174 "state": "configuring", 00:13:40.174 "raid_level": "raid0", 00:13:40.174 "superblock": false, 00:13:40.174 "num_base_bdevs": 4, 00:13:40.174 "num_base_bdevs_discovered": 2, 00:13:40.174 "num_base_bdevs_operational": 4, 00:13:40.174 "base_bdevs_list": [ 00:13:40.174 { 00:13:40.174 "name": "BaseBdev1", 00:13:40.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.174 "is_configured": false, 00:13:40.174 "data_offset": 0, 00:13:40.174 "data_size": 0 00:13:40.174 }, 00:13:40.174 { 00:13:40.174 "name": null, 00:13:40.174 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:40.174 "is_configured": false, 00:13:40.174 "data_offset": 0, 00:13:40.174 "data_size": 65536 00:13:40.174 }, 00:13:40.174 { 00:13:40.174 "name": "BaseBdev3", 00:13:40.174 
"uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:40.174 "is_configured": true, 00:13:40.174 "data_offset": 0, 00:13:40.174 "data_size": 65536 00:13:40.174 }, 00:13:40.174 { 00:13:40.174 "name": "BaseBdev4", 00:13:40.174 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:40.174 "is_configured": true, 00:13:40.174 "data_offset": 0, 00:13:40.174 "data_size": 65536 00:13:40.174 } 00:13:40.174 ] 00:13:40.174 }' 00:13:40.174 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:40.174 21:55:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.433 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.433 21:55:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:40.691 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:13:40.692 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:40.950 [2024-05-14 21:55:41.464678] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.950 BaseBdev1 00:13:40.950 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:13:40.950 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:40.950 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:40.950 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:40.950 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:40.950 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:40.950 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:41.209 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.467 [ 00:13:41.467 { 00:13:41.467 "name": "BaseBdev1", 00:13:41.467 "aliases": [ 00:13:41.467 "b4ee8f6d-123c-11ef-8c90-4585f0cfab08" 00:13:41.467 ], 00:13:41.467 "product_name": "Malloc disk", 00:13:41.467 "block_size": 512, 00:13:41.467 "num_blocks": 65536, 00:13:41.467 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:41.467 "assigned_rate_limits": { 00:13:41.467 "rw_ios_per_sec": 0, 00:13:41.467 "rw_mbytes_per_sec": 0, 00:13:41.467 "r_mbytes_per_sec": 0, 00:13:41.467 "w_mbytes_per_sec": 0 00:13:41.467 }, 00:13:41.467 "claimed": true, 00:13:41.467 "claim_type": "exclusive_write", 00:13:41.467 "zoned": false, 00:13:41.467 "supported_io_types": { 00:13:41.467 "read": true, 00:13:41.467 "write": true, 00:13:41.467 "unmap": true, 00:13:41.467 "write_zeroes": true, 00:13:41.467 "flush": true, 00:13:41.467 "reset": true, 00:13:41.467 "compare": false, 00:13:41.467 "compare_and_write": false, 00:13:41.467 "abort": true, 00:13:41.467 "nvme_admin": false, 00:13:41.467 "nvme_io": false 00:13:41.467 }, 00:13:41.467 "memory_domains": [ 00:13:41.467 { 00:13:41.467 
"dma_device_id": "system", 00:13:41.467 "dma_device_type": 1 00:13:41.467 }, 00:13:41.467 { 00:13:41.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.467 "dma_device_type": 2 00:13:41.467 } 00:13:41.467 ], 00:13:41.468 "driver_specific": {} 00:13:41.468 } 00:13:41.468 ] 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.468 21:55:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.726 21:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:41.726 "name": "Existed_Raid", 00:13:41.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.726 "strip_size_kb": 64, 00:13:41.726 "state": "configuring", 00:13:41.726 "raid_level": "raid0", 00:13:41.726 "superblock": false, 00:13:41.726 "num_base_bdevs": 4, 00:13:41.726 "num_base_bdevs_discovered": 3, 00:13:41.726 "num_base_bdevs_operational": 4, 00:13:41.726 "base_bdevs_list": [ 00:13:41.726 { 00:13:41.726 "name": "BaseBdev1", 00:13:41.726 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:41.726 "is_configured": true, 00:13:41.726 "data_offset": 0, 00:13:41.726 "data_size": 65536 00:13:41.726 }, 00:13:41.726 { 00:13:41.726 "name": null, 00:13:41.726 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:41.726 "is_configured": false, 00:13:41.726 "data_offset": 0, 00:13:41.726 "data_size": 65536 00:13:41.726 }, 00:13:41.726 { 00:13:41.726 "name": "BaseBdev3", 00:13:41.726 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:41.726 "is_configured": true, 00:13:41.726 "data_offset": 0, 00:13:41.726 "data_size": 65536 00:13:41.726 }, 00:13:41.726 { 00:13:41.726 "name": "BaseBdev4", 00:13:41.726 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:41.726 "is_configured": true, 00:13:41.726 "data_offset": 0, 00:13:41.726 "data_size": 65536 00:13:41.726 } 00:13:41.726 ] 00:13:41.726 }' 00:13:41.726 21:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:41.726 21:55:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.292 21:55:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.292 21:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:42.292 21:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:42.292 21:55:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:42.550 [2024-05-14 21:55:43.120702] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.808 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.065 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:43.065 "name": "Existed_Raid", 00:13:43.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.065 "strip_size_kb": 64, 00:13:43.065 "state": "configuring", 00:13:43.065 "raid_level": "raid0", 00:13:43.065 "superblock": false, 00:13:43.065 "num_base_bdevs": 4, 00:13:43.065 "num_base_bdevs_discovered": 2, 00:13:43.065 "num_base_bdevs_operational": 4, 00:13:43.065 "base_bdevs_list": [ 00:13:43.065 { 00:13:43.065 "name": "BaseBdev1", 00:13:43.065 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:43.065 "is_configured": true, 00:13:43.065 "data_offset": 0, 00:13:43.065 "data_size": 65536 00:13:43.065 }, 00:13:43.065 { 00:13:43.065 "name": null, 00:13:43.065 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:43.065 "is_configured": false, 00:13:43.065 "data_offset": 0, 00:13:43.065 "data_size": 65536 00:13:43.065 }, 00:13:43.065 { 00:13:43.065 "name": null, 00:13:43.065 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:43.065 "is_configured": false, 00:13:43.065 "data_offset": 0, 00:13:43.065 "data_size": 65536 00:13:43.065 }, 00:13:43.065 { 00:13:43.065 "name": "BaseBdev4", 00:13:43.065 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:43.065 "is_configured": true, 00:13:43.065 "data_offset": 0, 00:13:43.065 "data_size": 65536 00:13:43.065 } 00:13:43.065 ] 
00:13:43.065 }' 00:13:43.065 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:43.065 21:55:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.322 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:43.322 21:55:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.580 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:13:43.580 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:43.838 [2024-05-14 21:55:44.256815] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.838 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.096 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:44.096 "name": "Existed_Raid", 00:13:44.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.096 "strip_size_kb": 64, 00:13:44.096 "state": "configuring", 00:13:44.096 "raid_level": "raid0", 00:13:44.096 "superblock": false, 00:13:44.096 "num_base_bdevs": 4, 00:13:44.096 "num_base_bdevs_discovered": 3, 00:13:44.096 "num_base_bdevs_operational": 4, 00:13:44.096 "base_bdevs_list": [ 00:13:44.096 { 00:13:44.096 "name": "BaseBdev1", 00:13:44.096 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:44.096 "is_configured": true, 00:13:44.096 "data_offset": 0, 00:13:44.096 "data_size": 65536 00:13:44.096 }, 00:13:44.096 { 00:13:44.096 "name": null, 00:13:44.096 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:44.096 "is_configured": false, 00:13:44.096 "data_offset": 0, 00:13:44.096 "data_size": 65536 00:13:44.096 }, 00:13:44.096 { 00:13:44.096 "name": "BaseBdev3", 00:13:44.096 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:44.096 "is_configured": true, 
00:13:44.096 "data_offset": 0, 00:13:44.096 "data_size": 65536 00:13:44.096 }, 00:13:44.096 { 00:13:44.096 "name": "BaseBdev4", 00:13:44.096 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:44.096 "is_configured": true, 00:13:44.096 "data_offset": 0, 00:13:44.096 "data_size": 65536 00:13:44.096 } 00:13:44.096 ] 00:13:44.096 }' 00:13:44.096 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:44.096 21:55:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.355 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.355 21:55:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:44.612 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:13:44.612 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:45.178 [2024-05-14 21:55:45.480874] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.178 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.436 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:45.436 "name": "Existed_Raid", 00:13:45.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.436 "strip_size_kb": 64, 00:13:45.436 "state": "configuring", 00:13:45.436 "raid_level": "raid0", 00:13:45.436 "superblock": false, 00:13:45.436 "num_base_bdevs": 4, 00:13:45.436 "num_base_bdevs_discovered": 2, 00:13:45.436 "num_base_bdevs_operational": 4, 00:13:45.436 "base_bdevs_list": [ 00:13:45.436 { 00:13:45.436 "name": null, 00:13:45.436 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:45.436 "is_configured": false, 00:13:45.436 "data_offset": 0, 00:13:45.436 "data_size": 65536 00:13:45.436 }, 00:13:45.436 { 00:13:45.436 "name": null, 00:13:45.436 "uuid": 
"b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:45.436 "is_configured": false, 00:13:45.436 "data_offset": 0, 00:13:45.436 "data_size": 65536 00:13:45.436 }, 00:13:45.436 { 00:13:45.436 "name": "BaseBdev3", 00:13:45.436 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:45.436 "is_configured": true, 00:13:45.436 "data_offset": 0, 00:13:45.436 "data_size": 65536 00:13:45.436 }, 00:13:45.436 { 00:13:45.436 "name": "BaseBdev4", 00:13:45.436 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:45.436 "is_configured": true, 00:13:45.436 "data_offset": 0, 00:13:45.436 "data_size": 65536 00:13:45.436 } 00:13:45.436 ] 00:13:45.436 }' 00:13:45.436 21:55:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:45.436 21:55:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.694 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.694 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:45.953 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:13:45.953 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:46.211 [2024-05-14 21:55:46.714842] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.211 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.470 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:46.470 "name": "Existed_Raid", 00:13:46.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.470 "strip_size_kb": 64, 00:13:46.470 "state": "configuring", 00:13:46.470 "raid_level": "raid0", 00:13:46.470 "superblock": false, 00:13:46.470 "num_base_bdevs": 4, 00:13:46.470 "num_base_bdevs_discovered": 3, 00:13:46.470 "num_base_bdevs_operational": 4, 
00:13:46.470 "base_bdevs_list": [ 00:13:46.470 { 00:13:46.470 "name": null, 00:13:46.470 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:46.470 "is_configured": false, 00:13:46.470 "data_offset": 0, 00:13:46.470 "data_size": 65536 00:13:46.470 }, 00:13:46.470 { 00:13:46.470 "name": "BaseBdev2", 00:13:46.470 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:46.470 "is_configured": true, 00:13:46.470 "data_offset": 0, 00:13:46.470 "data_size": 65536 00:13:46.470 }, 00:13:46.470 { 00:13:46.470 "name": "BaseBdev3", 00:13:46.470 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:46.470 "is_configured": true, 00:13:46.470 "data_offset": 0, 00:13:46.470 "data_size": 65536 00:13:46.470 }, 00:13:46.470 { 00:13:46.470 "name": "BaseBdev4", 00:13:46.470 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:46.470 "is_configured": true, 00:13:46.470 "data_offset": 0, 00:13:46.470 "data_size": 65536 00:13:46.470 } 00:13:46.470 ] 00:13:46.470 }' 00:13:46.470 21:55:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.470 21:55:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.729 21:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.729 21:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.296 21:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:13:47.296 21:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.296 21:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:47.296 21:55:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b4ee8f6d-123c-11ef-8c90-4585f0cfab08 00:13:47.863 [2024-05-14 21:55:48.175124] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:47.863 [2024-05-14 21:55:48.175156] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5f6300 00:13:47.863 [2024-05-14 21:55:48.175161] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:47.863 [2024-05-14 21:55:48.175185] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b654e20 00:13:47.863 [2024-05-14 21:55:48.175267] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5f6300 00:13:47.863 [2024-05-14 21:55:48.175273] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5f6300 00:13:47.863 [2024-05-14 21:55:48.175307] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.863 NewBaseBdev 00:13:47.863 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:13:47.863 21:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:13:47.863 21:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:47.863 21:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:47.863 21:55:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:47.863 21:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:47.863 21:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.156 21:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:48.156 [ 00:13:48.156 { 00:13:48.156 "name": "NewBaseBdev", 00:13:48.156 "aliases": [ 00:13:48.156 "b4ee8f6d-123c-11ef-8c90-4585f0cfab08" 00:13:48.156 ], 00:13:48.156 "product_name": "Malloc disk", 00:13:48.156 "block_size": 512, 00:13:48.156 "num_blocks": 65536, 00:13:48.156 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:48.156 "assigned_rate_limits": { 00:13:48.156 "rw_ios_per_sec": 0, 00:13:48.156 "rw_mbytes_per_sec": 0, 00:13:48.156 "r_mbytes_per_sec": 0, 00:13:48.156 "w_mbytes_per_sec": 0 00:13:48.156 }, 00:13:48.156 "claimed": true, 00:13:48.156 "claim_type": "exclusive_write", 00:13:48.156 "zoned": false, 00:13:48.156 "supported_io_types": { 00:13:48.156 "read": true, 00:13:48.156 "write": true, 00:13:48.156 "unmap": true, 00:13:48.156 "write_zeroes": true, 00:13:48.156 "flush": true, 00:13:48.156 "reset": true, 00:13:48.156 "compare": false, 00:13:48.156 "compare_and_write": false, 00:13:48.156 "abort": true, 00:13:48.156 "nvme_admin": false, 00:13:48.156 "nvme_io": false 00:13:48.156 }, 00:13:48.156 "memory_domains": [ 00:13:48.156 { 00:13:48.156 "dma_device_id": "system", 00:13:48.156 "dma_device_type": 1 00:13:48.156 }, 00:13:48.156 { 00:13:48.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.156 "dma_device_type": 2 00:13:48.156 } 00:13:48.156 ], 00:13:48.156 "driver_specific": {} 00:13:48.156 } 00:13:48.156 ] 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.436 21:55:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.693 21:55:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.693 "name": "Existed_Raid", 00:13:48.693 "uuid": "b8ee8482-123c-11ef-8c90-4585f0cfab08", 00:13:48.693 "strip_size_kb": 64, 00:13:48.693 "state": "online", 00:13:48.693 "raid_level": "raid0", 00:13:48.693 "superblock": false, 00:13:48.693 "num_base_bdevs": 4, 00:13:48.693 "num_base_bdevs_discovered": 4, 00:13:48.693 "num_base_bdevs_operational": 4, 00:13:48.693 "base_bdevs_list": [ 00:13:48.693 { 00:13:48.693 "name": "NewBaseBdev", 00:13:48.693 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:48.693 "is_configured": true, 00:13:48.693 "data_offset": 0, 00:13:48.693 "data_size": 65536 00:13:48.693 }, 00:13:48.693 { 00:13:48.693 "name": "BaseBdev2", 00:13:48.693 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:48.693 "is_configured": true, 00:13:48.693 "data_offset": 0, 00:13:48.693 "data_size": 65536 00:13:48.693 }, 00:13:48.693 { 00:13:48.693 "name": "BaseBdev3", 00:13:48.693 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:48.693 "is_configured": true, 00:13:48.693 "data_offset": 0, 00:13:48.693 "data_size": 65536 00:13:48.693 }, 00:13:48.693 { 00:13:48.693 "name": "BaseBdev4", 00:13:48.693 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:48.693 "is_configured": true, 00:13:48.694 "data_offset": 0, 00:13:48.694 "data_size": 65536 00:13:48.694 } 00:13:48.694 ] 00:13:48.694 }' 00:13:48.694 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.694 21:55:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:48.951 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:49.209 [2024-05-14 21:55:49.695081] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.209 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:49.209 "name": "Existed_Raid", 00:13:49.209 "aliases": [ 00:13:49.209 "b8ee8482-123c-11ef-8c90-4585f0cfab08" 00:13:49.209 ], 00:13:49.209 "product_name": "Raid Volume", 00:13:49.209 "block_size": 512, 00:13:49.209 "num_blocks": 262144, 00:13:49.209 "uuid": "b8ee8482-123c-11ef-8c90-4585f0cfab08", 00:13:49.209 "assigned_rate_limits": { 00:13:49.209 "rw_ios_per_sec": 0, 00:13:49.209 "rw_mbytes_per_sec": 0, 00:13:49.209 "r_mbytes_per_sec": 0, 00:13:49.209 "w_mbytes_per_sec": 0 00:13:49.209 }, 00:13:49.209 "claimed": false, 00:13:49.209 "zoned": false, 00:13:49.209 "supported_io_types": { 00:13:49.209 "read": true, 00:13:49.209 "write": true, 00:13:49.209 "unmap": true, 00:13:49.209 "write_zeroes": true, 00:13:49.209 "flush": true, 00:13:49.209 
"reset": true, 00:13:49.209 "compare": false, 00:13:49.209 "compare_and_write": false, 00:13:49.209 "abort": false, 00:13:49.209 "nvme_admin": false, 00:13:49.209 "nvme_io": false 00:13:49.209 }, 00:13:49.209 "memory_domains": [ 00:13:49.209 { 00:13:49.209 "dma_device_id": "system", 00:13:49.209 "dma_device_type": 1 00:13:49.209 }, 00:13:49.209 { 00:13:49.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.209 "dma_device_type": 2 00:13:49.209 }, 00:13:49.209 { 00:13:49.209 "dma_device_id": "system", 00:13:49.209 "dma_device_type": 1 00:13:49.209 }, 00:13:49.209 { 00:13:49.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.209 "dma_device_type": 2 00:13:49.209 }, 00:13:49.209 { 00:13:49.209 "dma_device_id": "system", 00:13:49.209 "dma_device_type": 1 00:13:49.209 }, 00:13:49.209 { 00:13:49.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.209 "dma_device_type": 2 00:13:49.209 }, 00:13:49.209 { 00:13:49.209 "dma_device_id": "system", 00:13:49.209 "dma_device_type": 1 00:13:49.209 }, 00:13:49.209 { 00:13:49.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.209 "dma_device_type": 2 00:13:49.210 } 00:13:49.210 ], 00:13:49.210 "driver_specific": { 00:13:49.210 "raid": { 00:13:49.210 "uuid": "b8ee8482-123c-11ef-8c90-4585f0cfab08", 00:13:49.210 "strip_size_kb": 64, 00:13:49.210 "state": "online", 00:13:49.210 "raid_level": "raid0", 00:13:49.210 "superblock": false, 00:13:49.210 "num_base_bdevs": 4, 00:13:49.210 "num_base_bdevs_discovered": 4, 00:13:49.210 "num_base_bdevs_operational": 4, 00:13:49.210 "base_bdevs_list": [ 00:13:49.210 { 00:13:49.210 "name": "NewBaseBdev", 00:13:49.210 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:49.210 "is_configured": true, 00:13:49.210 "data_offset": 0, 00:13:49.210 "data_size": 65536 00:13:49.210 }, 00:13:49.210 { 00:13:49.210 "name": "BaseBdev2", 00:13:49.210 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:49.210 "is_configured": true, 00:13:49.210 "data_offset": 0, 00:13:49.210 "data_size": 65536 00:13:49.210 }, 00:13:49.210 { 00:13:49.210 "name": "BaseBdev3", 00:13:49.210 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:49.210 "is_configured": true, 00:13:49.210 "data_offset": 0, 00:13:49.210 "data_size": 65536 00:13:49.210 }, 00:13:49.210 { 00:13:49.210 "name": "BaseBdev4", 00:13:49.210 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:49.210 "is_configured": true, 00:13:49.210 "data_offset": 0, 00:13:49.210 "data_size": 65536 00:13:49.210 } 00:13:49.210 ] 00:13:49.210 } 00:13:49.210 } 00:13:49.210 }' 00:13:49.210 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.210 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:13:49.210 BaseBdev2 00:13:49.210 BaseBdev3 00:13:49.210 BaseBdev4' 00:13:49.210 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:49.210 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:49.210 21:55:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:49.468 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:49.468 "name": "NewBaseBdev", 00:13:49.468 "aliases": [ 00:13:49.468 "b4ee8f6d-123c-11ef-8c90-4585f0cfab08" 00:13:49.468 ], 00:13:49.468 "product_name": "Malloc 
disk", 00:13:49.468 "block_size": 512, 00:13:49.468 "num_blocks": 65536, 00:13:49.468 "uuid": "b4ee8f6d-123c-11ef-8c90-4585f0cfab08", 00:13:49.468 "assigned_rate_limits": { 00:13:49.468 "rw_ios_per_sec": 0, 00:13:49.468 "rw_mbytes_per_sec": 0, 00:13:49.468 "r_mbytes_per_sec": 0, 00:13:49.468 "w_mbytes_per_sec": 0 00:13:49.469 }, 00:13:49.469 "claimed": true, 00:13:49.469 "claim_type": "exclusive_write", 00:13:49.469 "zoned": false, 00:13:49.469 "supported_io_types": { 00:13:49.469 "read": true, 00:13:49.469 "write": true, 00:13:49.469 "unmap": true, 00:13:49.469 "write_zeroes": true, 00:13:49.469 "flush": true, 00:13:49.469 "reset": true, 00:13:49.469 "compare": false, 00:13:49.469 "compare_and_write": false, 00:13:49.469 "abort": true, 00:13:49.469 "nvme_admin": false, 00:13:49.469 "nvme_io": false 00:13:49.469 }, 00:13:49.469 "memory_domains": [ 00:13:49.469 { 00:13:49.469 "dma_device_id": "system", 00:13:49.469 "dma_device_type": 1 00:13:49.469 }, 00:13:49.469 { 00:13:49.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.469 "dma_device_type": 2 00:13:49.469 } 00:13:49.469 ], 00:13:49.469 "driver_specific": {} 00:13:49.469 }' 00:13:49.469 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:49.469 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:49.728 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:49.987 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:49.987 "name": "BaseBdev2", 00:13:49.987 "aliases": [ 00:13:49.987 "b27152b8-123c-11ef-8c90-4585f0cfab08" 00:13:49.987 ], 00:13:49.987 "product_name": "Malloc disk", 00:13:49.987 "block_size": 512, 00:13:49.987 "num_blocks": 65536, 00:13:49.987 "uuid": "b27152b8-123c-11ef-8c90-4585f0cfab08", 00:13:49.987 "assigned_rate_limits": { 00:13:49.987 "rw_ios_per_sec": 0, 00:13:49.987 "rw_mbytes_per_sec": 0, 00:13:49.987 "r_mbytes_per_sec": 0, 00:13:49.987 "w_mbytes_per_sec": 0 00:13:49.987 }, 00:13:49.987 "claimed": true, 00:13:49.987 "claim_type": "exclusive_write", 00:13:49.987 "zoned": false, 00:13:49.987 "supported_io_types": { 00:13:49.987 "read": 
true, 00:13:49.987 "write": true, 00:13:49.987 "unmap": true, 00:13:49.987 "write_zeroes": true, 00:13:49.987 "flush": true, 00:13:49.987 "reset": true, 00:13:49.987 "compare": false, 00:13:49.987 "compare_and_write": false, 00:13:49.987 "abort": true, 00:13:49.987 "nvme_admin": false, 00:13:49.987 "nvme_io": false 00:13:49.987 }, 00:13:49.987 "memory_domains": [ 00:13:49.987 { 00:13:49.987 "dma_device_id": "system", 00:13:49.987 "dma_device_type": 1 00:13:49.987 }, 00:13:49.987 { 00:13:49.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.987 "dma_device_type": 2 00:13:49.987 } 00:13:49.987 ], 00:13:49.987 "driver_specific": {} 00:13:49.987 }' 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:49.988 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:50.246 "name": "BaseBdev3", 00:13:50.246 "aliases": [ 00:13:50.246 "b2ec0119-123c-11ef-8c90-4585f0cfab08" 00:13:50.246 ], 00:13:50.246 "product_name": "Malloc disk", 00:13:50.246 "block_size": 512, 00:13:50.246 "num_blocks": 65536, 00:13:50.246 "uuid": "b2ec0119-123c-11ef-8c90-4585f0cfab08", 00:13:50.246 "assigned_rate_limits": { 00:13:50.246 "rw_ios_per_sec": 0, 00:13:50.246 "rw_mbytes_per_sec": 0, 00:13:50.246 "r_mbytes_per_sec": 0, 00:13:50.246 "w_mbytes_per_sec": 0 00:13:50.246 }, 00:13:50.246 "claimed": true, 00:13:50.246 "claim_type": "exclusive_write", 00:13:50.246 "zoned": false, 00:13:50.246 "supported_io_types": { 00:13:50.246 "read": true, 00:13:50.246 "write": true, 00:13:50.246 "unmap": true, 00:13:50.246 "write_zeroes": true, 00:13:50.246 "flush": true, 00:13:50.246 "reset": true, 00:13:50.246 "compare": false, 00:13:50.246 "compare_and_write": false, 00:13:50.246 "abort": true, 00:13:50.246 "nvme_admin": false, 00:13:50.246 "nvme_io": false 00:13:50.246 }, 00:13:50.246 "memory_domains": [ 00:13:50.246 { 00:13:50.246 "dma_device_id": "system", 00:13:50.246 "dma_device_type": 1 00:13:50.246 }, 00:13:50.246 { 
00:13:50.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.246 "dma_device_type": 2 00:13:50.246 } 00:13:50.246 ], 00:13:50.246 "driver_specific": {} 00:13:50.246 }' 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.246 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.505 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.505 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.505 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.505 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:50.505 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:50.505 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:50.505 21:55:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:50.764 "name": "BaseBdev4", 00:13:50.764 "aliases": [ 00:13:50.764 "b35f5d87-123c-11ef-8c90-4585f0cfab08" 00:13:50.764 ], 00:13:50.764 "product_name": "Malloc disk", 00:13:50.764 "block_size": 512, 00:13:50.764 "num_blocks": 65536, 00:13:50.764 "uuid": "b35f5d87-123c-11ef-8c90-4585f0cfab08", 00:13:50.764 "assigned_rate_limits": { 00:13:50.764 "rw_ios_per_sec": 0, 00:13:50.764 "rw_mbytes_per_sec": 0, 00:13:50.764 "r_mbytes_per_sec": 0, 00:13:50.764 "w_mbytes_per_sec": 0 00:13:50.764 }, 00:13:50.764 "claimed": true, 00:13:50.764 "claim_type": "exclusive_write", 00:13:50.764 "zoned": false, 00:13:50.764 "supported_io_types": { 00:13:50.764 "read": true, 00:13:50.764 "write": true, 00:13:50.764 "unmap": true, 00:13:50.764 "write_zeroes": true, 00:13:50.764 "flush": true, 00:13:50.764 "reset": true, 00:13:50.764 "compare": false, 00:13:50.764 "compare_and_write": false, 00:13:50.764 "abort": true, 00:13:50.764 "nvme_admin": false, 00:13:50.764 "nvme_io": false 00:13:50.764 }, 00:13:50.764 "memory_domains": [ 00:13:50.764 { 00:13:50.764 "dma_device_id": "system", 00:13:50.764 "dma_device_type": 1 00:13:50.764 }, 00:13:50.764 { 00:13:50.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.764 "dma_device_type": 2 00:13:50.764 } 00:13:50.764 ], 00:13:50.764 "driver_specific": {} 00:13:50.764 }' 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:50.764 
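The jq .block_size / .md_size / .md_interleave / .dif_type pairs repeated above for each member (bdev_raid.sh@206-209, continuing just below for BaseBdev4) appear to compare the raid volume's properties against each base bdev's. Condensed into a sketch; the loop, variable names and socket shorthand are mine rather than the literal test code:

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  raid_info=$($rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[]')
  for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
      base_info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
      # every member reports the same block size as the raid volume (the 512 == 512 checks above)
      [[ $(jq .block_size <<< "$raid_info") == $(jq .block_size <<< "$base_info") ]]
      # malloc bdevs carry no metadata or DIF, so the remaining checks compare null == null
      [[ $(jq .md_size <<< "$raid_info") == $(jq .md_size <<< "$base_info") ]]
      [[ $(jq .md_interleave <<< "$raid_info") == $(jq .md_interleave <<< "$base_info") ]]
      [[ $(jq .dif_type <<< "$raid_info") == $(jq .dif_type <<< "$base_info") ]]
  done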
21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:50.764 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:51.023 [2024-05-14 21:55:51.539085] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.023 [2024-05-14 21:55:51.539119] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.023 [2024-05-14 21:55:51.539142] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.023 [2024-05-14 21:55:51.539158] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.023 [2024-05-14 21:55:51.539163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f6300 name Existed_Raid, state offline 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 57172 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 57172 ']' 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 57172 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 57172 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:13:51.023 killing process with pid 57172 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57172' 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 57172 00:13:51.023 [2024-05-14 21:55:51.569998] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.023 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 57172 00:13:51.023 [2024-05-14 21:55:51.593309] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.281 21:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:13:51.281 ************************************ 00:13:51.281 END TEST 
raid_state_function_test 00:13:51.282 ************************************ 00:13:51.282 00:13:51.282 real 0m28.894s 00:13:51.282 user 0m53.089s 00:13:51.282 sys 0m3.810s 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.282 21:55:51 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:51.282 21:55:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:51.282 21:55:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.282 21:55:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.282 ************************************ 00:13:51.282 START TEST raid_state_function_test_sb 00:13:51.282 ************************************ 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=57999 00:13:51.282 Process raid pid: 57999 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 57999' 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 57999 /var/tmp/spdk-raid.sock 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 57999 ']' 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.282 21:55:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.282 [2024-05-14 21:55:51.848021] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:13:51.282 [2024-05-14 21:55:51.848342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:52.214 EAL: TSC is not safe to use in SMP mode 00:13:52.214 EAL: TSC is not invariant 00:13:52.214 [2024-05-14 21:55:52.453613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.214 [2024-05-14 21:55:52.542688] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
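For context, the raid_pid/waitforlisten lines above amount to starting the stand-alone bdev_svc app that owns /var/tmp/spdk-raid.sock and waiting for its RPC socket; every rpc.py call in this test talks to that process. A rough shell equivalent (the backgrounding with & and $! is my reading of the trace; waitforlisten is the helper from common/autotest_common.sh):

  /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # poll until the UNIX-domain RPC socket is accepting connections
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock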
00:13:52.214 [2024-05-14 21:55:52.545040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.214 [2024-05-14 21:55:52.545832] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.214 [2024-05-14 21:55:52.545847] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.472 21:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.472 21:55:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:13:52.472 21:55:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:52.730 [2024-05-14 21:55:53.194597] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.730 [2024-05-14 21:55:53.194670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.730 [2024-05-14 21:55:53.194675] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.730 [2024-05-14 21:55:53.194701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.730 [2024-05-14 21:55:53.194704] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.730 [2024-05-14 21:55:53.194712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.730 [2024-05-14 21:55:53.194715] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:52.730 [2024-05-14 21:55:53.194723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.730 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.987 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:52.987 "name": "Existed_Raid", 00:13:52.987 "uuid": 
"bbec6bb4-123c-11ef-8c90-4585f0cfab08", 00:13:52.987 "strip_size_kb": 64, 00:13:52.987 "state": "configuring", 00:13:52.987 "raid_level": "raid0", 00:13:52.987 "superblock": true, 00:13:52.987 "num_base_bdevs": 4, 00:13:52.987 "num_base_bdevs_discovered": 0, 00:13:52.987 "num_base_bdevs_operational": 4, 00:13:52.987 "base_bdevs_list": [ 00:13:52.987 { 00:13:52.987 "name": "BaseBdev1", 00:13:52.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.987 "is_configured": false, 00:13:52.987 "data_offset": 0, 00:13:52.987 "data_size": 0 00:13:52.987 }, 00:13:52.987 { 00:13:52.987 "name": "BaseBdev2", 00:13:52.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.988 "is_configured": false, 00:13:52.988 "data_offset": 0, 00:13:52.988 "data_size": 0 00:13:52.988 }, 00:13:52.988 { 00:13:52.988 "name": "BaseBdev3", 00:13:52.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.988 "is_configured": false, 00:13:52.988 "data_offset": 0, 00:13:52.988 "data_size": 0 00:13:52.988 }, 00:13:52.988 { 00:13:52.988 "name": "BaseBdev4", 00:13:52.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.988 "is_configured": false, 00:13:52.988 "data_offset": 0, 00:13:52.988 "data_size": 0 00:13:52.988 } 00:13:52.988 ] 00:13:52.988 }' 00:13:52.988 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:52.988 21:55:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.244 21:55:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:53.809 [2024-05-14 21:55:54.098578] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.809 [2024-05-14 21:55:54.098610] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ad5e300 name Existed_Raid, state configuring 00:13:53.809 21:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:53.809 [2024-05-14 21:55:54.358595] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.809 [2024-05-14 21:55:54.358653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.809 [2024-05-14 21:55:54.358659] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.809 [2024-05-14 21:55:54.358668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.809 [2024-05-14 21:55:54.358671] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.809 [2024-05-14 21:55:54.358679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.809 [2024-05-14 21:55:54.358682] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:53.809 [2024-05-14 21:55:54.358689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:53.809 21:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.067 [2024-05-14 21:55:54.611620] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:13:54.067 BaseBdev1 00:13:54.067 21:55:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:54.067 21:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:54.067 21:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:54.067 21:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:54.067 21:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:54.067 21:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:54.067 21:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.323 21:55:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.581 [ 00:13:54.581 { 00:13:54.581 "name": "BaseBdev1", 00:13:54.581 "aliases": [ 00:13:54.581 "bcc47d67-123c-11ef-8c90-4585f0cfab08" 00:13:54.581 ], 00:13:54.581 "product_name": "Malloc disk", 00:13:54.581 "block_size": 512, 00:13:54.581 "num_blocks": 65536, 00:13:54.581 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:13:54.581 "assigned_rate_limits": { 00:13:54.581 "rw_ios_per_sec": 0, 00:13:54.581 "rw_mbytes_per_sec": 0, 00:13:54.581 "r_mbytes_per_sec": 0, 00:13:54.581 "w_mbytes_per_sec": 0 00:13:54.581 }, 00:13:54.581 "claimed": true, 00:13:54.581 "claim_type": "exclusive_write", 00:13:54.581 "zoned": false, 00:13:54.581 "supported_io_types": { 00:13:54.581 "read": true, 00:13:54.581 "write": true, 00:13:54.581 "unmap": true, 00:13:54.581 "write_zeroes": true, 00:13:54.581 "flush": true, 00:13:54.581 "reset": true, 00:13:54.581 "compare": false, 00:13:54.581 "compare_and_write": false, 00:13:54.581 "abort": true, 00:13:54.581 "nvme_admin": false, 00:13:54.581 "nvme_io": false 00:13:54.581 }, 00:13:54.581 "memory_domains": [ 00:13:54.581 { 00:13:54.581 "dma_device_id": "system", 00:13:54.581 "dma_device_type": 1 00:13:54.581 }, 00:13:54.581 { 00:13:54.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.581 "dma_device_type": 2 00:13:54.581 } 00:13:54.581 ], 00:13:54.581 "driver_specific": {} 00:13:54.581 } 00:13:54.581 ] 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.581 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.839 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:54.839 "name": "Existed_Raid", 00:13:54.839 "uuid": "bc9e0868-123c-11ef-8c90-4585f0cfab08", 00:13:54.839 "strip_size_kb": 64, 00:13:54.839 "state": "configuring", 00:13:54.839 "raid_level": "raid0", 00:13:54.839 "superblock": true, 00:13:54.839 "num_base_bdevs": 4, 00:13:54.839 "num_base_bdevs_discovered": 1, 00:13:54.839 "num_base_bdevs_operational": 4, 00:13:54.839 "base_bdevs_list": [ 00:13:54.839 { 00:13:54.839 "name": "BaseBdev1", 00:13:54.839 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:13:54.839 "is_configured": true, 00:13:54.839 "data_offset": 2048, 00:13:54.839 "data_size": 63488 00:13:54.839 }, 00:13:54.839 { 00:13:54.839 "name": "BaseBdev2", 00:13:54.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.839 "is_configured": false, 00:13:54.839 "data_offset": 0, 00:13:54.839 "data_size": 0 00:13:54.839 }, 00:13:54.839 { 00:13:54.839 "name": "BaseBdev3", 00:13:54.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.839 "is_configured": false, 00:13:54.839 "data_offset": 0, 00:13:54.839 "data_size": 0 00:13:54.839 }, 00:13:54.839 { 00:13:54.839 "name": "BaseBdev4", 00:13:54.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.839 "is_configured": false, 00:13:54.839 "data_offset": 0, 00:13:54.839 "data_size": 0 00:13:54.839 } 00:13:54.839 ] 00:13:54.839 }' 00:13:54.839 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:54.839 21:55:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.404 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:55.404 [2024-05-14 21:55:55.930639] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.404 [2024-05-14 21:55:55.930678] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ad5e300 name Existed_Raid, state configuring 00:13:55.404 21:55:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:55.662 [2024-05-14 21:55:56.198668] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.662 [2024-05-14 21:55:56.199470] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.662 [2024-05-14 21:55:56.199513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.662 [2024-05-14 21:55:56.199519] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.662 [2024-05-14 21:55:56.199527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:13:55.662 [2024-05-14 21:55:56.199531] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:55.662 [2024-05-14 21:55:56.199538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.663 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.919 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:55.919 "name": "Existed_Raid", 00:13:55.920 "uuid": "bdb6cdfa-123c-11ef-8c90-4585f0cfab08", 00:13:55.920 "strip_size_kb": 64, 00:13:55.920 "state": "configuring", 00:13:55.920 "raid_level": "raid0", 00:13:55.920 "superblock": true, 00:13:55.920 "num_base_bdevs": 4, 00:13:55.920 "num_base_bdevs_discovered": 1, 00:13:55.920 "num_base_bdevs_operational": 4, 00:13:55.920 "base_bdevs_list": [ 00:13:55.920 { 00:13:55.920 "name": "BaseBdev1", 00:13:55.920 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:13:55.920 "is_configured": true, 00:13:55.920 "data_offset": 2048, 00:13:55.920 "data_size": 63488 00:13:55.920 }, 00:13:55.920 { 00:13:55.920 "name": "BaseBdev2", 00:13:55.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.920 "is_configured": false, 00:13:55.920 "data_offset": 0, 00:13:55.920 "data_size": 0 00:13:55.920 }, 00:13:55.920 { 00:13:55.920 "name": "BaseBdev3", 00:13:55.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.920 "is_configured": false, 00:13:55.920 "data_offset": 0, 00:13:55.920 "data_size": 0 00:13:55.920 }, 00:13:55.920 { 00:13:55.920 "name": "BaseBdev4", 00:13:55.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.920 "is_configured": false, 00:13:55.920 "data_offset": 0, 00:13:55.920 "data_size": 0 00:13:55.920 } 00:13:55.920 ] 00:13:55.920 }' 00:13:55.920 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
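The only switch that differs from the non-superblock run is -s on the bdev_raid_create calls traced above: the raid is registered with an on-disk superblock, so each 65536-block member sets aside room at its start, which is why the configured member above reports data_offset 2048 and data_size 63488 instead of the 0 / 65536 seen earlier. A condensed sketch of the create-and-check step; the shorthand variables are mine, the command line is as traced:

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # same raid0 layout as before, but -s asks for an on-disk superblock on every member
  $rpc -s $sock bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # with only BaseBdev1 registered, the array stays in "configuring"
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'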
00:13:55.920 21:55:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.484 21:55:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:56.484 [2024-05-14 21:55:57.062800] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.484 BaseBdev2 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.743 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.002 [ 00:13:57.002 { 00:13:57.002 "name": "BaseBdev2", 00:13:57.002 "aliases": [ 00:13:57.002 "be3aa47d-123c-11ef-8c90-4585f0cfab08" 00:13:57.002 ], 00:13:57.002 "product_name": "Malloc disk", 00:13:57.002 "block_size": 512, 00:13:57.002 "num_blocks": 65536, 00:13:57.002 "uuid": "be3aa47d-123c-11ef-8c90-4585f0cfab08", 00:13:57.002 "assigned_rate_limits": { 00:13:57.002 "rw_ios_per_sec": 0, 00:13:57.002 "rw_mbytes_per_sec": 0, 00:13:57.002 "r_mbytes_per_sec": 0, 00:13:57.002 "w_mbytes_per_sec": 0 00:13:57.002 }, 00:13:57.002 "claimed": true, 00:13:57.002 "claim_type": "exclusive_write", 00:13:57.002 "zoned": false, 00:13:57.002 "supported_io_types": { 00:13:57.002 "read": true, 00:13:57.002 "write": true, 00:13:57.002 "unmap": true, 00:13:57.002 "write_zeroes": true, 00:13:57.002 "flush": true, 00:13:57.002 "reset": true, 00:13:57.002 "compare": false, 00:13:57.002 "compare_and_write": false, 00:13:57.002 "abort": true, 00:13:57.002 "nvme_admin": false, 00:13:57.002 "nvme_io": false 00:13:57.002 }, 00:13:57.002 "memory_domains": [ 00:13:57.002 { 00:13:57.002 "dma_device_id": "system", 00:13:57.002 "dma_device_type": 1 00:13:57.002 }, 00:13:57.002 { 00:13:57.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.002 "dma_device_type": 2 00:13:57.002 } 00:13:57.002 ], 00:13:57.002 "driver_specific": {} 00:13:57.002 } 00:13:57.002 ] 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:57.002 21:55:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.002 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.260 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:57.260 "name": "Existed_Raid", 00:13:57.260 "uuid": "bdb6cdfa-123c-11ef-8c90-4585f0cfab08", 00:13:57.260 "strip_size_kb": 64, 00:13:57.260 "state": "configuring", 00:13:57.260 "raid_level": "raid0", 00:13:57.260 "superblock": true, 00:13:57.260 "num_base_bdevs": 4, 00:13:57.260 "num_base_bdevs_discovered": 2, 00:13:57.260 "num_base_bdevs_operational": 4, 00:13:57.260 "base_bdevs_list": [ 00:13:57.260 { 00:13:57.260 "name": "BaseBdev1", 00:13:57.260 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:13:57.260 "is_configured": true, 00:13:57.260 "data_offset": 2048, 00:13:57.260 "data_size": 63488 00:13:57.260 }, 00:13:57.260 { 00:13:57.260 "name": "BaseBdev2", 00:13:57.260 "uuid": "be3aa47d-123c-11ef-8c90-4585f0cfab08", 00:13:57.260 "is_configured": true, 00:13:57.260 "data_offset": 2048, 00:13:57.260 "data_size": 63488 00:13:57.260 }, 00:13:57.260 { 00:13:57.260 "name": "BaseBdev3", 00:13:57.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.260 "is_configured": false, 00:13:57.260 "data_offset": 0, 00:13:57.260 "data_size": 0 00:13:57.260 }, 00:13:57.260 { 00:13:57.260 "name": "BaseBdev4", 00:13:57.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.260 "is_configured": false, 00:13:57.260 "data_offset": 0, 00:13:57.260 "data_size": 0 00:13:57.260 } 00:13:57.260 ] 00:13:57.260 }' 00:13:57.260 21:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:57.260 21:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.827 [2024-05-14 21:55:58.358811] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.827 BaseBdev3 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:57.827 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.085 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:58.342 [ 00:13:58.342 { 00:13:58.342 "name": "BaseBdev3", 00:13:58.342 "aliases": [ 00:13:58.342 "bf006673-123c-11ef-8c90-4585f0cfab08" 00:13:58.342 ], 00:13:58.342 "product_name": "Malloc disk", 00:13:58.342 "block_size": 512, 00:13:58.342 "num_blocks": 65536, 00:13:58.342 "uuid": "bf006673-123c-11ef-8c90-4585f0cfab08", 00:13:58.342 "assigned_rate_limits": { 00:13:58.342 "rw_ios_per_sec": 0, 00:13:58.342 "rw_mbytes_per_sec": 0, 00:13:58.342 "r_mbytes_per_sec": 0, 00:13:58.342 "w_mbytes_per_sec": 0 00:13:58.342 }, 00:13:58.342 "claimed": true, 00:13:58.342 "claim_type": "exclusive_write", 00:13:58.342 "zoned": false, 00:13:58.342 "supported_io_types": { 00:13:58.342 "read": true, 00:13:58.342 "write": true, 00:13:58.342 "unmap": true, 00:13:58.342 "write_zeroes": true, 00:13:58.342 "flush": true, 00:13:58.342 "reset": true, 00:13:58.342 "compare": false, 00:13:58.342 "compare_and_write": false, 00:13:58.342 "abort": true, 00:13:58.342 "nvme_admin": false, 00:13:58.342 "nvme_io": false 00:13:58.342 }, 00:13:58.342 "memory_domains": [ 00:13:58.342 { 00:13:58.342 "dma_device_id": "system", 00:13:58.342 "dma_device_type": 1 00:13:58.342 }, 00:13:58.342 { 00:13:58.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.343 "dma_device_type": 2 00:13:58.343 } 00:13:58.343 ], 00:13:58.343 "driver_specific": {} 00:13:58.343 } 00:13:58.343 ] 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:58.343 21:55:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.343 21:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.600 21:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.600 "name": "Existed_Raid", 00:13:58.600 "uuid": "bdb6cdfa-123c-11ef-8c90-4585f0cfab08", 00:13:58.600 "strip_size_kb": 64, 00:13:58.600 "state": "configuring", 00:13:58.600 "raid_level": "raid0", 00:13:58.600 "superblock": true, 00:13:58.600 "num_base_bdevs": 4, 00:13:58.600 "num_base_bdevs_discovered": 3, 00:13:58.600 "num_base_bdevs_operational": 4, 00:13:58.600 "base_bdevs_list": [ 00:13:58.600 { 00:13:58.600 "name": "BaseBdev1", 00:13:58.600 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:13:58.600 "is_configured": true, 00:13:58.600 "data_offset": 2048, 00:13:58.600 "data_size": 63488 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "name": "BaseBdev2", 00:13:58.600 "uuid": "be3aa47d-123c-11ef-8c90-4585f0cfab08", 00:13:58.600 "is_configured": true, 00:13:58.600 "data_offset": 2048, 00:13:58.600 "data_size": 63488 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "name": "BaseBdev3", 00:13:58.600 "uuid": "bf006673-123c-11ef-8c90-4585f0cfab08", 00:13:58.600 "is_configured": true, 00:13:58.600 "data_offset": 2048, 00:13:58.600 "data_size": 63488 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "name": "BaseBdev4", 00:13:58.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.600 "is_configured": false, 00:13:58.600 "data_offset": 0, 00:13:58.600 "data_size": 0 00:13:58.600 } 00:13:58.600 ] 00:13:58.600 }' 00:13:58.600 21:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.600 21:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.167 21:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:59.167 [2024-05-14 21:55:59.722927] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:59.167 [2024-05-14 21:55:59.722999] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ad5e300 00:13:59.167 [2024-05-14 21:55:59.723006] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:59.167 [2024-05-14 21:55:59.723028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82adbcec0 00:13:59.167 [2024-05-14 21:55:59.723083] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ad5e300 00:13:59.167 [2024-05-14 21:55:59.723088] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ad5e300 00:13:59.167 [2024-05-14 21:55:59.723109] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.167 BaseBdev4 00:13:59.167 21:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:13:59.167 21:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:13:59.167 21:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:59.167 21:55:59 
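With the fourth member claimed, the array configures and comes up with blockcnt 253952 and blocklen 512 (logged just above): a raid0 volume exposes the sum of its members' data regions, and 4 x 63488 = 253952. A one-liner to read that back over RPC; the .[0] index and the shorthand are mine:

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # 4 members x 63488 data blocks each = 253952 blocks
  $rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[0].num_blocks'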
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:59.167 21:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:59.167 21:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:59.167 21:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:59.733 [ 00:13:59.733 { 00:13:59.733 "name": "BaseBdev4", 00:13:59.733 "aliases": [ 00:13:59.733 "bfd08b16-123c-11ef-8c90-4585f0cfab08" 00:13:59.733 ], 00:13:59.733 "product_name": "Malloc disk", 00:13:59.733 "block_size": 512, 00:13:59.733 "num_blocks": 65536, 00:13:59.733 "uuid": "bfd08b16-123c-11ef-8c90-4585f0cfab08", 00:13:59.733 "assigned_rate_limits": { 00:13:59.733 "rw_ios_per_sec": 0, 00:13:59.733 "rw_mbytes_per_sec": 0, 00:13:59.733 "r_mbytes_per_sec": 0, 00:13:59.733 "w_mbytes_per_sec": 0 00:13:59.733 }, 00:13:59.733 "claimed": true, 00:13:59.733 "claim_type": "exclusive_write", 00:13:59.733 "zoned": false, 00:13:59.733 "supported_io_types": { 00:13:59.733 "read": true, 00:13:59.733 "write": true, 00:13:59.733 "unmap": true, 00:13:59.733 "write_zeroes": true, 00:13:59.733 "flush": true, 00:13:59.733 "reset": true, 00:13:59.733 "compare": false, 00:13:59.733 "compare_and_write": false, 00:13:59.733 "abort": true, 00:13:59.733 "nvme_admin": false, 00:13:59.733 "nvme_io": false 00:13:59.733 }, 00:13:59.733 "memory_domains": [ 00:13:59.733 { 00:13:59.733 "dma_device_id": "system", 00:13:59.733 "dma_device_type": 1 00:13:59.733 }, 00:13:59.733 { 00:13:59.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.733 "dma_device_type": 2 00:13:59.733 } 00:13:59.733 ], 00:13:59.733 "driver_specific": {} 00:13:59.733 } 00:13:59.733 ] 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.733 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.992 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:59.992 "name": "Existed_Raid", 00:13:59.992 "uuid": "bdb6cdfa-123c-11ef-8c90-4585f0cfab08", 00:13:59.992 "strip_size_kb": 64, 00:13:59.992 "state": "online", 00:13:59.992 "raid_level": "raid0", 00:13:59.992 "superblock": true, 00:13:59.992 "num_base_bdevs": 4, 00:13:59.992 "num_base_bdevs_discovered": 4, 00:13:59.992 "num_base_bdevs_operational": 4, 00:13:59.992 "base_bdevs_list": [ 00:13:59.992 { 00:13:59.992 "name": "BaseBdev1", 00:13:59.992 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:13:59.992 "is_configured": true, 00:13:59.992 "data_offset": 2048, 00:13:59.992 "data_size": 63488 00:13:59.992 }, 00:13:59.992 { 00:13:59.992 "name": "BaseBdev2", 00:13:59.992 "uuid": "be3aa47d-123c-11ef-8c90-4585f0cfab08", 00:13:59.992 "is_configured": true, 00:13:59.992 "data_offset": 2048, 00:13:59.992 "data_size": 63488 00:13:59.992 }, 00:13:59.992 { 00:13:59.992 "name": "BaseBdev3", 00:13:59.992 "uuid": "bf006673-123c-11ef-8c90-4585f0cfab08", 00:13:59.992 "is_configured": true, 00:13:59.992 "data_offset": 2048, 00:13:59.992 "data_size": 63488 00:13:59.992 }, 00:13:59.992 { 00:13:59.992 "name": "BaseBdev4", 00:13:59.992 "uuid": "bfd08b16-123c-11ef-8c90-4585f0cfab08", 00:13:59.992 "is_configured": true, 00:13:59.992 "data_offset": 2048, 00:13:59.992 "data_size": 63488 00:13:59.992 } 00:13:59.992 ] 00:13:59.992 }' 00:13:59.992 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:59.992 21:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.561 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:14:00.561 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:00.561 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:00.561 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:00.561 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:00.561 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:14:00.561 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:00.562 21:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:00.562 [2024-05-14 21:56:01.138925] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.821 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:00.821 "name": "Existed_Raid", 00:14:00.821 "aliases": [ 00:14:00.821 "bdb6cdfa-123c-11ef-8c90-4585f0cfab08" 00:14:00.821 ], 00:14:00.821 "product_name": "Raid Volume", 00:14:00.821 "block_size": 512, 00:14:00.821 "num_blocks": 253952, 00:14:00.821 "uuid": "bdb6cdfa-123c-11ef-8c90-4585f0cfab08", 00:14:00.821 
"assigned_rate_limits": { 00:14:00.821 "rw_ios_per_sec": 0, 00:14:00.821 "rw_mbytes_per_sec": 0, 00:14:00.821 "r_mbytes_per_sec": 0, 00:14:00.821 "w_mbytes_per_sec": 0 00:14:00.821 }, 00:14:00.821 "claimed": false, 00:14:00.821 "zoned": false, 00:14:00.821 "supported_io_types": { 00:14:00.821 "read": true, 00:14:00.821 "write": true, 00:14:00.821 "unmap": true, 00:14:00.821 "write_zeroes": true, 00:14:00.821 "flush": true, 00:14:00.821 "reset": true, 00:14:00.821 "compare": false, 00:14:00.821 "compare_and_write": false, 00:14:00.821 "abort": false, 00:14:00.821 "nvme_admin": false, 00:14:00.821 "nvme_io": false 00:14:00.821 }, 00:14:00.821 "memory_domains": [ 00:14:00.821 { 00:14:00.821 "dma_device_id": "system", 00:14:00.821 "dma_device_type": 1 00:14:00.821 }, 00:14:00.821 { 00:14:00.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.821 "dma_device_type": 2 00:14:00.821 }, 00:14:00.821 { 00:14:00.821 "dma_device_id": "system", 00:14:00.821 "dma_device_type": 1 00:14:00.821 }, 00:14:00.821 { 00:14:00.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.821 "dma_device_type": 2 00:14:00.821 }, 00:14:00.821 { 00:14:00.821 "dma_device_id": "system", 00:14:00.821 "dma_device_type": 1 00:14:00.821 }, 00:14:00.821 { 00:14:00.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.821 "dma_device_type": 2 00:14:00.821 }, 00:14:00.821 { 00:14:00.821 "dma_device_id": "system", 00:14:00.821 "dma_device_type": 1 00:14:00.821 }, 00:14:00.821 { 00:14:00.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.821 "dma_device_type": 2 00:14:00.821 } 00:14:00.821 ], 00:14:00.821 "driver_specific": { 00:14:00.821 "raid": { 00:14:00.821 "uuid": "bdb6cdfa-123c-11ef-8c90-4585f0cfab08", 00:14:00.821 "strip_size_kb": 64, 00:14:00.821 "state": "online", 00:14:00.821 "raid_level": "raid0", 00:14:00.821 "superblock": true, 00:14:00.821 "num_base_bdevs": 4, 00:14:00.821 "num_base_bdevs_discovered": 4, 00:14:00.822 "num_base_bdevs_operational": 4, 00:14:00.822 "base_bdevs_list": [ 00:14:00.822 { 00:14:00.822 "name": "BaseBdev1", 00:14:00.822 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:14:00.822 "is_configured": true, 00:14:00.822 "data_offset": 2048, 00:14:00.822 "data_size": 63488 00:14:00.822 }, 00:14:00.822 { 00:14:00.822 "name": "BaseBdev2", 00:14:00.822 "uuid": "be3aa47d-123c-11ef-8c90-4585f0cfab08", 00:14:00.822 "is_configured": true, 00:14:00.822 "data_offset": 2048, 00:14:00.822 "data_size": 63488 00:14:00.822 }, 00:14:00.822 { 00:14:00.822 "name": "BaseBdev3", 00:14:00.822 "uuid": "bf006673-123c-11ef-8c90-4585f0cfab08", 00:14:00.822 "is_configured": true, 00:14:00.822 "data_offset": 2048, 00:14:00.822 "data_size": 63488 00:14:00.822 }, 00:14:00.822 { 00:14:00.822 "name": "BaseBdev4", 00:14:00.822 "uuid": "bfd08b16-123c-11ef-8c90-4585f0cfab08", 00:14:00.822 "is_configured": true, 00:14:00.822 "data_offset": 2048, 00:14:00.822 "data_size": 63488 00:14:00.822 } 00:14:00.822 ] 00:14:00.822 } 00:14:00.822 } 00:14:00.822 }' 00:14:00.822 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.822 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:14:00.822 BaseBdev2 00:14:00.822 BaseBdev3 00:14:00.822 BaseBdev4' 00:14:00.822 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:00.822 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:00.822 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:01.080 "name": "BaseBdev1", 00:14:01.080 "aliases": [ 00:14:01.080 "bcc47d67-123c-11ef-8c90-4585f0cfab08" 00:14:01.080 ], 00:14:01.080 "product_name": "Malloc disk", 00:14:01.080 "block_size": 512, 00:14:01.080 "num_blocks": 65536, 00:14:01.080 "uuid": "bcc47d67-123c-11ef-8c90-4585f0cfab08", 00:14:01.080 "assigned_rate_limits": { 00:14:01.080 "rw_ios_per_sec": 0, 00:14:01.080 "rw_mbytes_per_sec": 0, 00:14:01.080 "r_mbytes_per_sec": 0, 00:14:01.080 "w_mbytes_per_sec": 0 00:14:01.080 }, 00:14:01.080 "claimed": true, 00:14:01.080 "claim_type": "exclusive_write", 00:14:01.080 "zoned": false, 00:14:01.080 "supported_io_types": { 00:14:01.080 "read": true, 00:14:01.080 "write": true, 00:14:01.080 "unmap": true, 00:14:01.080 "write_zeroes": true, 00:14:01.080 "flush": true, 00:14:01.080 "reset": true, 00:14:01.080 "compare": false, 00:14:01.080 "compare_and_write": false, 00:14:01.080 "abort": true, 00:14:01.080 "nvme_admin": false, 00:14:01.080 "nvme_io": false 00:14:01.080 }, 00:14:01.080 "memory_domains": [ 00:14:01.080 { 00:14:01.080 "dma_device_id": "system", 00:14:01.080 "dma_device_type": 1 00:14:01.080 }, 00:14:01.080 { 00:14:01.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.080 "dma_device_type": 2 00:14:01.080 } 00:14:01.080 ], 00:14:01.080 "driver_specific": {} 00:14:01.080 }' 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:01.080 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:01.338 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:01.338 "name": "BaseBdev2", 00:14:01.338 "aliases": [ 00:14:01.339 "be3aa47d-123c-11ef-8c90-4585f0cfab08" 00:14:01.339 ], 
00:14:01.339 "product_name": "Malloc disk", 00:14:01.339 "block_size": 512, 00:14:01.339 "num_blocks": 65536, 00:14:01.339 "uuid": "be3aa47d-123c-11ef-8c90-4585f0cfab08", 00:14:01.339 "assigned_rate_limits": { 00:14:01.339 "rw_ios_per_sec": 0, 00:14:01.339 "rw_mbytes_per_sec": 0, 00:14:01.339 "r_mbytes_per_sec": 0, 00:14:01.339 "w_mbytes_per_sec": 0 00:14:01.339 }, 00:14:01.339 "claimed": true, 00:14:01.339 "claim_type": "exclusive_write", 00:14:01.339 "zoned": false, 00:14:01.339 "supported_io_types": { 00:14:01.339 "read": true, 00:14:01.339 "write": true, 00:14:01.339 "unmap": true, 00:14:01.339 "write_zeroes": true, 00:14:01.339 "flush": true, 00:14:01.339 "reset": true, 00:14:01.339 "compare": false, 00:14:01.339 "compare_and_write": false, 00:14:01.339 "abort": true, 00:14:01.339 "nvme_admin": false, 00:14:01.339 "nvme_io": false 00:14:01.339 }, 00:14:01.339 "memory_domains": [ 00:14:01.339 { 00:14:01.339 "dma_device_id": "system", 00:14:01.339 "dma_device_type": 1 00:14:01.339 }, 00:14:01.339 { 00:14:01.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.339 "dma_device_type": 2 00:14:01.339 } 00:14:01.339 ], 00:14:01.339 "driver_specific": {} 00:14:01.339 }' 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:01.339 21:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:01.597 "name": "BaseBdev3", 00:14:01.597 "aliases": [ 00:14:01.597 "bf006673-123c-11ef-8c90-4585f0cfab08" 00:14:01.597 ], 00:14:01.597 "product_name": "Malloc disk", 00:14:01.597 "block_size": 512, 00:14:01.597 "num_blocks": 65536, 00:14:01.597 "uuid": "bf006673-123c-11ef-8c90-4585f0cfab08", 00:14:01.597 "assigned_rate_limits": { 00:14:01.597 "rw_ios_per_sec": 0, 00:14:01.597 "rw_mbytes_per_sec": 0, 00:14:01.597 "r_mbytes_per_sec": 0, 00:14:01.597 "w_mbytes_per_sec": 0 00:14:01.597 }, 00:14:01.597 "claimed": true, 00:14:01.597 "claim_type": "exclusive_write", 
00:14:01.597 "zoned": false, 00:14:01.597 "supported_io_types": { 00:14:01.597 "read": true, 00:14:01.597 "write": true, 00:14:01.597 "unmap": true, 00:14:01.597 "write_zeroes": true, 00:14:01.597 "flush": true, 00:14:01.597 "reset": true, 00:14:01.597 "compare": false, 00:14:01.597 "compare_and_write": false, 00:14:01.597 "abort": true, 00:14:01.597 "nvme_admin": false, 00:14:01.597 "nvme_io": false 00:14:01.597 }, 00:14:01.597 "memory_domains": [ 00:14:01.597 { 00:14:01.597 "dma_device_id": "system", 00:14:01.597 "dma_device_type": 1 00:14:01.597 }, 00:14:01.597 { 00:14:01.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.597 "dma_device_type": 2 00:14:01.597 } 00:14:01.597 ], 00:14:01.597 "driver_specific": {} 00:14:01.597 }' 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:01.597 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:01.855 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:01.855 "name": "BaseBdev4", 00:14:01.855 "aliases": [ 00:14:01.855 "bfd08b16-123c-11ef-8c90-4585f0cfab08" 00:14:01.855 ], 00:14:01.855 "product_name": "Malloc disk", 00:14:01.855 "block_size": 512, 00:14:01.855 "num_blocks": 65536, 00:14:01.855 "uuid": "bfd08b16-123c-11ef-8c90-4585f0cfab08", 00:14:01.855 "assigned_rate_limits": { 00:14:01.855 "rw_ios_per_sec": 0, 00:14:01.855 "rw_mbytes_per_sec": 0, 00:14:01.855 "r_mbytes_per_sec": 0, 00:14:01.855 "w_mbytes_per_sec": 0 00:14:01.855 }, 00:14:01.855 "claimed": true, 00:14:01.855 "claim_type": "exclusive_write", 00:14:01.855 "zoned": false, 00:14:01.855 "supported_io_types": { 00:14:01.855 "read": true, 00:14:01.855 "write": true, 00:14:01.855 "unmap": true, 00:14:01.855 "write_zeroes": true, 00:14:01.855 "flush": true, 00:14:01.855 "reset": true, 00:14:01.855 "compare": false, 00:14:01.855 "compare_and_write": false, 00:14:01.855 "abort": true, 00:14:01.855 "nvme_admin": false, 00:14:01.855 "nvme_io": false 00:14:01.855 }, 00:14:01.856 
"memory_domains": [ 00:14:01.856 { 00:14:01.856 "dma_device_id": "system", 00:14:01.856 "dma_device_type": 1 00:14:01.856 }, 00:14:01.856 { 00:14:01.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.856 "dma_device_type": 2 00:14:01.856 } 00:14:01.856 ], 00:14:01.856 "driver_specific": {} 00:14:01.856 }' 00:14:01.856 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.856 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:01.856 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:01.856 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:02.114 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:02.373 [2024-05-14 21:56:02.759027] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.373 [2024-05-14 21:56:02.759052] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.373 [2024-05-14 21:56:02.759081] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:02.373 21:56:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.373 21:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.631 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:02.631 "name": "Existed_Raid", 00:14:02.631 "uuid": "bdb6cdfa-123c-11ef-8c90-4585f0cfab08", 00:14:02.631 "strip_size_kb": 64, 00:14:02.631 "state": "offline", 00:14:02.631 "raid_level": "raid0", 00:14:02.631 "superblock": true, 00:14:02.631 "num_base_bdevs": 4, 00:14:02.631 "num_base_bdevs_discovered": 3, 00:14:02.631 "num_base_bdevs_operational": 3, 00:14:02.631 "base_bdevs_list": [ 00:14:02.631 { 00:14:02.631 "name": null, 00:14:02.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.631 "is_configured": false, 00:14:02.631 "data_offset": 2048, 00:14:02.631 "data_size": 63488 00:14:02.631 }, 00:14:02.631 { 00:14:02.631 "name": "BaseBdev2", 00:14:02.631 "uuid": "be3aa47d-123c-11ef-8c90-4585f0cfab08", 00:14:02.631 "is_configured": true, 00:14:02.631 "data_offset": 2048, 00:14:02.631 "data_size": 63488 00:14:02.631 }, 00:14:02.631 { 00:14:02.631 "name": "BaseBdev3", 00:14:02.631 "uuid": "bf006673-123c-11ef-8c90-4585f0cfab08", 00:14:02.631 "is_configured": true, 00:14:02.631 "data_offset": 2048, 00:14:02.631 "data_size": 63488 00:14:02.631 }, 00:14:02.631 { 00:14:02.631 "name": "BaseBdev4", 00:14:02.631 "uuid": "bfd08b16-123c-11ef-8c90-4585f0cfab08", 00:14:02.631 "is_configured": true, 00:14:02.631 "data_offset": 2048, 00:14:02.631 "data_size": 63488 00:14:02.631 } 00:14:02.631 ] 00:14:02.631 }' 00:14:02.631 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:02.631 21:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.891 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:02.891 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:02.891 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.891 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:03.148 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:03.148 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.148 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:03.407 [2024-05-14 21:56:03.893351] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.407 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.407 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs 
)) 00:14:03.407 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.407 21:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:03.665 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:03.665 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.665 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:03.924 [2024-05-14 21:56:04.399567] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:03.924 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.924 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.924 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.924 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:04.185 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:04.185 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:04.185 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:04.472 [2024-05-14 21:56:04.965434] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:04.472 [2024-05-14 21:56:04.965484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ad5e300 name Existed_Raid, state offline 00:14:04.472 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:04.472 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.472 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.472 21:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:04.730 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:04.730 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:04.730 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:14:04.730 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:04.730 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:04.730 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.988 BaseBdev2 00:14:04.988 21:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:04.988 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local 
bdev_name=BaseBdev2 00:14:04.988 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:04.988 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:04.988 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:04.988 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:04.988 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.245 21:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:05.502 [ 00:14:05.502 { 00:14:05.502 "name": "BaseBdev2", 00:14:05.502 "aliases": [ 00:14:05.502 "c3429797-123c-11ef-8c90-4585f0cfab08" 00:14:05.502 ], 00:14:05.502 "product_name": "Malloc disk", 00:14:05.502 "block_size": 512, 00:14:05.502 "num_blocks": 65536, 00:14:05.502 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:05.502 "assigned_rate_limits": { 00:14:05.502 "rw_ios_per_sec": 0, 00:14:05.502 "rw_mbytes_per_sec": 0, 00:14:05.502 "r_mbytes_per_sec": 0, 00:14:05.502 "w_mbytes_per_sec": 0 00:14:05.502 }, 00:14:05.502 "claimed": false, 00:14:05.502 "zoned": false, 00:14:05.502 "supported_io_types": { 00:14:05.502 "read": true, 00:14:05.502 "write": true, 00:14:05.502 "unmap": true, 00:14:05.502 "write_zeroes": true, 00:14:05.502 "flush": true, 00:14:05.502 "reset": true, 00:14:05.502 "compare": false, 00:14:05.502 "compare_and_write": false, 00:14:05.502 "abort": true, 00:14:05.502 "nvme_admin": false, 00:14:05.502 "nvme_io": false 00:14:05.502 }, 00:14:05.502 "memory_domains": [ 00:14:05.502 { 00:14:05.502 "dma_device_id": "system", 00:14:05.502 "dma_device_type": 1 00:14:05.502 }, 00:14:05.502 { 00:14:05.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.502 "dma_device_type": 2 00:14:05.502 } 00:14:05.502 ], 00:14:05.502 "driver_specific": {} 00:14:05.502 } 00:14:05.502 ] 00:14:05.502 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:05.502 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:05.502 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:05.502 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.760 BaseBdev3 00:14:05.760 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:05.760 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:05.760 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:05.760 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:05.760 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:05.760 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:05.760 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:06.326 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:06.326 [ 00:14:06.326 { 00:14:06.326 "name": "BaseBdev3", 00:14:06.326 "aliases": [ 00:14:06.326 "c3bb7167-123c-11ef-8c90-4585f0cfab08" 00:14:06.326 ], 00:14:06.326 "product_name": "Malloc disk", 00:14:06.326 "block_size": 512, 00:14:06.326 "num_blocks": 65536, 00:14:06.326 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:06.326 "assigned_rate_limits": { 00:14:06.326 "rw_ios_per_sec": 0, 00:14:06.326 "rw_mbytes_per_sec": 0, 00:14:06.326 "r_mbytes_per_sec": 0, 00:14:06.326 "w_mbytes_per_sec": 0 00:14:06.326 }, 00:14:06.326 "claimed": false, 00:14:06.326 "zoned": false, 00:14:06.326 "supported_io_types": { 00:14:06.326 "read": true, 00:14:06.326 "write": true, 00:14:06.326 "unmap": true, 00:14:06.326 "write_zeroes": true, 00:14:06.326 "flush": true, 00:14:06.326 "reset": true, 00:14:06.326 "compare": false, 00:14:06.326 "compare_and_write": false, 00:14:06.326 "abort": true, 00:14:06.326 "nvme_admin": false, 00:14:06.326 "nvme_io": false 00:14:06.326 }, 00:14:06.326 "memory_domains": [ 00:14:06.326 { 00:14:06.326 "dma_device_id": "system", 00:14:06.326 "dma_device_type": 1 00:14:06.326 }, 00:14:06.326 { 00:14:06.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.326 "dma_device_type": 2 00:14:06.326 } 00:14:06.326 ], 00:14:06.326 "driver_specific": {} 00:14:06.326 } 00:14:06.326 ] 00:14:06.326 21:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:06.326 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:06.326 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:06.326 21:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:06.586 BaseBdev4 00:14:06.586 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:14:06.586 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:06.586 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:06.586 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:06.586 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:06.586 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:06.586 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:06.845 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:07.104 [ 00:14:07.104 { 00:14:07.104 "name": "BaseBdev4", 00:14:07.104 "aliases": [ 00:14:07.104 "c43a65cf-123c-11ef-8c90-4585f0cfab08" 00:14:07.104 ], 00:14:07.104 "product_name": "Malloc disk", 00:14:07.104 "block_size": 512, 00:14:07.104 "num_blocks": 65536, 00:14:07.104 "uuid": 
"c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:07.104 "assigned_rate_limits": { 00:14:07.104 "rw_ios_per_sec": 0, 00:14:07.104 "rw_mbytes_per_sec": 0, 00:14:07.104 "r_mbytes_per_sec": 0, 00:14:07.104 "w_mbytes_per_sec": 0 00:14:07.104 }, 00:14:07.104 "claimed": false, 00:14:07.104 "zoned": false, 00:14:07.104 "supported_io_types": { 00:14:07.104 "read": true, 00:14:07.104 "write": true, 00:14:07.104 "unmap": true, 00:14:07.104 "write_zeroes": true, 00:14:07.104 "flush": true, 00:14:07.104 "reset": true, 00:14:07.104 "compare": false, 00:14:07.104 "compare_and_write": false, 00:14:07.104 "abort": true, 00:14:07.104 "nvme_admin": false, 00:14:07.104 "nvme_io": false 00:14:07.104 }, 00:14:07.104 "memory_domains": [ 00:14:07.104 { 00:14:07.104 "dma_device_id": "system", 00:14:07.104 "dma_device_type": 1 00:14:07.104 }, 00:14:07.104 { 00:14:07.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.104 "dma_device_type": 2 00:14:07.104 } 00:14:07.104 ], 00:14:07.104 "driver_specific": {} 00:14:07.104 } 00:14:07.104 ] 00:14:07.104 21:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:07.104 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:07.104 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:07.104 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:07.362 [2024-05-14 21:56:07.891407] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.362 [2024-05-14 21:56:07.891469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.362 [2024-05-14 21:56:07.891495] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.362 [2024-05-14 21:56:07.892049] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.362 [2024-05-14 21:56:07.892066] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.362 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.363 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.363 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.363 21:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.621 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:07.621 "name": "Existed_Raid", 00:14:07.621 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:07.621 "strip_size_kb": 64, 00:14:07.621 "state": "configuring", 00:14:07.621 "raid_level": "raid0", 00:14:07.621 "superblock": true, 00:14:07.621 "num_base_bdevs": 4, 00:14:07.621 "num_base_bdevs_discovered": 3, 00:14:07.621 "num_base_bdevs_operational": 4, 00:14:07.621 "base_bdevs_list": [ 00:14:07.621 { 00:14:07.621 "name": "BaseBdev1", 00:14:07.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.621 "is_configured": false, 00:14:07.621 "data_offset": 0, 00:14:07.622 "data_size": 0 00:14:07.622 }, 00:14:07.622 { 00:14:07.622 "name": "BaseBdev2", 00:14:07.622 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:07.622 "is_configured": true, 00:14:07.622 "data_offset": 2048, 00:14:07.622 "data_size": 63488 00:14:07.622 }, 00:14:07.622 { 00:14:07.622 "name": "BaseBdev3", 00:14:07.622 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:07.622 "is_configured": true, 00:14:07.622 "data_offset": 2048, 00:14:07.622 "data_size": 63488 00:14:07.622 }, 00:14:07.622 { 00:14:07.622 "name": "BaseBdev4", 00:14:07.622 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:07.622 "is_configured": true, 00:14:07.622 "data_offset": 2048, 00:14:07.622 "data_size": 63488 00:14:07.622 } 00:14:07.622 ] 00:14:07.622 }' 00:14:07.622 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:07.622 21:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:08.202 [2024-05-14 21:56:08.747415] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:08.202 21:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.464 21:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.464 "name": "Existed_Raid", 00:14:08.464 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:08.464 "strip_size_kb": 64, 00:14:08.464 "state": "configuring", 00:14:08.464 "raid_level": "raid0", 00:14:08.464 "superblock": true, 00:14:08.464 "num_base_bdevs": 4, 00:14:08.464 "num_base_bdevs_discovered": 2, 00:14:08.464 "num_base_bdevs_operational": 4, 00:14:08.464 "base_bdevs_list": [ 00:14:08.464 { 00:14:08.464 "name": "BaseBdev1", 00:14:08.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.464 "is_configured": false, 00:14:08.464 "data_offset": 0, 00:14:08.464 "data_size": 0 00:14:08.464 }, 00:14:08.464 { 00:14:08.464 "name": null, 00:14:08.464 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:08.464 "is_configured": false, 00:14:08.464 "data_offset": 2048, 00:14:08.464 "data_size": 63488 00:14:08.464 }, 00:14:08.464 { 00:14:08.464 "name": "BaseBdev3", 00:14:08.464 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:08.464 "is_configured": true, 00:14:08.464 "data_offset": 2048, 00:14:08.464 "data_size": 63488 00:14:08.464 }, 00:14:08.464 { 00:14:08.464 "name": "BaseBdev4", 00:14:08.464 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:08.464 "is_configured": true, 00:14:08.464 "data_offset": 2048, 00:14:08.464 "data_size": 63488 00:14:08.464 } 00:14:08.464 ] 00:14:08.464 }' 00:14:08.464 21:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.464 21:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.031 21:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.031 21:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:09.031 21:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:09.031 21:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.288 [2024-05-14 21:56:09.799592] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.288 BaseBdev1 00:14:09.288 21:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:09.288 21:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:09.288 21:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:09.288 21:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:09.288 21:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:09.288 21:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:09.288 21:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.547 21:56:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.806 [ 00:14:09.806 { 00:14:09.806 "name": "BaseBdev1", 00:14:09.806 "aliases": [ 00:14:09.806 "c5d21fad-123c-11ef-8c90-4585f0cfab08" 00:14:09.806 ], 00:14:09.806 "product_name": "Malloc disk", 00:14:09.806 "block_size": 512, 00:14:09.806 "num_blocks": 65536, 00:14:09.806 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:09.806 "assigned_rate_limits": { 00:14:09.806 "rw_ios_per_sec": 0, 00:14:09.806 "rw_mbytes_per_sec": 0, 00:14:09.806 "r_mbytes_per_sec": 0, 00:14:09.806 "w_mbytes_per_sec": 0 00:14:09.806 }, 00:14:09.806 "claimed": true, 00:14:09.806 "claim_type": "exclusive_write", 00:14:09.806 "zoned": false, 00:14:09.806 "supported_io_types": { 00:14:09.806 "read": true, 00:14:09.806 "write": true, 00:14:09.806 "unmap": true, 00:14:09.806 "write_zeroes": true, 00:14:09.806 "flush": true, 00:14:09.806 "reset": true, 00:14:09.806 "compare": false, 00:14:09.806 "compare_and_write": false, 00:14:09.806 "abort": true, 00:14:09.806 "nvme_admin": false, 00:14:09.806 "nvme_io": false 00:14:09.806 }, 00:14:09.806 "memory_domains": [ 00:14:09.806 { 00:14:09.806 "dma_device_id": "system", 00:14:09.806 "dma_device_type": 1 00:14:09.806 }, 00:14:09.806 { 00:14:09.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.806 "dma_device_type": 2 00:14:09.806 } 00:14:09.806 ], 00:14:09.806 "driver_specific": {} 00:14:09.806 } 00:14:09.806 ] 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.806 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.065 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.065 "name": "Existed_Raid", 00:14:10.065 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:10.065 "strip_size_kb": 64, 00:14:10.065 "state": "configuring", 00:14:10.065 "raid_level": "raid0", 00:14:10.065 "superblock": true, 00:14:10.065 "num_base_bdevs": 4, 00:14:10.065 "num_base_bdevs_discovered": 3, 
00:14:10.065 "num_base_bdevs_operational": 4, 00:14:10.065 "base_bdevs_list": [ 00:14:10.065 { 00:14:10.065 "name": "BaseBdev1", 00:14:10.065 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:10.065 "is_configured": true, 00:14:10.065 "data_offset": 2048, 00:14:10.065 "data_size": 63488 00:14:10.065 }, 00:14:10.065 { 00:14:10.065 "name": null, 00:14:10.065 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:10.065 "is_configured": false, 00:14:10.065 "data_offset": 2048, 00:14:10.065 "data_size": 63488 00:14:10.065 }, 00:14:10.065 { 00:14:10.065 "name": "BaseBdev3", 00:14:10.065 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:10.065 "is_configured": true, 00:14:10.065 "data_offset": 2048, 00:14:10.065 "data_size": 63488 00:14:10.065 }, 00:14:10.065 { 00:14:10.065 "name": "BaseBdev4", 00:14:10.065 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:10.065 "is_configured": true, 00:14:10.065 "data_offset": 2048, 00:14:10.065 "data_size": 63488 00:14:10.065 } 00:14:10.065 ] 00:14:10.065 }' 00:14:10.065 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.065 21:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.631 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.631 21:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:10.890 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:10.890 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:11.148 [2024-05-14 21:56:11.491558] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.148 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.407 21:56:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:11.407 "name": "Existed_Raid", 00:14:11.407 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:11.407 "strip_size_kb": 64, 00:14:11.407 "state": "configuring", 00:14:11.407 "raid_level": "raid0", 00:14:11.407 "superblock": true, 00:14:11.407 "num_base_bdevs": 4, 00:14:11.407 "num_base_bdevs_discovered": 2, 00:14:11.407 "num_base_bdevs_operational": 4, 00:14:11.407 "base_bdevs_list": [ 00:14:11.407 { 00:14:11.407 "name": "BaseBdev1", 00:14:11.407 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:11.407 "is_configured": true, 00:14:11.407 "data_offset": 2048, 00:14:11.407 "data_size": 63488 00:14:11.407 }, 00:14:11.407 { 00:14:11.407 "name": null, 00:14:11.407 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:11.407 "is_configured": false, 00:14:11.407 "data_offset": 2048, 00:14:11.407 "data_size": 63488 00:14:11.407 }, 00:14:11.407 { 00:14:11.407 "name": null, 00:14:11.407 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:11.407 "is_configured": false, 00:14:11.407 "data_offset": 2048, 00:14:11.407 "data_size": 63488 00:14:11.407 }, 00:14:11.407 { 00:14:11.407 "name": "BaseBdev4", 00:14:11.407 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:11.407 "is_configured": true, 00:14:11.407 "data_offset": 2048, 00:14:11.407 "data_size": 63488 00:14:11.407 } 00:14:11.407 ] 00:14:11.407 }' 00:14:11.407 21:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:11.407 21:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.665 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.665 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:11.922 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:11.922 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:12.180 [2024-05-14 21:56:12.647608] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 
00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.180 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.449 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.449 "name": "Existed_Raid", 00:14:12.449 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:12.449 "strip_size_kb": 64, 00:14:12.449 "state": "configuring", 00:14:12.449 "raid_level": "raid0", 00:14:12.449 "superblock": true, 00:14:12.449 "num_base_bdevs": 4, 00:14:12.449 "num_base_bdevs_discovered": 3, 00:14:12.449 "num_base_bdevs_operational": 4, 00:14:12.449 "base_bdevs_list": [ 00:14:12.449 { 00:14:12.449 "name": "BaseBdev1", 00:14:12.449 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:12.449 "is_configured": true, 00:14:12.449 "data_offset": 2048, 00:14:12.449 "data_size": 63488 00:14:12.449 }, 00:14:12.449 { 00:14:12.449 "name": null, 00:14:12.449 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:12.449 "is_configured": false, 00:14:12.449 "data_offset": 2048, 00:14:12.449 "data_size": 63488 00:14:12.449 }, 00:14:12.449 { 00:14:12.449 "name": "BaseBdev3", 00:14:12.449 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:12.449 "is_configured": true, 00:14:12.449 "data_offset": 2048, 00:14:12.449 "data_size": 63488 00:14:12.449 }, 00:14:12.449 { 00:14:12.449 "name": "BaseBdev4", 00:14:12.449 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:12.449 "is_configured": true, 00:14:12.449 "data_offset": 2048, 00:14:12.449 "data_size": 63488 00:14:12.449 } 00:14:12.449 ] 00:14:12.449 }' 00:14:12.449 21:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.449 21:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.722 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.722 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:12.983 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:14:12.983 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:13.552 [2024-05-14 21:56:13.855743] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.552 21:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.809 21:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.809 "name": "Existed_Raid", 00:14:13.809 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:13.809 "strip_size_kb": 64, 00:14:13.809 "state": "configuring", 00:14:13.809 "raid_level": "raid0", 00:14:13.809 "superblock": true, 00:14:13.809 "num_base_bdevs": 4, 00:14:13.809 "num_base_bdevs_discovered": 2, 00:14:13.809 "num_base_bdevs_operational": 4, 00:14:13.809 "base_bdevs_list": [ 00:14:13.809 { 00:14:13.809 "name": null, 00:14:13.809 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:13.809 "is_configured": false, 00:14:13.809 "data_offset": 2048, 00:14:13.809 "data_size": 63488 00:14:13.809 }, 00:14:13.809 { 00:14:13.809 "name": null, 00:14:13.809 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:13.809 "is_configured": false, 00:14:13.809 "data_offset": 2048, 00:14:13.809 "data_size": 63488 00:14:13.809 }, 00:14:13.809 { 00:14:13.809 "name": "BaseBdev3", 00:14:13.809 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:13.809 "is_configured": true, 00:14:13.809 "data_offset": 2048, 00:14:13.809 "data_size": 63488 00:14:13.809 }, 00:14:13.809 { 00:14:13.809 "name": "BaseBdev4", 00:14:13.809 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:13.809 "is_configured": true, 00:14:13.809 "data_offset": 2048, 00:14:13.809 "data_size": 63488 00:14:13.809 } 00:14:13.809 ] 00:14:13.809 }' 00:14:13.809 21:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.809 21:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.067 21:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.067 21:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:14.325 21:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:14:14.325 21:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:14.582 [2024-05-14 21:56:14.990060] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.582 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.839 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:14.839 "name": "Existed_Raid", 00:14:14.839 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:14.839 "strip_size_kb": 64, 00:14:14.839 "state": "configuring", 00:14:14.839 "raid_level": "raid0", 00:14:14.839 "superblock": true, 00:14:14.839 "num_base_bdevs": 4, 00:14:14.839 "num_base_bdevs_discovered": 3, 00:14:14.839 "num_base_bdevs_operational": 4, 00:14:14.839 "base_bdevs_list": [ 00:14:14.839 { 00:14:14.839 "name": null, 00:14:14.839 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:14.839 "is_configured": false, 00:14:14.839 "data_offset": 2048, 00:14:14.839 "data_size": 63488 00:14:14.839 }, 00:14:14.839 { 00:14:14.839 "name": "BaseBdev2", 00:14:14.839 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:14.839 "is_configured": true, 00:14:14.839 "data_offset": 2048, 00:14:14.839 "data_size": 63488 00:14:14.839 }, 00:14:14.839 { 00:14:14.839 "name": "BaseBdev3", 00:14:14.839 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:14.839 "is_configured": true, 00:14:14.839 "data_offset": 2048, 00:14:14.839 "data_size": 63488 00:14:14.839 }, 00:14:14.839 { 00:14:14.839 "name": "BaseBdev4", 00:14:14.839 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:14.839 "is_configured": true, 00:14:14.839 "data_offset": 2048, 00:14:14.839 "data_size": 63488 00:14:14.839 } 00:14:14.839 ] 00:14:14.839 }' 00:14:14.839 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:14.839 21:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.098 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.098 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.354 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:14:15.354 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.354 21:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:15.612 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c5d21fad-123c-11ef-8c90-4585f0cfab08 00:14:15.869 [2024-05-14 21:56:16.350202] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:15.869 [2024-05-14 21:56:16.350251] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ad5e300 00:14:15.869 [2024-05-14 21:56:16.350256] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:15.869 [2024-05-14 21:56:16.350276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82adbce20 00:14:15.869 [2024-05-14 21:56:16.350323] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ad5e300 00:14:15.869 [2024-05-14 21:56:16.350328] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ad5e300 00:14:15.869 [2024-05-14 21:56:16.350351] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.869 NewBaseBdev 00:14:15.869 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:14:15.869 21:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:15.869 21:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:15.869 21:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:15.869 21:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:15.869 21:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:15.869 21:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.126 21:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:16.385 [ 00:14:16.385 { 00:14:16.385 "name": "NewBaseBdev", 00:14:16.385 "aliases": [ 00:14:16.385 "c5d21fad-123c-11ef-8c90-4585f0cfab08" 00:14:16.385 ], 00:14:16.385 "product_name": "Malloc disk", 00:14:16.385 "block_size": 512, 00:14:16.385 "num_blocks": 65536, 00:14:16.385 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:16.385 "assigned_rate_limits": { 00:14:16.385 "rw_ios_per_sec": 0, 00:14:16.385 "rw_mbytes_per_sec": 0, 00:14:16.385 "r_mbytes_per_sec": 0, 00:14:16.385 "w_mbytes_per_sec": 0 00:14:16.385 }, 00:14:16.385 "claimed": true, 00:14:16.385 "claim_type": "exclusive_write", 00:14:16.385 "zoned": false, 00:14:16.385 "supported_io_types": { 00:14:16.385 "read": true, 00:14:16.385 "write": true, 00:14:16.385 "unmap": true, 00:14:16.385 "write_zeroes": true, 00:14:16.385 "flush": true, 00:14:16.385 "reset": true, 00:14:16.385 "compare": false, 00:14:16.385 "compare_and_write": false, 00:14:16.385 "abort": true, 00:14:16.385 "nvme_admin": false, 00:14:16.385 "nvme_io": false 00:14:16.385 }, 00:14:16.385 "memory_domains": [ 00:14:16.385 { 00:14:16.385 "dma_device_id": "system", 00:14:16.385 "dma_device_type": 1 00:14:16.385 }, 00:14:16.385 { 00:14:16.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.385 "dma_device_type": 2 00:14:16.385 } 00:14:16.385 ], 00:14:16.385 "driver_specific": {} 00:14:16.385 } 00:14:16.385 ] 00:14:16.385 21:56:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.385 21:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.643 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:16.643 "name": "Existed_Raid", 00:14:16.643 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:16.643 "strip_size_kb": 64, 00:14:16.643 "state": "online", 00:14:16.643 "raid_level": "raid0", 00:14:16.643 "superblock": true, 00:14:16.643 "num_base_bdevs": 4, 00:14:16.643 "num_base_bdevs_discovered": 4, 00:14:16.643 "num_base_bdevs_operational": 4, 00:14:16.643 "base_bdevs_list": [ 00:14:16.643 { 00:14:16.643 "name": "NewBaseBdev", 00:14:16.643 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:16.643 "is_configured": true, 00:14:16.643 "data_offset": 2048, 00:14:16.643 "data_size": 63488 00:14:16.643 }, 00:14:16.643 { 00:14:16.643 "name": "BaseBdev2", 00:14:16.643 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:16.643 "is_configured": true, 00:14:16.643 "data_offset": 2048, 00:14:16.643 "data_size": 63488 00:14:16.643 }, 00:14:16.643 { 00:14:16.643 "name": "BaseBdev3", 00:14:16.643 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:16.643 "is_configured": true, 00:14:16.643 "data_offset": 2048, 00:14:16.643 "data_size": 63488 00:14:16.643 }, 00:14:16.643 { 00:14:16.643 "name": "BaseBdev4", 00:14:16.643 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:16.643 "is_configured": true, 00:14:16.643 "data_offset": 2048, 00:14:16.643 "data_size": 63488 00:14:16.643 } 00:14:16.643 ] 00:14:16.643 }' 00:14:16.643 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:16.643 21:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.900 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.900 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:16.900 21:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:16.900 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:16.900 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:16.900 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:14:16.900 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:16.900 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:17.173 [2024-05-14 21:56:17.622136] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.174 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:17.174 "name": "Existed_Raid", 00:14:17.174 "aliases": [ 00:14:17.174 "c4aef9d8-123c-11ef-8c90-4585f0cfab08" 00:14:17.174 ], 00:14:17.174 "product_name": "Raid Volume", 00:14:17.174 "block_size": 512, 00:14:17.174 "num_blocks": 253952, 00:14:17.174 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:17.174 "assigned_rate_limits": { 00:14:17.174 "rw_ios_per_sec": 0, 00:14:17.174 "rw_mbytes_per_sec": 0, 00:14:17.174 "r_mbytes_per_sec": 0, 00:14:17.174 "w_mbytes_per_sec": 0 00:14:17.174 }, 00:14:17.174 "claimed": false, 00:14:17.174 "zoned": false, 00:14:17.174 "supported_io_types": { 00:14:17.174 "read": true, 00:14:17.174 "write": true, 00:14:17.174 "unmap": true, 00:14:17.174 "write_zeroes": true, 00:14:17.174 "flush": true, 00:14:17.174 "reset": true, 00:14:17.174 "compare": false, 00:14:17.174 "compare_and_write": false, 00:14:17.174 "abort": false, 00:14:17.174 "nvme_admin": false, 00:14:17.174 "nvme_io": false 00:14:17.174 }, 00:14:17.174 "memory_domains": [ 00:14:17.174 { 00:14:17.174 "dma_device_id": "system", 00:14:17.174 "dma_device_type": 1 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.174 "dma_device_type": 2 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "dma_device_id": "system", 00:14:17.174 "dma_device_type": 1 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.174 "dma_device_type": 2 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "dma_device_id": "system", 00:14:17.174 "dma_device_type": 1 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.174 "dma_device_type": 2 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "dma_device_id": "system", 00:14:17.174 "dma_device_type": 1 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.174 "dma_device_type": 2 00:14:17.174 } 00:14:17.174 ], 00:14:17.174 "driver_specific": { 00:14:17.174 "raid": { 00:14:17.174 "uuid": "c4aef9d8-123c-11ef-8c90-4585f0cfab08", 00:14:17.174 "strip_size_kb": 64, 00:14:17.174 "state": "online", 00:14:17.174 "raid_level": "raid0", 00:14:17.174 "superblock": true, 00:14:17.174 "num_base_bdevs": 4, 00:14:17.174 "num_base_bdevs_discovered": 4, 00:14:17.174 "num_base_bdevs_operational": 4, 00:14:17.174 "base_bdevs_list": [ 00:14:17.174 { 00:14:17.174 "name": "NewBaseBdev", 00:14:17.174 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:17.174 "is_configured": true, 00:14:17.174 "data_offset": 2048, 00:14:17.174 "data_size": 63488 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "name": "BaseBdev2", 00:14:17.174 "uuid": 
"c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:17.174 "is_configured": true, 00:14:17.174 "data_offset": 2048, 00:14:17.174 "data_size": 63488 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "name": "BaseBdev3", 00:14:17.174 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:17.174 "is_configured": true, 00:14:17.174 "data_offset": 2048, 00:14:17.174 "data_size": 63488 00:14:17.174 }, 00:14:17.174 { 00:14:17.174 "name": "BaseBdev4", 00:14:17.174 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:17.174 "is_configured": true, 00:14:17.174 "data_offset": 2048, 00:14:17.174 "data_size": 63488 00:14:17.174 } 00:14:17.174 ] 00:14:17.174 } 00:14:17.174 } 00:14:17.174 }' 00:14:17.174 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:17.174 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:14:17.174 BaseBdev2 00:14:17.174 BaseBdev3 00:14:17.174 BaseBdev4' 00:14:17.174 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:17.174 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:17.174 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:17.450 "name": "NewBaseBdev", 00:14:17.450 "aliases": [ 00:14:17.450 "c5d21fad-123c-11ef-8c90-4585f0cfab08" 00:14:17.450 ], 00:14:17.450 "product_name": "Malloc disk", 00:14:17.450 "block_size": 512, 00:14:17.450 "num_blocks": 65536, 00:14:17.450 "uuid": "c5d21fad-123c-11ef-8c90-4585f0cfab08", 00:14:17.450 "assigned_rate_limits": { 00:14:17.450 "rw_ios_per_sec": 0, 00:14:17.450 "rw_mbytes_per_sec": 0, 00:14:17.450 "r_mbytes_per_sec": 0, 00:14:17.450 "w_mbytes_per_sec": 0 00:14:17.450 }, 00:14:17.450 "claimed": true, 00:14:17.450 "claim_type": "exclusive_write", 00:14:17.450 "zoned": false, 00:14:17.450 "supported_io_types": { 00:14:17.450 "read": true, 00:14:17.450 "write": true, 00:14:17.450 "unmap": true, 00:14:17.450 "write_zeroes": true, 00:14:17.450 "flush": true, 00:14:17.450 "reset": true, 00:14:17.450 "compare": false, 00:14:17.450 "compare_and_write": false, 00:14:17.450 "abort": true, 00:14:17.450 "nvme_admin": false, 00:14:17.450 "nvme_io": false 00:14:17.450 }, 00:14:17.450 "memory_domains": [ 00:14:17.450 { 00:14:17.450 "dma_device_id": "system", 00:14:17.450 "dma_device_type": 1 00:14:17.450 }, 00:14:17.450 { 00:14:17.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.450 "dma_device_type": 2 00:14:17.450 } 00:14:17.450 ], 00:14:17.450 "driver_specific": {} 00:14:17.450 }' 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.450 21:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:17.450 21:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:17.707 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:17.707 "name": "BaseBdev2", 00:14:17.707 "aliases": [ 00:14:17.707 "c3429797-123c-11ef-8c90-4585f0cfab08" 00:14:17.707 ], 00:14:17.707 "product_name": "Malloc disk", 00:14:17.707 "block_size": 512, 00:14:17.707 "num_blocks": 65536, 00:14:17.707 "uuid": "c3429797-123c-11ef-8c90-4585f0cfab08", 00:14:17.707 "assigned_rate_limits": { 00:14:17.707 "rw_ios_per_sec": 0, 00:14:17.707 "rw_mbytes_per_sec": 0, 00:14:17.707 "r_mbytes_per_sec": 0, 00:14:17.707 "w_mbytes_per_sec": 0 00:14:17.708 }, 00:14:17.708 "claimed": true, 00:14:17.708 "claim_type": "exclusive_write", 00:14:17.708 "zoned": false, 00:14:17.708 "supported_io_types": { 00:14:17.708 "read": true, 00:14:17.708 "write": true, 00:14:17.708 "unmap": true, 00:14:17.708 "write_zeroes": true, 00:14:17.708 "flush": true, 00:14:17.708 "reset": true, 00:14:17.708 "compare": false, 00:14:17.708 "compare_and_write": false, 00:14:17.708 "abort": true, 00:14:17.708 "nvme_admin": false, 00:14:17.708 "nvme_io": false 00:14:17.708 }, 00:14:17.708 "memory_domains": [ 00:14:17.708 { 00:14:17.708 "dma_device_id": "system", 00:14:17.708 "dma_device_type": 1 00:14:17.708 }, 00:14:17.708 { 00:14:17.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.708 "dma_device_type": 2 00:14:17.708 } 00:14:17.708 ], 00:14:17.708 "driver_specific": {} 00:14:17.708 }' 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.708 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.708 21:56:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.966 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:17.966 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:17.966 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:17.966 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:18.224 "name": "BaseBdev3", 00:14:18.224 "aliases": [ 00:14:18.224 "c3bb7167-123c-11ef-8c90-4585f0cfab08" 00:14:18.224 ], 00:14:18.224 "product_name": "Malloc disk", 00:14:18.224 "block_size": 512, 00:14:18.224 "num_blocks": 65536, 00:14:18.224 "uuid": "c3bb7167-123c-11ef-8c90-4585f0cfab08", 00:14:18.224 "assigned_rate_limits": { 00:14:18.224 "rw_ios_per_sec": 0, 00:14:18.224 "rw_mbytes_per_sec": 0, 00:14:18.224 "r_mbytes_per_sec": 0, 00:14:18.224 "w_mbytes_per_sec": 0 00:14:18.224 }, 00:14:18.224 "claimed": true, 00:14:18.224 "claim_type": "exclusive_write", 00:14:18.224 "zoned": false, 00:14:18.224 "supported_io_types": { 00:14:18.224 "read": true, 00:14:18.224 "write": true, 00:14:18.224 "unmap": true, 00:14:18.224 "write_zeroes": true, 00:14:18.224 "flush": true, 00:14:18.224 "reset": true, 00:14:18.224 "compare": false, 00:14:18.224 "compare_and_write": false, 00:14:18.224 "abort": true, 00:14:18.224 "nvme_admin": false, 00:14:18.224 "nvme_io": false 00:14:18.224 }, 00:14:18.224 "memory_domains": [ 00:14:18.224 { 00:14:18.224 "dma_device_id": "system", 00:14:18.224 "dma_device_type": 1 00:14:18.224 }, 00:14:18.224 { 00:14:18.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.224 "dma_device_type": 2 00:14:18.224 } 00:14:18.224 ], 00:14:18.224 "driver_specific": {} 00:14:18.224 }' 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:18.224 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:18.482 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:18.482 "name": "BaseBdev4", 00:14:18.482 "aliases": [ 00:14:18.482 "c43a65cf-123c-11ef-8c90-4585f0cfab08" 00:14:18.482 ], 00:14:18.482 "product_name": "Malloc disk", 00:14:18.482 "block_size": 512, 00:14:18.482 "num_blocks": 65536, 00:14:18.482 "uuid": "c43a65cf-123c-11ef-8c90-4585f0cfab08", 00:14:18.482 "assigned_rate_limits": { 00:14:18.482 "rw_ios_per_sec": 0, 00:14:18.482 "rw_mbytes_per_sec": 0, 00:14:18.482 "r_mbytes_per_sec": 0, 00:14:18.482 "w_mbytes_per_sec": 0 00:14:18.482 }, 00:14:18.482 "claimed": true, 00:14:18.482 "claim_type": "exclusive_write", 00:14:18.482 "zoned": false, 00:14:18.482 "supported_io_types": { 00:14:18.482 "read": true, 00:14:18.482 "write": true, 00:14:18.482 "unmap": true, 00:14:18.482 "write_zeroes": true, 00:14:18.482 "flush": true, 00:14:18.482 "reset": true, 00:14:18.482 "compare": false, 00:14:18.482 "compare_and_write": false, 00:14:18.482 "abort": true, 00:14:18.482 "nvme_admin": false, 00:14:18.482 "nvme_io": false 00:14:18.482 }, 00:14:18.482 "memory_domains": [ 00:14:18.482 { 00:14:18.482 "dma_device_id": "system", 00:14:18.482 "dma_device_type": 1 00:14:18.482 }, 00:14:18.482 { 00:14:18.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.482 "dma_device_type": 2 00:14:18.482 } 00:14:18.482 ], 00:14:18.482 "driver_specific": {} 00:14:18.482 }' 00:14:18.482 21:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:18.482 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:18.482 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:18.482 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:18.482 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:18.483 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:18.741 [2024-05-14 21:56:19.322206] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.741 [2024-05-14 21:56:19.322244] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.741 [2024-05-14 21:56:19.322280] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.741 [2024-05-14 21:56:19.322296] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:14:18.741 [2024-05-14 21:56:19.322301] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ad5e300 name Existed_Raid, state offline 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 57999 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 57999 ']' 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 57999 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 57999 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:14:18.998 killing process with pid 57999 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57999' 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 57999 00:14:18.998 [2024-05-14 21:56:19.354941] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.998 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 57999 00:14:18.999 [2024-05-14 21:56:19.379362] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.999 21:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:14:18.999 00:14:18.999 real 0m27.725s 00:14:18.999 user 0m50.631s 00:14:18.999 sys 0m3.927s 00:14:18.999 ************************************ 00:14:18.999 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:18.999 21:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.999 END TEST raid_state_function_test_sb 00:14:18.999 ************************************ 00:14:19.257 21:56:19 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:19.257 21:56:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:19.257 21:56:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.257 21:56:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.257 ************************************ 00:14:19.257 START TEST raid_superblock_test 00:14:19.257 ************************************ 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:19.257 21:56:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=58817 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 58817 /var/tmp/spdk-raid.sock 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 58817 ']' 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:19.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:19.257 21:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.257 [2024-05-14 21:56:19.613010] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:14:19.257 [2024-05-14 21:56:19.613208] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:14:19.823 EAL: TSC is not safe to use in SMP mode 00:14:19.823 EAL: TSC is not invariant 00:14:19.823 [2024-05-14 21:56:20.147477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.823 [2024-05-14 21:56:20.233865] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
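[editor's note] At this point raid_superblock_test has just launched its own RPC target (the bdev_svc app started above with -r /var/tmp/spdk-raid.sock -L bdev_raid) and is waiting for it to come up. The fixture it builds next, condensed from the rpc.py calls that follow in the log, amounts to the sketch below; the loop and the rpc shell variable are illustrative, while the individual commands, arguments and UUIDs are the ones the test actually issues. Reading -z 64 as the 64 KiB strip size and -s as the superblock flag is my gloss, taken from the test name and the strip_size_kb value in the later dumps.

  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  for i in 1 2 3 4; do
      # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, as in the get_bdevs dumps later).
      $rpc bdev_malloc_create 32 512 -b "malloc$i"
      # Passthru wrapper with a fixed UUID so the test controls each base bdev's identity.
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done

  # raid0 over the four passthru bdevs with a 64 KiB strip; -s asks for an on-disk
  # superblock, which is what this test exercises. The result appears further on as
  # raid_bdev1 with 253952 blocks.
  $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s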
00:14:19.823 [2024-05-14 21:56:20.236128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.823 [2024-05-14 21:56:20.236917] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.823 [2024-05-14 21:56:20.236934] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:20.081 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:20.340 malloc1 00:14:20.340 21:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:20.599 [2024-05-14 21:56:21.093179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:20.599 [2024-05-14 21:56:21.093245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.599 [2024-05-14 21:56:21.093876] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84c780 00:14:20.599 [2024-05-14 21:56:21.093913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.599 [2024-05-14 21:56:21.094819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.599 [2024-05-14 21:56:21.094849] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:20.599 pt1 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:20.599 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:20.599 21:56:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:20.857 malloc2 00:14:20.857 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.115 [2024-05-14 21:56:21.617186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.115 [2024-05-14 21:56:21.617240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.115 [2024-05-14 21:56:21.617269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84cc80 00:14:21.115 [2024-05-14 21:56:21.617278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.115 [2024-05-14 21:56:21.617952] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.115 [2024-05-14 21:56:21.617983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:21.115 pt2 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.115 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:21.373 malloc3 00:14:21.373 21:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:21.676 [2024-05-14 21:56:22.129195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:21.676 [2024-05-14 21:56:22.129257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.676 [2024-05-14 21:56:22.129284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84d180 00:14:21.676 [2024-05-14 21:56:22.129292] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.676 [2024-05-14 21:56:22.129971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.676 [2024-05-14 21:56:22.130002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:21.676 pt3 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_malloc=malloc4 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.676 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:21.933 malloc4 00:14:21.933 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:22.191 [2024-05-14 21:56:22.669199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:22.191 [2024-05-14 21:56:22.669250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.191 [2024-05-14 21:56:22.669278] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84d680 00:14:22.191 [2024-05-14 21:56:22.669286] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.191 [2024-05-14 21:56:22.669945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.191 [2024-05-14 21:56:22.669973] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:22.191 pt4 00:14:22.191 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.191 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.191 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:22.449 [2024-05-14 21:56:22.893212] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:22.449 [2024-05-14 21:56:22.893809] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.449 [2024-05-14 21:56:22.893834] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:22.449 [2024-05-14 21:56:22.893846] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:22.449 [2024-05-14 21:56:22.893899] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c851300 00:14:22.449 [2024-05-14 21:56:22.893905] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:22.449 [2024-05-14 21:56:22.893950] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c8afe20 00:14:22.449 [2024-05-14 21:56:22.894033] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c851300 00:14:22.449 [2024-05-14 21:56:22.894038] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c851300 00:14:22.449 [2024-05-14 21:56:22.894066] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.449 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.450 21:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.707 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.707 "name": "raid_bdev1", 00:14:22.707 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:22.707 "strip_size_kb": 64, 00:14:22.707 "state": "online", 00:14:22.707 "raid_level": "raid0", 00:14:22.707 "superblock": true, 00:14:22.707 "num_base_bdevs": 4, 00:14:22.707 "num_base_bdevs_discovered": 4, 00:14:22.707 "num_base_bdevs_operational": 4, 00:14:22.707 "base_bdevs_list": [ 00:14:22.707 { 00:14:22.707 "name": "pt1", 00:14:22.707 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:22.707 "is_configured": true, 00:14:22.707 "data_offset": 2048, 00:14:22.707 "data_size": 63488 00:14:22.707 }, 00:14:22.707 { 00:14:22.707 "name": "pt2", 00:14:22.707 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:22.707 "is_configured": true, 00:14:22.707 "data_offset": 2048, 00:14:22.707 "data_size": 63488 00:14:22.707 }, 00:14:22.707 { 00:14:22.707 "name": "pt3", 00:14:22.707 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:22.707 "is_configured": true, 00:14:22.707 "data_offset": 2048, 00:14:22.707 "data_size": 63488 00:14:22.707 }, 00:14:22.707 { 00:14:22.707 "name": "pt4", 00:14:22.707 "uuid": "63696672-170d-f35c-9d68-09c6944b8e34", 00:14:22.707 "is_configured": true, 00:14:22.707 "data_offset": 2048, 00:14:22.707 "data_size": 63488 00:14:22.707 } 00:14:22.707 ] 00:14:22.707 }' 00:14:22.707 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.707 21:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@199 -- # local name 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.271 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:23.529 [2024-05-14 21:56:23.873271] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.529 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:23.529 "name": "raid_bdev1", 00:14:23.529 "aliases": [ 00:14:23.529 "cda011bd-123c-11ef-8c90-4585f0cfab08" 00:14:23.529 ], 00:14:23.529 "product_name": "Raid Volume", 00:14:23.529 "block_size": 512, 00:14:23.529 "num_blocks": 253952, 00:14:23.529 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:23.529 "assigned_rate_limits": { 00:14:23.529 "rw_ios_per_sec": 0, 00:14:23.529 "rw_mbytes_per_sec": 0, 00:14:23.529 "r_mbytes_per_sec": 0, 00:14:23.529 "w_mbytes_per_sec": 0 00:14:23.529 }, 00:14:23.529 "claimed": false, 00:14:23.529 "zoned": false, 00:14:23.529 "supported_io_types": { 00:14:23.529 "read": true, 00:14:23.529 "write": true, 00:14:23.529 "unmap": true, 00:14:23.529 "write_zeroes": true, 00:14:23.529 "flush": true, 00:14:23.529 "reset": true, 00:14:23.529 "compare": false, 00:14:23.529 "compare_and_write": false, 00:14:23.529 "abort": false, 00:14:23.529 "nvme_admin": false, 00:14:23.529 "nvme_io": false 00:14:23.529 }, 00:14:23.529 "memory_domains": [ 00:14:23.529 { 00:14:23.529 "dma_device_id": "system", 00:14:23.529 "dma_device_type": 1 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.529 "dma_device_type": 2 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "dma_device_id": "system", 00:14:23.529 "dma_device_type": 1 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.529 "dma_device_type": 2 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "dma_device_id": "system", 00:14:23.529 "dma_device_type": 1 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.529 "dma_device_type": 2 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "dma_device_id": "system", 00:14:23.529 "dma_device_type": 1 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.529 "dma_device_type": 2 00:14:23.529 } 00:14:23.529 ], 00:14:23.529 "driver_specific": { 00:14:23.529 "raid": { 00:14:23.529 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:23.529 "strip_size_kb": 64, 00:14:23.529 "state": "online", 00:14:23.529 "raid_level": "raid0", 00:14:23.529 "superblock": true, 00:14:23.529 "num_base_bdevs": 4, 00:14:23.529 "num_base_bdevs_discovered": 4, 00:14:23.529 "num_base_bdevs_operational": 4, 00:14:23.529 "base_bdevs_list": [ 00:14:23.529 { 00:14:23.529 "name": "pt1", 00:14:23.529 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:23.529 "is_configured": true, 00:14:23.529 "data_offset": 2048, 00:14:23.529 "data_size": 63488 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "name": "pt2", 00:14:23.529 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:23.529 "is_configured": true, 00:14:23.529 "data_offset": 2048, 00:14:23.529 "data_size": 63488 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "name": "pt3", 00:14:23.529 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:23.529 "is_configured": true, 00:14:23.529 "data_offset": 2048, 00:14:23.529 "data_size": 63488 00:14:23.529 }, 00:14:23.529 { 00:14:23.529 "name": 
"pt4", 00:14:23.529 "uuid": "63696672-170d-f35c-9d68-09c6944b8e34", 00:14:23.529 "is_configured": true, 00:14:23.529 "data_offset": 2048, 00:14:23.529 "data_size": 63488 00:14:23.529 } 00:14:23.529 ] 00:14:23.529 } 00:14:23.529 } 00:14:23.529 }' 00:14:23.529 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.529 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:14:23.529 pt2 00:14:23.529 pt3 00:14:23.529 pt4' 00:14:23.529 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:23.529 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:23.529 21:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:23.788 "name": "pt1", 00:14:23.788 "aliases": [ 00:14:23.788 "587a4745-7fe4-fc59-b01b-1d905fcce7c2" 00:14:23.788 ], 00:14:23.788 "product_name": "passthru", 00:14:23.788 "block_size": 512, 00:14:23.788 "num_blocks": 65536, 00:14:23.788 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:23.788 "assigned_rate_limits": { 00:14:23.788 "rw_ios_per_sec": 0, 00:14:23.788 "rw_mbytes_per_sec": 0, 00:14:23.788 "r_mbytes_per_sec": 0, 00:14:23.788 "w_mbytes_per_sec": 0 00:14:23.788 }, 00:14:23.788 "claimed": true, 00:14:23.788 "claim_type": "exclusive_write", 00:14:23.788 "zoned": false, 00:14:23.788 "supported_io_types": { 00:14:23.788 "read": true, 00:14:23.788 "write": true, 00:14:23.788 "unmap": true, 00:14:23.788 "write_zeroes": true, 00:14:23.788 "flush": true, 00:14:23.788 "reset": true, 00:14:23.788 "compare": false, 00:14:23.788 "compare_and_write": false, 00:14:23.788 "abort": true, 00:14:23.788 "nvme_admin": false, 00:14:23.788 "nvme_io": false 00:14:23.788 }, 00:14:23.788 "memory_domains": [ 00:14:23.788 { 00:14:23.788 "dma_device_id": "system", 00:14:23.788 "dma_device_type": 1 00:14:23.788 }, 00:14:23.788 { 00:14:23.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.788 "dma_device_type": 2 00:14:23.788 } 00:14:23.788 ], 00:14:23.788 "driver_specific": { 00:14:23.788 "passthru": { 00:14:23.788 "name": "pt1", 00:14:23.788 "base_bdev_name": "malloc1" 00:14:23.788 } 00:14:23.788 } 00:14:23.788 }' 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:23.788 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:24.046 "name": "pt2", 00:14:24.046 "aliases": [ 00:14:24.046 "1c13fa68-68cc-5851-ab0f-2413c87b7457" 00:14:24.046 ], 00:14:24.046 "product_name": "passthru", 00:14:24.046 "block_size": 512, 00:14:24.046 "num_blocks": 65536, 00:14:24.046 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:24.046 "assigned_rate_limits": { 00:14:24.046 "rw_ios_per_sec": 0, 00:14:24.046 "rw_mbytes_per_sec": 0, 00:14:24.046 "r_mbytes_per_sec": 0, 00:14:24.046 "w_mbytes_per_sec": 0 00:14:24.046 }, 00:14:24.046 "claimed": true, 00:14:24.046 "claim_type": "exclusive_write", 00:14:24.046 "zoned": false, 00:14:24.046 "supported_io_types": { 00:14:24.046 "read": true, 00:14:24.046 "write": true, 00:14:24.046 "unmap": true, 00:14:24.046 "write_zeroes": true, 00:14:24.046 "flush": true, 00:14:24.046 "reset": true, 00:14:24.046 "compare": false, 00:14:24.046 "compare_and_write": false, 00:14:24.046 "abort": true, 00:14:24.046 "nvme_admin": false, 00:14:24.046 "nvme_io": false 00:14:24.046 }, 00:14:24.046 "memory_domains": [ 00:14:24.046 { 00:14:24.046 "dma_device_id": "system", 00:14:24.046 "dma_device_type": 1 00:14:24.046 }, 00:14:24.046 { 00:14:24.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.046 "dma_device_type": 2 00:14:24.046 } 00:14:24.046 ], 00:14:24.046 "driver_specific": { 00:14:24.046 "passthru": { 00:14:24.046 "name": "pt2", 00:14:24.046 "base_bdev_name": "malloc2" 00:14:24.046 } 00:14:24.046 } 00:14:24.046 }' 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:24.046 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:24.046 21:56:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:24.303 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:24.304 "name": "pt3", 00:14:24.304 "aliases": [ 00:14:24.304 "6a622aee-1424-cf50-b79a-4179c6794fbf" 00:14:24.304 ], 00:14:24.304 "product_name": "passthru", 00:14:24.304 "block_size": 512, 00:14:24.304 "num_blocks": 65536, 00:14:24.304 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:24.304 "assigned_rate_limits": { 00:14:24.304 "rw_ios_per_sec": 0, 00:14:24.304 "rw_mbytes_per_sec": 0, 00:14:24.304 "r_mbytes_per_sec": 0, 00:14:24.304 "w_mbytes_per_sec": 0 00:14:24.304 }, 00:14:24.304 "claimed": true, 00:14:24.304 "claim_type": "exclusive_write", 00:14:24.304 "zoned": false, 00:14:24.304 "supported_io_types": { 00:14:24.304 "read": true, 00:14:24.304 "write": true, 00:14:24.304 "unmap": true, 00:14:24.304 "write_zeroes": true, 00:14:24.304 "flush": true, 00:14:24.304 "reset": true, 00:14:24.304 "compare": false, 00:14:24.304 "compare_and_write": false, 00:14:24.304 "abort": true, 00:14:24.304 "nvme_admin": false, 00:14:24.304 "nvme_io": false 00:14:24.304 }, 00:14:24.304 "memory_domains": [ 00:14:24.304 { 00:14:24.304 "dma_device_id": "system", 00:14:24.304 "dma_device_type": 1 00:14:24.304 }, 00:14:24.304 { 00:14:24.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.304 "dma_device_type": 2 00:14:24.304 } 00:14:24.304 ], 00:14:24.304 "driver_specific": { 00:14:24.304 "passthru": { 00:14:24.304 "name": "pt3", 00:14:24.304 "base_bdev_name": "malloc3" 00:14:24.304 } 00:14:24.304 } 00:14:24.304 }' 00:14:24.304 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.304 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.304 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:24.304 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.304 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:24.562 21:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:24.821 "name": "pt4", 00:14:24.821 "aliases": [ 00:14:24.821 "63696672-170d-f35c-9d68-09c6944b8e34" 00:14:24.821 ], 00:14:24.821 "product_name": "passthru", 00:14:24.821 "block_size": 512, 00:14:24.821 "num_blocks": 65536, 00:14:24.821 "uuid": 
"63696672-170d-f35c-9d68-09c6944b8e34", 00:14:24.821 "assigned_rate_limits": { 00:14:24.821 "rw_ios_per_sec": 0, 00:14:24.821 "rw_mbytes_per_sec": 0, 00:14:24.821 "r_mbytes_per_sec": 0, 00:14:24.821 "w_mbytes_per_sec": 0 00:14:24.821 }, 00:14:24.821 "claimed": true, 00:14:24.821 "claim_type": "exclusive_write", 00:14:24.821 "zoned": false, 00:14:24.821 "supported_io_types": { 00:14:24.821 "read": true, 00:14:24.821 "write": true, 00:14:24.821 "unmap": true, 00:14:24.821 "write_zeroes": true, 00:14:24.821 "flush": true, 00:14:24.821 "reset": true, 00:14:24.821 "compare": false, 00:14:24.821 "compare_and_write": false, 00:14:24.821 "abort": true, 00:14:24.821 "nvme_admin": false, 00:14:24.821 "nvme_io": false 00:14:24.821 }, 00:14:24.821 "memory_domains": [ 00:14:24.821 { 00:14:24.821 "dma_device_id": "system", 00:14:24.821 "dma_device_type": 1 00:14:24.821 }, 00:14:24.821 { 00:14:24.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.821 "dma_device_type": 2 00:14:24.821 } 00:14:24.821 ], 00:14:24.821 "driver_specific": { 00:14:24.821 "passthru": { 00:14:24.821 "name": "pt4", 00:14:24.821 "base_bdev_name": "malloc4" 00:14:24.821 } 00:14:24.821 } 00:14:24.821 }' 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:24.821 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:25.078 [2024-05-14 21:56:25.505285] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.078 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cda011bd-123c-11ef-8c90-4585f0cfab08 00:14:25.078 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cda011bd-123c-11ef-8c90-4585f0cfab08 ']' 00:14:25.078 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.336 [2024-05-14 21:56:25.781246] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.336 [2024-05-14 21:56:25.781273] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.336 [2024-05-14 21:56:25.781298] bdev_raid.c: 
448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.336 [2024-05-14 21:56:25.781314] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.336 [2024-05-14 21:56:25.781319] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c851300 name raid_bdev1, state offline 00:14:25.336 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:25.336 21:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.593 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:25.593 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:25.593 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.593 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:25.850 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.851 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:26.107 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:26.107 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:26.671 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:26.671 21:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:26.671 21:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:26.671 21:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:27.236 21:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:27.236 21:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:27.236 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:27.236 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:27.236 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.236 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.236 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.237 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:14:27.237 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.237 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.237 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.237 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:27.237 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:27.237 [2024-05-14 21:56:27.821296] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:27.237 [2024-05-14 21:56:27.821898] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:27.237 [2024-05-14 21:56:27.821919] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:27.237 [2024-05-14 21:56:27.821928] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:27.237 [2024-05-14 21:56:27.821944] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:27.237 [2024-05-14 21:56:27.821985] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:27.237 [2024-05-14 21:56:27.822003] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:27.237 [2024-05-14 21:56:27.822013] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:27.237 [2024-05-14 21:56:27.822022] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.237 [2024-05-14 21:56:27.822027] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c851300 name raid_bdev1, state configuring 00:14:27.494 request: 00:14:27.494 { 00:14:27.494 "name": "raid_bdev1", 00:14:27.494 "raid_level": "raid0", 00:14:27.494 "base_bdevs": [ 00:14:27.494 "malloc1", 00:14:27.494 "malloc2", 00:14:27.494 "malloc3", 00:14:27.494 "malloc4" 00:14:27.494 ], 00:14:27.494 "superblock": false, 00:14:27.494 "strip_size_kb": 64, 00:14:27.494 "method": "bdev_raid_create", 00:14:27.494 "req_id": 1 00:14:27.494 } 00:14:27.494 Got JSON-RPC error response 00:14:27.494 response: 00:14:27.494 { 00:14:27.494 "code": -17, 00:14:27.494 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:27.494 } 00:14:27.494 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:27.494 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:27.494 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:27.494 21:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:27.494 21:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:27.494 21:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:27.752 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:27.752 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:27.752 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:28.010 [2024-05-14 21:56:28.349285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:28.010 [2024-05-14 21:56:28.349351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.010 [2024-05-14 21:56:28.349380] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84d680 00:14:28.010 [2024-05-14 21:56:28.349389] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.010 [2024-05-14 21:56:28.350065] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.010 [2024-05-14 21:56:28.350095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:28.010 [2024-05-14 21:56:28.350122] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:28.010 [2024-05-14 21:56:28.350133] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.010 pt1 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.010 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.268 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.268 "name": "raid_bdev1", 00:14:28.268 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:28.268 "strip_size_kb": 64, 00:14:28.268 "state": "configuring", 00:14:28.268 "raid_level": "raid0", 00:14:28.268 "superblock": true, 00:14:28.268 "num_base_bdevs": 4, 00:14:28.268 "num_base_bdevs_discovered": 1, 00:14:28.268 "num_base_bdevs_operational": 4, 00:14:28.268 "base_bdevs_list": [ 00:14:28.268 { 00:14:28.268 "name": "pt1", 00:14:28.268 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:28.268 "is_configured": true, 00:14:28.268 "data_offset": 2048, 00:14:28.268 "data_size": 63488 00:14:28.268 
}, 00:14:28.268 { 00:14:28.268 "name": null, 00:14:28.268 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:28.268 "is_configured": false, 00:14:28.268 "data_offset": 2048, 00:14:28.268 "data_size": 63488 00:14:28.268 }, 00:14:28.268 { 00:14:28.268 "name": null, 00:14:28.268 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:28.268 "is_configured": false, 00:14:28.268 "data_offset": 2048, 00:14:28.268 "data_size": 63488 00:14:28.268 }, 00:14:28.268 { 00:14:28.268 "name": null, 00:14:28.268 "uuid": "63696672-170d-f35c-9d68-09c6944b8e34", 00:14:28.268 "is_configured": false, 00:14:28.268 "data_offset": 2048, 00:14:28.268 "data_size": 63488 00:14:28.268 } 00:14:28.268 ] 00:14:28.268 }' 00:14:28.268 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.268 21:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.524 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:28.524 21:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.781 [2024-05-14 21:56:29.153305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.781 [2024-05-14 21:56:29.153374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.782 [2024-05-14 21:56:29.153404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84cc80 00:14:28.782 [2024-05-14 21:56:29.153413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.782 [2024-05-14 21:56:29.153531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.782 [2024-05-14 21:56:29.153551] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.782 [2024-05-14 21:56:29.153577] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:28.782 [2024-05-14 21:56:29.153585] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.782 pt2 00:14:28.782 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:29.040 [2024-05-14 21:56:29.385310] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.040 21:56:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.040 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.299 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.299 "name": "raid_bdev1", 00:14:29.299 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:29.299 "strip_size_kb": 64, 00:14:29.299 "state": "configuring", 00:14:29.299 "raid_level": "raid0", 00:14:29.299 "superblock": true, 00:14:29.299 "num_base_bdevs": 4, 00:14:29.299 "num_base_bdevs_discovered": 1, 00:14:29.299 "num_base_bdevs_operational": 4, 00:14:29.299 "base_bdevs_list": [ 00:14:29.299 { 00:14:29.299 "name": "pt1", 00:14:29.299 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:29.299 "is_configured": true, 00:14:29.299 "data_offset": 2048, 00:14:29.299 "data_size": 63488 00:14:29.299 }, 00:14:29.299 { 00:14:29.299 "name": null, 00:14:29.299 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:29.299 "is_configured": false, 00:14:29.299 "data_offset": 2048, 00:14:29.299 "data_size": 63488 00:14:29.299 }, 00:14:29.299 { 00:14:29.299 "name": null, 00:14:29.299 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:29.299 "is_configured": false, 00:14:29.299 "data_offset": 2048, 00:14:29.299 "data_size": 63488 00:14:29.299 }, 00:14:29.299 { 00:14:29.299 "name": null, 00:14:29.299 "uuid": "63696672-170d-f35c-9d68-09c6944b8e34", 00:14:29.299 "is_configured": false, 00:14:29.299 "data_offset": 2048, 00:14:29.299 "data_size": 63488 00:14:29.299 } 00:14:29.299 ] 00:14:29.299 }' 00:14:29.299 21:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.299 21:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.559 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:29.559 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.559 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.817 [2024-05-14 21:56:30.245314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.817 [2024-05-14 21:56:30.245404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.817 [2024-05-14 21:56:30.245433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84cc80 00:14:29.817 [2024-05-14 21:56:30.245442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.817 [2024-05-14 21:56:30.245567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.817 [2024-05-14 21:56:30.245587] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.817 [2024-05-14 21:56:30.245612] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:29.817 [2024-05-14 21:56:30.245621] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.817 pt2 00:14:29.817 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.817 21:56:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.817 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:30.075 [2024-05-14 21:56:30.521318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:30.075 [2024-05-14 21:56:30.521374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.075 [2024-05-14 21:56:30.521400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84c780 00:14:30.075 [2024-05-14 21:56:30.521409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.075 [2024-05-14 21:56:30.521527] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.075 [2024-05-14 21:56:30.521539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:30.075 [2024-05-14 21:56:30.521561] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:30.075 [2024-05-14 21:56:30.521569] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:30.075 pt3 00:14:30.075 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:30.075 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:30.075 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:30.333 [2024-05-14 21:56:30.773326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:30.333 [2024-05-14 21:56:30.773386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.333 [2024-05-14 21:56:30.773430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c84d900 00:14:30.333 [2024-05-14 21:56:30.773439] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.333 [2024-05-14 21:56:30.773563] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.333 [2024-05-14 21:56:30.773576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:30.333 [2024-05-14 21:56:30.773604] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:14:30.333 [2024-05-14 21:56:30.773618] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:30.333 [2024-05-14 21:56:30.773663] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c851300 00:14:30.333 [2024-05-14 21:56:30.773668] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.333 [2024-05-14 21:56:30.773693] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c8afe20 00:14:30.333 [2024-05-14 21:56:30.773748] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c851300 00:14:30.333 [2024-05-14 21:56:30.773752] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c851300 00:14:30.333 [2024-05-14 21:56:30.773775] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.333 pt4 00:14:30.333 21:56:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.333 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.334 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.334 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.334 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.334 21:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.591 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.591 "name": "raid_bdev1", 00:14:30.591 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:30.591 "strip_size_kb": 64, 00:14:30.591 "state": "online", 00:14:30.591 "raid_level": "raid0", 00:14:30.591 "superblock": true, 00:14:30.591 "num_base_bdevs": 4, 00:14:30.591 "num_base_bdevs_discovered": 4, 00:14:30.591 "num_base_bdevs_operational": 4, 00:14:30.591 "base_bdevs_list": [ 00:14:30.591 { 00:14:30.591 "name": "pt1", 00:14:30.591 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:30.591 "is_configured": true, 00:14:30.591 "data_offset": 2048, 00:14:30.591 "data_size": 63488 00:14:30.591 }, 00:14:30.591 { 00:14:30.591 "name": "pt2", 00:14:30.591 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:30.591 "is_configured": true, 00:14:30.591 "data_offset": 2048, 00:14:30.591 "data_size": 63488 00:14:30.591 }, 00:14:30.591 { 00:14:30.591 "name": "pt3", 00:14:30.591 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:30.591 "is_configured": true, 00:14:30.591 "data_offset": 2048, 00:14:30.591 "data_size": 63488 00:14:30.591 }, 00:14:30.591 { 00:14:30.591 "name": "pt4", 00:14:30.591 "uuid": "63696672-170d-f35c-9d68-09c6944b8e34", 00:14:30.591 "is_configured": true, 00:14:30.591 "data_offset": 2048, 00:14:30.591 "data_size": 63488 00:14:30.591 } 00:14:30.591 ] 00:14:30.591 }' 00:14:30.591 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.591 21:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.847 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:30.847 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:14:30.848 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:30.848 
21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:30.848 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:30.848 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:30.848 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:30.848 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:31.106 [2024-05-14 21:56:31.589386] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.106 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:31.106 "name": "raid_bdev1", 00:14:31.106 "aliases": [ 00:14:31.106 "cda011bd-123c-11ef-8c90-4585f0cfab08" 00:14:31.106 ], 00:14:31.106 "product_name": "Raid Volume", 00:14:31.106 "block_size": 512, 00:14:31.106 "num_blocks": 253952, 00:14:31.106 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:31.106 "assigned_rate_limits": { 00:14:31.106 "rw_ios_per_sec": 0, 00:14:31.106 "rw_mbytes_per_sec": 0, 00:14:31.106 "r_mbytes_per_sec": 0, 00:14:31.106 "w_mbytes_per_sec": 0 00:14:31.106 }, 00:14:31.106 "claimed": false, 00:14:31.106 "zoned": false, 00:14:31.106 "supported_io_types": { 00:14:31.106 "read": true, 00:14:31.106 "write": true, 00:14:31.106 "unmap": true, 00:14:31.106 "write_zeroes": true, 00:14:31.106 "flush": true, 00:14:31.106 "reset": true, 00:14:31.106 "compare": false, 00:14:31.106 "compare_and_write": false, 00:14:31.106 "abort": false, 00:14:31.106 "nvme_admin": false, 00:14:31.106 "nvme_io": false 00:14:31.106 }, 00:14:31.106 "memory_domains": [ 00:14:31.106 { 00:14:31.106 "dma_device_id": "system", 00:14:31.106 "dma_device_type": 1 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.106 "dma_device_type": 2 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "dma_device_id": "system", 00:14:31.106 "dma_device_type": 1 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.106 "dma_device_type": 2 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "dma_device_id": "system", 00:14:31.106 "dma_device_type": 1 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.106 "dma_device_type": 2 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "dma_device_id": "system", 00:14:31.106 "dma_device_type": 1 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.106 "dma_device_type": 2 00:14:31.106 } 00:14:31.106 ], 00:14:31.106 "driver_specific": { 00:14:31.106 "raid": { 00:14:31.106 "uuid": "cda011bd-123c-11ef-8c90-4585f0cfab08", 00:14:31.106 "strip_size_kb": 64, 00:14:31.106 "state": "online", 00:14:31.106 "raid_level": "raid0", 00:14:31.106 "superblock": true, 00:14:31.106 "num_base_bdevs": 4, 00:14:31.106 "num_base_bdevs_discovered": 4, 00:14:31.106 "num_base_bdevs_operational": 4, 00:14:31.106 "base_bdevs_list": [ 00:14:31.106 { 00:14:31.106 "name": "pt1", 00:14:31.106 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:31.106 "is_configured": true, 00:14:31.106 "data_offset": 2048, 00:14:31.106 "data_size": 63488 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "name": "pt2", 00:14:31.106 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:31.106 "is_configured": true, 00:14:31.106 "data_offset": 2048, 00:14:31.106 "data_size": 63488 00:14:31.106 }, 
00:14:31.106 { 00:14:31.106 "name": "pt3", 00:14:31.106 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:31.106 "is_configured": true, 00:14:31.106 "data_offset": 2048, 00:14:31.106 "data_size": 63488 00:14:31.106 }, 00:14:31.106 { 00:14:31.106 "name": "pt4", 00:14:31.106 "uuid": "63696672-170d-f35c-9d68-09c6944b8e34", 00:14:31.106 "is_configured": true, 00:14:31.106 "data_offset": 2048, 00:14:31.106 "data_size": 63488 00:14:31.106 } 00:14:31.106 ] 00:14:31.106 } 00:14:31.106 } 00:14:31.106 }' 00:14:31.106 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.106 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:14:31.106 pt2 00:14:31.106 pt3 00:14:31.106 pt4' 00:14:31.106 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:31.106 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:31.106 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:31.364 "name": "pt1", 00:14:31.364 "aliases": [ 00:14:31.364 "587a4745-7fe4-fc59-b01b-1d905fcce7c2" 00:14:31.364 ], 00:14:31.364 "product_name": "passthru", 00:14:31.364 "block_size": 512, 00:14:31.364 "num_blocks": 65536, 00:14:31.364 "uuid": "587a4745-7fe4-fc59-b01b-1d905fcce7c2", 00:14:31.364 "assigned_rate_limits": { 00:14:31.364 "rw_ios_per_sec": 0, 00:14:31.364 "rw_mbytes_per_sec": 0, 00:14:31.364 "r_mbytes_per_sec": 0, 00:14:31.364 "w_mbytes_per_sec": 0 00:14:31.364 }, 00:14:31.364 "claimed": true, 00:14:31.364 "claim_type": "exclusive_write", 00:14:31.364 "zoned": false, 00:14:31.364 "supported_io_types": { 00:14:31.364 "read": true, 00:14:31.364 "write": true, 00:14:31.364 "unmap": true, 00:14:31.364 "write_zeroes": true, 00:14:31.364 "flush": true, 00:14:31.364 "reset": true, 00:14:31.364 "compare": false, 00:14:31.364 "compare_and_write": false, 00:14:31.364 "abort": true, 00:14:31.364 "nvme_admin": false, 00:14:31.364 "nvme_io": false 00:14:31.364 }, 00:14:31.364 "memory_domains": [ 00:14:31.364 { 00:14:31.364 "dma_device_id": "system", 00:14:31.364 "dma_device_type": 1 00:14:31.364 }, 00:14:31.364 { 00:14:31.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.364 "dma_device_type": 2 00:14:31.364 } 00:14:31.364 ], 00:14:31.364 "driver_specific": { 00:14:31.364 "passthru": { 00:14:31.364 "name": "pt1", 00:14:31.364 "base_bdev_name": "malloc1" 00:14:31.364 } 00:14:31.364 } 00:14:31.364 }' 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:31.364 
21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:31.364 21:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:31.622 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:31.622 "name": "pt2", 00:14:31.622 "aliases": [ 00:14:31.622 "1c13fa68-68cc-5851-ab0f-2413c87b7457" 00:14:31.622 ], 00:14:31.622 "product_name": "passthru", 00:14:31.622 "block_size": 512, 00:14:31.622 "num_blocks": 65536, 00:14:31.622 "uuid": "1c13fa68-68cc-5851-ab0f-2413c87b7457", 00:14:31.622 "assigned_rate_limits": { 00:14:31.622 "rw_ios_per_sec": 0, 00:14:31.622 "rw_mbytes_per_sec": 0, 00:14:31.622 "r_mbytes_per_sec": 0, 00:14:31.622 "w_mbytes_per_sec": 0 00:14:31.622 }, 00:14:31.622 "claimed": true, 00:14:31.622 "claim_type": "exclusive_write", 00:14:31.622 "zoned": false, 00:14:31.622 "supported_io_types": { 00:14:31.622 "read": true, 00:14:31.622 "write": true, 00:14:31.622 "unmap": true, 00:14:31.622 "write_zeroes": true, 00:14:31.622 "flush": true, 00:14:31.622 "reset": true, 00:14:31.622 "compare": false, 00:14:31.622 "compare_and_write": false, 00:14:31.622 "abort": true, 00:14:31.622 "nvme_admin": false, 00:14:31.622 "nvme_io": false 00:14:31.622 }, 00:14:31.622 "memory_domains": [ 00:14:31.622 { 00:14:31.622 "dma_device_id": "system", 00:14:31.622 "dma_device_type": 1 00:14:31.622 }, 00:14:31.622 { 00:14:31.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.622 "dma_device_type": 2 00:14:31.622 } 00:14:31.622 ], 00:14:31.622 "driver_specific": { 00:14:31.622 "passthru": { 00:14:31.622 "name": "pt2", 00:14:31.622 "base_bdev_name": "malloc2" 00:14:31.622 } 00:14:31.622 } 00:14:31.622 }' 00:14:31.622 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:31.880 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:32.192 "name": "pt3", 00:14:32.192 "aliases": [ 00:14:32.192 "6a622aee-1424-cf50-b79a-4179c6794fbf" 00:14:32.192 ], 00:14:32.192 "product_name": "passthru", 00:14:32.192 "block_size": 512, 00:14:32.192 "num_blocks": 65536, 00:14:32.192 "uuid": "6a622aee-1424-cf50-b79a-4179c6794fbf", 00:14:32.192 "assigned_rate_limits": { 00:14:32.192 "rw_ios_per_sec": 0, 00:14:32.192 "rw_mbytes_per_sec": 0, 00:14:32.192 "r_mbytes_per_sec": 0, 00:14:32.192 "w_mbytes_per_sec": 0 00:14:32.192 }, 00:14:32.192 "claimed": true, 00:14:32.192 "claim_type": "exclusive_write", 00:14:32.192 "zoned": false, 00:14:32.192 "supported_io_types": { 00:14:32.192 "read": true, 00:14:32.192 "write": true, 00:14:32.192 "unmap": true, 00:14:32.192 "write_zeroes": true, 00:14:32.192 "flush": true, 00:14:32.192 "reset": true, 00:14:32.192 "compare": false, 00:14:32.192 "compare_and_write": false, 00:14:32.192 "abort": true, 00:14:32.192 "nvme_admin": false, 00:14:32.192 "nvme_io": false 00:14:32.192 }, 00:14:32.192 "memory_domains": [ 00:14:32.192 { 00:14:32.192 "dma_device_id": "system", 00:14:32.192 "dma_device_type": 1 00:14:32.192 }, 00:14:32.192 { 00:14:32.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.192 "dma_device_type": 2 00:14:32.192 } 00:14:32.192 ], 00:14:32.192 "driver_specific": { 00:14:32.192 "passthru": { 00:14:32.192 "name": "pt3", 00:14:32.192 "base_bdev_name": "malloc3" 00:14:32.192 } 00:14:32.192 } 00:14:32.192 }' 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:32.192 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:32.193 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:32.193 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:32.193 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:32.450 "name": 
"pt4", 00:14:32.450 "aliases": [ 00:14:32.450 "63696672-170d-f35c-9d68-09c6944b8e34" 00:14:32.450 ], 00:14:32.450 "product_name": "passthru", 00:14:32.450 "block_size": 512, 00:14:32.450 "num_blocks": 65536, 00:14:32.450 "uuid": "63696672-170d-f35c-9d68-09c6944b8e34", 00:14:32.450 "assigned_rate_limits": { 00:14:32.450 "rw_ios_per_sec": 0, 00:14:32.450 "rw_mbytes_per_sec": 0, 00:14:32.450 "r_mbytes_per_sec": 0, 00:14:32.450 "w_mbytes_per_sec": 0 00:14:32.450 }, 00:14:32.450 "claimed": true, 00:14:32.450 "claim_type": "exclusive_write", 00:14:32.450 "zoned": false, 00:14:32.450 "supported_io_types": { 00:14:32.450 "read": true, 00:14:32.450 "write": true, 00:14:32.450 "unmap": true, 00:14:32.450 "write_zeroes": true, 00:14:32.450 "flush": true, 00:14:32.450 "reset": true, 00:14:32.450 "compare": false, 00:14:32.450 "compare_and_write": false, 00:14:32.450 "abort": true, 00:14:32.450 "nvme_admin": false, 00:14:32.450 "nvme_io": false 00:14:32.450 }, 00:14:32.450 "memory_domains": [ 00:14:32.450 { 00:14:32.450 "dma_device_id": "system", 00:14:32.450 "dma_device_type": 1 00:14:32.450 }, 00:14:32.450 { 00:14:32.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.450 "dma_device_type": 2 00:14:32.450 } 00:14:32.450 ], 00:14:32.450 "driver_specific": { 00:14:32.450 "passthru": { 00:14:32.450 "name": "pt4", 00:14:32.450 "base_bdev_name": "malloc4" 00:14:32.450 } 00:14:32.450 } 00:14:32.450 }' 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:32.450 21:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:32.707 [2024-05-14 21:56:33.097428] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cda011bd-123c-11ef-8c90-4585f0cfab08 '!=' cda011bd-123c-11ef-8c90-4585f0cfab08 ']' 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # 
killprocess 58817 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 58817 ']' 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 58817 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 58817 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:14:32.707 killing process with pid 58817 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58817' 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 58817 00:14:32.707 [2024-05-14 21:56:33.126380] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.707 [2024-05-14 21:56:33.126404] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.707 [2024-05-14 21:56:33.126428] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.707 [2024-05-14 21:56:33.126436] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c851300 name raid_bdev1, state offline 00:14:32.707 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 58817 00:14:32.707 [2024-05-14 21:56:33.149282] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.963 21:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:14:32.963 00:14:32.963 real 0m13.724s 00:14:32.963 user 0m24.490s 00:14:32.963 sys 0m2.150s 00:14:32.963 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.963 21:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.963 ************************************ 00:14:32.963 END TEST raid_superblock_test 00:14:32.963 ************************************ 00:14:32.963 21:56:33 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:14:32.963 21:56:33 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:32.963 21:56:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:32.963 21:56:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.963 21:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.963 ************************************ 00:14:32.963 START TEST raid_state_function_test 00:14:32.963 ************************************ 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:14:32.963 21:56:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=59216 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 59216' 00:14:32.963 Process raid pid: 59216 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 59216 /var/tmp/spdk-raid.sock 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 59216 ']' 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:32.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:32.963 21:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.963 [2024-05-14 21:56:33.385216] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:14:32.963 [2024-05-14 21:56:33.385494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:14:33.543 EAL: TSC is not safe to use in SMP mode 00:14:33.543 EAL: TSC is not invariant 00:14:33.543 [2024-05-14 21:56:33.925398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.543 [2024-05-14 21:56:34.014732] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:33.543 [2024-05-14 21:56:34.017018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.543 [2024-05-14 21:56:34.017802] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.543 [2024-05-14 21:56:34.017818] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.136 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:34.136 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:14:34.136 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:34.394 [2024-05-14 21:56:34.730330] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.394 [2024-05-14 21:56:34.730388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.394 [2024-05-14 21:56:34.730394] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.394 [2024-05-14 21:56:34.730403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.394 [2024-05-14 21:56:34.730406] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.394 [2024-05-14 21:56:34.730414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.394 [2024-05-14 21:56:34.730417] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:34.394 [2024-05-14 21:56:34.730425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.394 
21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.394 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.652 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.652 "name": "Existed_Raid", 00:14:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.652 "strip_size_kb": 64, 00:14:34.652 "state": "configuring", 00:14:34.652 "raid_level": "concat", 00:14:34.652 "superblock": false, 00:14:34.652 "num_base_bdevs": 4, 00:14:34.652 "num_base_bdevs_discovered": 0, 00:14:34.652 "num_base_bdevs_operational": 4, 00:14:34.652 "base_bdevs_list": [ 00:14:34.652 { 00:14:34.652 "name": "BaseBdev1", 00:14:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.652 "is_configured": false, 00:14:34.652 "data_offset": 0, 00:14:34.652 "data_size": 0 00:14:34.652 }, 00:14:34.652 { 00:14:34.652 "name": "BaseBdev2", 00:14:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.652 "is_configured": false, 00:14:34.652 "data_offset": 0, 00:14:34.652 "data_size": 0 00:14:34.652 }, 00:14:34.652 { 00:14:34.652 "name": "BaseBdev3", 00:14:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.652 "is_configured": false, 00:14:34.652 "data_offset": 0, 00:14:34.652 "data_size": 0 00:14:34.652 }, 00:14:34.652 { 00:14:34.652 "name": "BaseBdev4", 00:14:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.652 "is_configured": false, 00:14:34.652 "data_offset": 0, 00:14:34.652 "data_size": 0 00:14:34.652 } 00:14:34.652 ] 00:14:34.652 }' 00:14:34.652 21:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.652 21:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:35.168 [2024-05-14 21:56:35.582336] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.168 [2024-05-14 21:56:35.582367] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b587300 name Existed_Raid, state configuring 00:14:35.168 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:35.426 [2024-05-14 21:56:35.822337] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:14:35.426 [2024-05-14 21:56:35.822391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.426 [2024-05-14 21:56:35.822396] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.426 [2024-05-14 21:56:35.822405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.426 [2024-05-14 21:56:35.822408] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.426 [2024-05-14 21:56:35.822416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.426 [2024-05-14 21:56:35.822419] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.426 [2024-05-14 21:56:35.822427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.426 21:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:35.683 [2024-05-14 21:56:36.119389] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.683 BaseBdev1 00:14:35.683 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:14:35.683 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:35.683 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:35.683 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:35.683 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:35.683 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:35.683 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:35.941 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.198 [ 00:14:36.198 { 00:14:36.198 "name": "BaseBdev1", 00:14:36.199 "aliases": [ 00:14:36.199 "d582120a-123c-11ef-8c90-4585f0cfab08" 00:14:36.199 ], 00:14:36.199 "product_name": "Malloc disk", 00:14:36.199 "block_size": 512, 00:14:36.199 "num_blocks": 65536, 00:14:36.199 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:36.199 "assigned_rate_limits": { 00:14:36.199 "rw_ios_per_sec": 0, 00:14:36.199 "rw_mbytes_per_sec": 0, 00:14:36.199 "r_mbytes_per_sec": 0, 00:14:36.199 "w_mbytes_per_sec": 0 00:14:36.199 }, 00:14:36.199 "claimed": true, 00:14:36.199 "claim_type": "exclusive_write", 00:14:36.199 "zoned": false, 00:14:36.199 "supported_io_types": { 00:14:36.199 "read": true, 00:14:36.199 "write": true, 00:14:36.199 "unmap": true, 00:14:36.199 "write_zeroes": true, 00:14:36.199 "flush": true, 00:14:36.199 "reset": true, 00:14:36.199 "compare": false, 00:14:36.199 "compare_and_write": false, 00:14:36.199 "abort": true, 00:14:36.199 "nvme_admin": false, 00:14:36.199 "nvme_io": false 00:14:36.199 }, 00:14:36.199 "memory_domains": [ 00:14:36.199 { 00:14:36.199 "dma_device_id": "system", 00:14:36.199 "dma_device_type": 1 00:14:36.199 }, 00:14:36.199 { 00:14:36.199 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:36.199 "dma_device_type": 2 00:14:36.199 } 00:14:36.199 ], 00:14:36.199 "driver_specific": {} 00:14:36.199 } 00:14:36.199 ] 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.199 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.457 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.457 "name": "Existed_Raid", 00:14:36.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.457 "strip_size_kb": 64, 00:14:36.457 "state": "configuring", 00:14:36.457 "raid_level": "concat", 00:14:36.457 "superblock": false, 00:14:36.457 "num_base_bdevs": 4, 00:14:36.457 "num_base_bdevs_discovered": 1, 00:14:36.457 "num_base_bdevs_operational": 4, 00:14:36.457 "base_bdevs_list": [ 00:14:36.457 { 00:14:36.457 "name": "BaseBdev1", 00:14:36.457 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:36.457 "is_configured": true, 00:14:36.457 "data_offset": 0, 00:14:36.457 "data_size": 65536 00:14:36.457 }, 00:14:36.457 { 00:14:36.457 "name": "BaseBdev2", 00:14:36.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.457 "is_configured": false, 00:14:36.457 "data_offset": 0, 00:14:36.457 "data_size": 0 00:14:36.457 }, 00:14:36.457 { 00:14:36.457 "name": "BaseBdev3", 00:14:36.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.457 "is_configured": false, 00:14:36.457 "data_offset": 0, 00:14:36.457 "data_size": 0 00:14:36.457 }, 00:14:36.457 { 00:14:36.457 "name": "BaseBdev4", 00:14:36.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.457 "is_configured": false, 00:14:36.457 "data_offset": 0, 00:14:36.457 "data_size": 0 00:14:36.457 } 00:14:36.457 ] 00:14:36.457 }' 00:14:36.457 21:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.457 21:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.025 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:14:37.025 [2024-05-14 21:56:37.562392] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.025 [2024-05-14 21:56:37.562425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b587300 name Existed_Raid, state configuring 00:14:37.025 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:37.282 [2024-05-14 21:56:37.786427] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.282 [2024-05-14 21:56:37.787307] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.282 [2024-05-14 21:56:37.787353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.282 [2024-05-14 21:56:37.787358] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:37.282 [2024-05-14 21:56:37.787367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.282 [2024-05-14 21:56:37.787371] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:37.282 [2024-05-14 21:56:37.787378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.282 21:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.540 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.540 "name": "Existed_Raid", 00:14:37.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.540 "strip_size_kb": 64, 00:14:37.540 "state": "configuring", 00:14:37.540 "raid_level": "concat", 00:14:37.540 "superblock": false, 00:14:37.540 "num_base_bdevs": 4, 00:14:37.540 
"num_base_bdevs_discovered": 1, 00:14:37.540 "num_base_bdevs_operational": 4, 00:14:37.540 "base_bdevs_list": [ 00:14:37.540 { 00:14:37.540 "name": "BaseBdev1", 00:14:37.540 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:37.540 "is_configured": true, 00:14:37.540 "data_offset": 0, 00:14:37.540 "data_size": 65536 00:14:37.540 }, 00:14:37.540 { 00:14:37.540 "name": "BaseBdev2", 00:14:37.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.540 "is_configured": false, 00:14:37.540 "data_offset": 0, 00:14:37.540 "data_size": 0 00:14:37.540 }, 00:14:37.540 { 00:14:37.540 "name": "BaseBdev3", 00:14:37.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.540 "is_configured": false, 00:14:37.540 "data_offset": 0, 00:14:37.540 "data_size": 0 00:14:37.540 }, 00:14:37.540 { 00:14:37.540 "name": "BaseBdev4", 00:14:37.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.540 "is_configured": false, 00:14:37.540 "data_offset": 0, 00:14:37.540 "data_size": 0 00:14:37.540 } 00:14:37.540 ] 00:14:37.540 }' 00:14:37.540 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.540 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.798 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.057 [2024-05-14 21:56:38.590568] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.057 BaseBdev2 00:14:38.057 21:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:14:38.057 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:38.057 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:38.057 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:38.057 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:38.057 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:38.057 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.315 21:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:38.573 [ 00:14:38.573 { 00:14:38.573 "name": "BaseBdev2", 00:14:38.573 "aliases": [ 00:14:38.573 "d6fb46d5-123c-11ef-8c90-4585f0cfab08" 00:14:38.573 ], 00:14:38.573 "product_name": "Malloc disk", 00:14:38.573 "block_size": 512, 00:14:38.573 "num_blocks": 65536, 00:14:38.573 "uuid": "d6fb46d5-123c-11ef-8c90-4585f0cfab08", 00:14:38.573 "assigned_rate_limits": { 00:14:38.573 "rw_ios_per_sec": 0, 00:14:38.573 "rw_mbytes_per_sec": 0, 00:14:38.573 "r_mbytes_per_sec": 0, 00:14:38.573 "w_mbytes_per_sec": 0 00:14:38.573 }, 00:14:38.573 "claimed": true, 00:14:38.573 "claim_type": "exclusive_write", 00:14:38.573 "zoned": false, 00:14:38.573 "supported_io_types": { 00:14:38.573 "read": true, 00:14:38.573 "write": true, 00:14:38.573 "unmap": true, 00:14:38.573 "write_zeroes": true, 00:14:38.573 "flush": true, 00:14:38.573 "reset": true, 00:14:38.573 "compare": false, 00:14:38.573 
"compare_and_write": false, 00:14:38.573 "abort": true, 00:14:38.573 "nvme_admin": false, 00:14:38.573 "nvme_io": false 00:14:38.573 }, 00:14:38.573 "memory_domains": [ 00:14:38.573 { 00:14:38.573 "dma_device_id": "system", 00:14:38.573 "dma_device_type": 1 00:14:38.573 }, 00:14:38.573 { 00:14:38.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.573 "dma_device_type": 2 00:14:38.573 } 00:14:38.573 ], 00:14:38.573 "driver_specific": {} 00:14:38.573 } 00:14:38.573 ] 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.574 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.832 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.832 "name": "Existed_Raid", 00:14:38.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.832 "strip_size_kb": 64, 00:14:38.832 "state": "configuring", 00:14:38.832 "raid_level": "concat", 00:14:38.832 "superblock": false, 00:14:38.832 "num_base_bdevs": 4, 00:14:38.832 "num_base_bdevs_discovered": 2, 00:14:38.832 "num_base_bdevs_operational": 4, 00:14:38.832 "base_bdevs_list": [ 00:14:38.832 { 00:14:38.832 "name": "BaseBdev1", 00:14:38.832 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:38.832 "is_configured": true, 00:14:38.832 "data_offset": 0, 00:14:38.832 "data_size": 65536 00:14:38.832 }, 00:14:38.832 { 00:14:38.832 "name": "BaseBdev2", 00:14:38.832 "uuid": "d6fb46d5-123c-11ef-8c90-4585f0cfab08", 00:14:38.832 "is_configured": true, 00:14:38.832 "data_offset": 0, 00:14:38.832 "data_size": 65536 00:14:38.832 }, 00:14:38.832 { 00:14:38.832 "name": "BaseBdev3", 00:14:38.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.832 "is_configured": false, 00:14:38.832 "data_offset": 0, 00:14:38.832 "data_size": 0 00:14:38.832 }, 00:14:38.832 { 00:14:38.832 "name": "BaseBdev4", 00:14:38.832 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:38.832 "is_configured": false, 00:14:38.832 "data_offset": 0, 00:14:38.832 "data_size": 0 00:14:38.832 } 00:14:38.832 ] 00:14:38.832 }' 00:14:38.832 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.832 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.089 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.348 [2024-05-14 21:56:39.878610] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.348 BaseBdev3 00:14:39.348 21:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:14:39.348 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:39.348 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:39.348 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:39.348 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:39.348 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:39.348 21:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.606 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.864 [ 00:14:39.864 { 00:14:39.864 "name": "BaseBdev3", 00:14:39.864 "aliases": [ 00:14:39.864 "d7bfd162-123c-11ef-8c90-4585f0cfab08" 00:14:39.864 ], 00:14:39.864 "product_name": "Malloc disk", 00:14:39.864 "block_size": 512, 00:14:39.864 "num_blocks": 65536, 00:14:39.864 "uuid": "d7bfd162-123c-11ef-8c90-4585f0cfab08", 00:14:39.864 "assigned_rate_limits": { 00:14:39.864 "rw_ios_per_sec": 0, 00:14:39.864 "rw_mbytes_per_sec": 0, 00:14:39.864 "r_mbytes_per_sec": 0, 00:14:39.864 "w_mbytes_per_sec": 0 00:14:39.864 }, 00:14:39.864 "claimed": true, 00:14:39.864 "claim_type": "exclusive_write", 00:14:39.864 "zoned": false, 00:14:39.864 "supported_io_types": { 00:14:39.864 "read": true, 00:14:39.864 "write": true, 00:14:39.864 "unmap": true, 00:14:39.864 "write_zeroes": true, 00:14:39.864 "flush": true, 00:14:39.864 "reset": true, 00:14:39.864 "compare": false, 00:14:39.864 "compare_and_write": false, 00:14:39.864 "abort": true, 00:14:39.864 "nvme_admin": false, 00:14:39.864 "nvme_io": false 00:14:39.864 }, 00:14:39.864 "memory_domains": [ 00:14:39.864 { 00:14:39.864 "dma_device_id": "system", 00:14:39.864 "dma_device_type": 1 00:14:39.864 }, 00:14:39.864 { 00:14:39.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.864 "dma_device_type": 2 00:14:39.864 } 00:14:39.864 ], 00:14:39.864 "driver_specific": {} 00:14:39.864 } 00:14:39.864 ] 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.864 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.122 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.122 "name": "Existed_Raid", 00:14:40.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.122 "strip_size_kb": 64, 00:14:40.122 "state": "configuring", 00:14:40.122 "raid_level": "concat", 00:14:40.122 "superblock": false, 00:14:40.122 "num_base_bdevs": 4, 00:14:40.122 "num_base_bdevs_discovered": 3, 00:14:40.122 "num_base_bdevs_operational": 4, 00:14:40.122 "base_bdevs_list": [ 00:14:40.122 { 00:14:40.122 "name": "BaseBdev1", 00:14:40.122 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:40.122 "is_configured": true, 00:14:40.122 "data_offset": 0, 00:14:40.122 "data_size": 65536 00:14:40.122 }, 00:14:40.122 { 00:14:40.122 "name": "BaseBdev2", 00:14:40.122 "uuid": "d6fb46d5-123c-11ef-8c90-4585f0cfab08", 00:14:40.122 "is_configured": true, 00:14:40.122 "data_offset": 0, 00:14:40.122 "data_size": 65536 00:14:40.122 }, 00:14:40.122 { 00:14:40.122 "name": "BaseBdev3", 00:14:40.122 "uuid": "d7bfd162-123c-11ef-8c90-4585f0cfab08", 00:14:40.122 "is_configured": true, 00:14:40.122 "data_offset": 0, 00:14:40.122 "data_size": 65536 00:14:40.122 }, 00:14:40.122 { 00:14:40.122 "name": "BaseBdev4", 00:14:40.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.122 "is_configured": false, 00:14:40.122 "data_offset": 0, 00:14:40.122 "data_size": 0 00:14:40.123 } 00:14:40.123 ] 00:14:40.123 }' 00:14:40.123 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.123 21:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.688 21:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.946 [2024-05-14 21:56:41.282687] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.946 [2024-05-14 21:56:41.282719] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b587300 00:14:40.946 [2024-05-14 21:56:41.282724] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 262144, blocklen 512 00:14:40.946 [2024-05-14 21:56:41.282765] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b5e5ec0 00:14:40.946 [2024-05-14 21:56:41.282854] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b587300 00:14:40.946 [2024-05-14 21:56:41.282859] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b587300 00:14:40.946 [2024-05-14 21:56:41.282892] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.946 BaseBdev4 00:14:40.946 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:14:40.946 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:40.946 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:40.946 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:40.946 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:40.946 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:40.946 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.205 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:41.463 [ 00:14:41.463 { 00:14:41.463 "name": "BaseBdev4", 00:14:41.463 "aliases": [ 00:14:41.463 "d8961071-123c-11ef-8c90-4585f0cfab08" 00:14:41.463 ], 00:14:41.463 "product_name": "Malloc disk", 00:14:41.463 "block_size": 512, 00:14:41.463 "num_blocks": 65536, 00:14:41.463 "uuid": "d8961071-123c-11ef-8c90-4585f0cfab08", 00:14:41.463 "assigned_rate_limits": { 00:14:41.463 "rw_ios_per_sec": 0, 00:14:41.463 "rw_mbytes_per_sec": 0, 00:14:41.463 "r_mbytes_per_sec": 0, 00:14:41.463 "w_mbytes_per_sec": 0 00:14:41.463 }, 00:14:41.463 "claimed": true, 00:14:41.463 "claim_type": "exclusive_write", 00:14:41.463 "zoned": false, 00:14:41.463 "supported_io_types": { 00:14:41.463 "read": true, 00:14:41.463 "write": true, 00:14:41.463 "unmap": true, 00:14:41.463 "write_zeroes": true, 00:14:41.463 "flush": true, 00:14:41.463 "reset": true, 00:14:41.463 "compare": false, 00:14:41.463 "compare_and_write": false, 00:14:41.463 "abort": true, 00:14:41.463 "nvme_admin": false, 00:14:41.463 "nvme_io": false 00:14:41.463 }, 00:14:41.463 "memory_domains": [ 00:14:41.463 { 00:14:41.463 "dma_device_id": "system", 00:14:41.463 "dma_device_type": 1 00:14:41.463 }, 00:14:41.463 { 00:14:41.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.463 "dma_device_type": 2 00:14:41.463 } 00:14:41.463 ], 00:14:41.463 "driver_specific": {} 00:14:41.463 } 00:14:41.463 ] 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.463 21:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.722 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.722 "name": "Existed_Raid", 00:14:41.722 "uuid": "d8961762-123c-11ef-8c90-4585f0cfab08", 00:14:41.722 "strip_size_kb": 64, 00:14:41.722 "state": "online", 00:14:41.722 "raid_level": "concat", 00:14:41.722 "superblock": false, 00:14:41.722 "num_base_bdevs": 4, 00:14:41.722 "num_base_bdevs_discovered": 4, 00:14:41.722 "num_base_bdevs_operational": 4, 00:14:41.722 "base_bdevs_list": [ 00:14:41.722 { 00:14:41.722 "name": "BaseBdev1", 00:14:41.722 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:41.722 "is_configured": true, 00:14:41.722 "data_offset": 0, 00:14:41.722 "data_size": 65536 00:14:41.722 }, 00:14:41.722 { 00:14:41.722 "name": "BaseBdev2", 00:14:41.722 "uuid": "d6fb46d5-123c-11ef-8c90-4585f0cfab08", 00:14:41.722 "is_configured": true, 00:14:41.722 "data_offset": 0, 00:14:41.722 "data_size": 65536 00:14:41.722 }, 00:14:41.722 { 00:14:41.722 "name": "BaseBdev3", 00:14:41.722 "uuid": "d7bfd162-123c-11ef-8c90-4585f0cfab08", 00:14:41.722 "is_configured": true, 00:14:41.722 "data_offset": 0, 00:14:41.722 "data_size": 65536 00:14:41.722 }, 00:14:41.722 { 00:14:41.722 "name": "BaseBdev4", 00:14:41.722 "uuid": "d8961071-123c-11ef-8c90-4585f0cfab08", 00:14:41.722 "is_configured": true, 00:14:41.722 "data_offset": 0, 00:14:41.722 "data_size": 65536 00:14:41.722 } 00:14:41.722 ] 00:14:41.722 }' 00:14:41.722 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.722 21:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.979 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:14:41.979 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:41.979 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:41.979 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:41.979 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:41.979 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:41.979 
21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:41.979 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:42.237 [2024-05-14 21:56:42.738640] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.237 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:42.237 "name": "Existed_Raid", 00:14:42.237 "aliases": [ 00:14:42.237 "d8961762-123c-11ef-8c90-4585f0cfab08" 00:14:42.237 ], 00:14:42.237 "product_name": "Raid Volume", 00:14:42.237 "block_size": 512, 00:14:42.237 "num_blocks": 262144, 00:14:42.237 "uuid": "d8961762-123c-11ef-8c90-4585f0cfab08", 00:14:42.237 "assigned_rate_limits": { 00:14:42.237 "rw_ios_per_sec": 0, 00:14:42.237 "rw_mbytes_per_sec": 0, 00:14:42.237 "r_mbytes_per_sec": 0, 00:14:42.237 "w_mbytes_per_sec": 0 00:14:42.237 }, 00:14:42.237 "claimed": false, 00:14:42.237 "zoned": false, 00:14:42.237 "supported_io_types": { 00:14:42.237 "read": true, 00:14:42.237 "write": true, 00:14:42.237 "unmap": true, 00:14:42.237 "write_zeroes": true, 00:14:42.237 "flush": true, 00:14:42.237 "reset": true, 00:14:42.237 "compare": false, 00:14:42.237 "compare_and_write": false, 00:14:42.237 "abort": false, 00:14:42.237 "nvme_admin": false, 00:14:42.237 "nvme_io": false 00:14:42.237 }, 00:14:42.237 "memory_domains": [ 00:14:42.237 { 00:14:42.237 "dma_device_id": "system", 00:14:42.237 "dma_device_type": 1 00:14:42.237 }, 00:14:42.237 { 00:14:42.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.237 "dma_device_type": 2 00:14:42.237 }, 00:14:42.237 { 00:14:42.237 "dma_device_id": "system", 00:14:42.237 "dma_device_type": 1 00:14:42.237 }, 00:14:42.237 { 00:14:42.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.237 "dma_device_type": 2 00:14:42.237 }, 00:14:42.237 { 00:14:42.237 "dma_device_id": "system", 00:14:42.237 "dma_device_type": 1 00:14:42.237 }, 00:14:42.238 { 00:14:42.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.238 "dma_device_type": 2 00:14:42.238 }, 00:14:42.238 { 00:14:42.238 "dma_device_id": "system", 00:14:42.238 "dma_device_type": 1 00:14:42.238 }, 00:14:42.238 { 00:14:42.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.238 "dma_device_type": 2 00:14:42.238 } 00:14:42.238 ], 00:14:42.238 "driver_specific": { 00:14:42.238 "raid": { 00:14:42.238 "uuid": "d8961762-123c-11ef-8c90-4585f0cfab08", 00:14:42.238 "strip_size_kb": 64, 00:14:42.238 "state": "online", 00:14:42.238 "raid_level": "concat", 00:14:42.238 "superblock": false, 00:14:42.238 "num_base_bdevs": 4, 00:14:42.238 "num_base_bdevs_discovered": 4, 00:14:42.238 "num_base_bdevs_operational": 4, 00:14:42.238 "base_bdevs_list": [ 00:14:42.238 { 00:14:42.238 "name": "BaseBdev1", 00:14:42.238 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:42.238 "is_configured": true, 00:14:42.238 "data_offset": 0, 00:14:42.238 "data_size": 65536 00:14:42.238 }, 00:14:42.238 { 00:14:42.238 "name": "BaseBdev2", 00:14:42.238 "uuid": "d6fb46d5-123c-11ef-8c90-4585f0cfab08", 00:14:42.238 "is_configured": true, 00:14:42.238 "data_offset": 0, 00:14:42.238 "data_size": 65536 00:14:42.238 }, 00:14:42.238 { 00:14:42.238 "name": "BaseBdev3", 00:14:42.238 "uuid": "d7bfd162-123c-11ef-8c90-4585f0cfab08", 00:14:42.238 "is_configured": true, 00:14:42.238 "data_offset": 0, 00:14:42.238 "data_size": 65536 00:14:42.238 }, 00:14:42.238 { 00:14:42.238 "name": "BaseBdev4", 00:14:42.238 
"uuid": "d8961071-123c-11ef-8c90-4585f0cfab08", 00:14:42.238 "is_configured": true, 00:14:42.238 "data_offset": 0, 00:14:42.238 "data_size": 65536 00:14:42.238 } 00:14:42.238 ] 00:14:42.238 } 00:14:42.238 } 00:14:42.238 }' 00:14:42.238 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.238 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:14:42.238 BaseBdev2 00:14:42.238 BaseBdev3 00:14:42.238 BaseBdev4' 00:14:42.238 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:42.238 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:42.238 21:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:42.496 "name": "BaseBdev1", 00:14:42.496 "aliases": [ 00:14:42.496 "d582120a-123c-11ef-8c90-4585f0cfab08" 00:14:42.496 ], 00:14:42.496 "product_name": "Malloc disk", 00:14:42.496 "block_size": 512, 00:14:42.496 "num_blocks": 65536, 00:14:42.496 "uuid": "d582120a-123c-11ef-8c90-4585f0cfab08", 00:14:42.496 "assigned_rate_limits": { 00:14:42.496 "rw_ios_per_sec": 0, 00:14:42.496 "rw_mbytes_per_sec": 0, 00:14:42.496 "r_mbytes_per_sec": 0, 00:14:42.496 "w_mbytes_per_sec": 0 00:14:42.496 }, 00:14:42.496 "claimed": true, 00:14:42.496 "claim_type": "exclusive_write", 00:14:42.496 "zoned": false, 00:14:42.496 "supported_io_types": { 00:14:42.496 "read": true, 00:14:42.496 "write": true, 00:14:42.496 "unmap": true, 00:14:42.496 "write_zeroes": true, 00:14:42.496 "flush": true, 00:14:42.496 "reset": true, 00:14:42.496 "compare": false, 00:14:42.496 "compare_and_write": false, 00:14:42.496 "abort": true, 00:14:42.496 "nvme_admin": false, 00:14:42.496 "nvme_io": false 00:14:42.496 }, 00:14:42.496 "memory_domains": [ 00:14:42.496 { 00:14:42.496 "dma_device_id": "system", 00:14:42.496 "dma_device_type": 1 00:14:42.496 }, 00:14:42.496 { 00:14:42.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.496 "dma_device_type": 2 00:14:42.496 } 00:14:42.496 ], 00:14:42.496 "driver_specific": {} 00:14:42.496 }' 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:42.496 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:42.754 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:42.754 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:42.754 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:42.754 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:42.754 21:56:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:42.754 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:42.754 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:42.754 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:43.011 "name": "BaseBdev2", 00:14:43.011 "aliases": [ 00:14:43.011 "d6fb46d5-123c-11ef-8c90-4585f0cfab08" 00:14:43.011 ], 00:14:43.011 "product_name": "Malloc disk", 00:14:43.011 "block_size": 512, 00:14:43.011 "num_blocks": 65536, 00:14:43.011 "uuid": "d6fb46d5-123c-11ef-8c90-4585f0cfab08", 00:14:43.011 "assigned_rate_limits": { 00:14:43.011 "rw_ios_per_sec": 0, 00:14:43.011 "rw_mbytes_per_sec": 0, 00:14:43.011 "r_mbytes_per_sec": 0, 00:14:43.011 "w_mbytes_per_sec": 0 00:14:43.011 }, 00:14:43.011 "claimed": true, 00:14:43.011 "claim_type": "exclusive_write", 00:14:43.011 "zoned": false, 00:14:43.011 "supported_io_types": { 00:14:43.011 "read": true, 00:14:43.011 "write": true, 00:14:43.011 "unmap": true, 00:14:43.011 "write_zeroes": true, 00:14:43.011 "flush": true, 00:14:43.011 "reset": true, 00:14:43.011 "compare": false, 00:14:43.011 "compare_and_write": false, 00:14:43.011 "abort": true, 00:14:43.011 "nvme_admin": false, 00:14:43.011 "nvme_io": false 00:14:43.011 }, 00:14:43.011 "memory_domains": [ 00:14:43.011 { 00:14:43.011 "dma_device_id": "system", 00:14:43.011 "dma_device_type": 1 00:14:43.011 }, 00:14:43.011 { 00:14:43.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.011 "dma_device_type": 2 00:14:43.011 } 00:14:43.011 ], 00:14:43.011 "driver_specific": {} 00:14:43.011 }' 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:43.011 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 
00:14:43.270 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:43.270 "name": "BaseBdev3", 00:14:43.270 "aliases": [ 00:14:43.270 "d7bfd162-123c-11ef-8c90-4585f0cfab08" 00:14:43.270 ], 00:14:43.270 "product_name": "Malloc disk", 00:14:43.270 "block_size": 512, 00:14:43.270 "num_blocks": 65536, 00:14:43.270 "uuid": "d7bfd162-123c-11ef-8c90-4585f0cfab08", 00:14:43.270 "assigned_rate_limits": { 00:14:43.270 "rw_ios_per_sec": 0, 00:14:43.270 "rw_mbytes_per_sec": 0, 00:14:43.270 "r_mbytes_per_sec": 0, 00:14:43.270 "w_mbytes_per_sec": 0 00:14:43.270 }, 00:14:43.270 "claimed": true, 00:14:43.270 "claim_type": "exclusive_write", 00:14:43.270 "zoned": false, 00:14:43.270 "supported_io_types": { 00:14:43.270 "read": true, 00:14:43.270 "write": true, 00:14:43.270 "unmap": true, 00:14:43.270 "write_zeroes": true, 00:14:43.270 "flush": true, 00:14:43.270 "reset": true, 00:14:43.270 "compare": false, 00:14:43.270 "compare_and_write": false, 00:14:43.270 "abort": true, 00:14:43.270 "nvme_admin": false, 00:14:43.270 "nvme_io": false 00:14:43.270 }, 00:14:43.270 "memory_domains": [ 00:14:43.270 { 00:14:43.270 "dma_device_id": "system", 00:14:43.270 "dma_device_type": 1 00:14:43.270 }, 00:14:43.270 { 00:14:43.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.271 "dma_device_type": 2 00:14:43.271 } 00:14:43.271 ], 00:14:43.271 "driver_specific": {} 00:14:43.271 }' 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:43.271 21:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:43.837 "name": "BaseBdev4", 00:14:43.837 "aliases": [ 00:14:43.837 "d8961071-123c-11ef-8c90-4585f0cfab08" 00:14:43.837 ], 00:14:43.837 "product_name": "Malloc disk", 00:14:43.837 "block_size": 512, 00:14:43.837 "num_blocks": 65536, 00:14:43.837 "uuid": "d8961071-123c-11ef-8c90-4585f0cfab08", 00:14:43.837 "assigned_rate_limits": { 00:14:43.837 "rw_ios_per_sec": 0, 00:14:43.837 
"rw_mbytes_per_sec": 0, 00:14:43.837 "r_mbytes_per_sec": 0, 00:14:43.837 "w_mbytes_per_sec": 0 00:14:43.837 }, 00:14:43.837 "claimed": true, 00:14:43.837 "claim_type": "exclusive_write", 00:14:43.837 "zoned": false, 00:14:43.837 "supported_io_types": { 00:14:43.837 "read": true, 00:14:43.837 "write": true, 00:14:43.837 "unmap": true, 00:14:43.837 "write_zeroes": true, 00:14:43.837 "flush": true, 00:14:43.837 "reset": true, 00:14:43.837 "compare": false, 00:14:43.837 "compare_and_write": false, 00:14:43.837 "abort": true, 00:14:43.837 "nvme_admin": false, 00:14:43.837 "nvme_io": false 00:14:43.837 }, 00:14:43.837 "memory_domains": [ 00:14:43.837 { 00:14:43.837 "dma_device_id": "system", 00:14:43.837 "dma_device_type": 1 00:14:43.837 }, 00:14:43.837 { 00:14:43.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.837 "dma_device_type": 2 00:14:43.837 } 00:14:43.837 ], 00:14:43.837 "driver_specific": {} 00:14:43.837 }' 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:43.837 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:44.096 [2024-05-14 21:56:44.458649] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.096 [2024-05-14 21:56:44.458679] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.096 [2024-05-14 21:56:44.458695] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.096 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.354 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.354 "name": "Existed_Raid", 00:14:44.354 "uuid": "d8961762-123c-11ef-8c90-4585f0cfab08", 00:14:44.354 "strip_size_kb": 64, 00:14:44.354 "state": "offline", 00:14:44.354 "raid_level": "concat", 00:14:44.354 "superblock": false, 00:14:44.354 "num_base_bdevs": 4, 00:14:44.354 "num_base_bdevs_discovered": 3, 00:14:44.354 "num_base_bdevs_operational": 3, 00:14:44.354 "base_bdevs_list": [ 00:14:44.354 { 00:14:44.354 "name": null, 00:14:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.354 "is_configured": false, 00:14:44.354 "data_offset": 0, 00:14:44.354 "data_size": 65536 00:14:44.354 }, 00:14:44.354 { 00:14:44.354 "name": "BaseBdev2", 00:14:44.354 "uuid": "d6fb46d5-123c-11ef-8c90-4585f0cfab08", 00:14:44.354 "is_configured": true, 00:14:44.354 "data_offset": 0, 00:14:44.354 "data_size": 65536 00:14:44.354 }, 00:14:44.354 { 00:14:44.354 "name": "BaseBdev3", 00:14:44.354 "uuid": "d7bfd162-123c-11ef-8c90-4585f0cfab08", 00:14:44.354 "is_configured": true, 00:14:44.354 "data_offset": 0, 00:14:44.354 "data_size": 65536 00:14:44.354 }, 00:14:44.354 { 00:14:44.354 "name": "BaseBdev4", 00:14:44.354 "uuid": "d8961071-123c-11ef-8c90-4585f0cfab08", 00:14:44.354 "is_configured": true, 00:14:44.354 "data_offset": 0, 00:14:44.354 "data_size": 65536 00:14:44.354 } 00:14:44.354 ] 00:14:44.354 }' 00:14:44.354 21:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.354 21:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.611 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:44.611 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.611 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.611 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:44.868 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:44.868 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.868 21:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:45.125 [2024-05-14 21:56:45.584932] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.125 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.125 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.125 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.125 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:45.383 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:45.383 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.383 21:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:45.640 [2024-05-14 21:56:46.099029] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:45.640 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.640 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.640 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.640 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:45.898 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:45.898 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.898 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:46.156 [2024-05-14 21:56:46.628834] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:46.156 [2024-05-14 21:56:46.628865] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b587300 name Existed_Raid, state offline 00:14:46.156 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.156 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.156 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.156 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.413 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:46.413 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:46.413 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:14:46.413 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:46.413 21:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:46.413 21:56:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.670 BaseBdev2 00:14:46.671 21:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:46.671 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:46.671 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:46.671 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:46.671 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:46.671 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:46.671 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.984 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.241 [ 00:14:47.241 { 00:14:47.241 "name": "BaseBdev2", 00:14:47.241 "aliases": [ 00:14:47.241 "dc1448a5-123c-11ef-8c90-4585f0cfab08" 00:14:47.241 ], 00:14:47.241 "product_name": "Malloc disk", 00:14:47.241 "block_size": 512, 00:14:47.241 "num_blocks": 65536, 00:14:47.242 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:47.242 "assigned_rate_limits": { 00:14:47.242 "rw_ios_per_sec": 0, 00:14:47.242 "rw_mbytes_per_sec": 0, 00:14:47.242 "r_mbytes_per_sec": 0, 00:14:47.242 "w_mbytes_per_sec": 0 00:14:47.242 }, 00:14:47.242 "claimed": false, 00:14:47.242 "zoned": false, 00:14:47.242 "supported_io_types": { 00:14:47.242 "read": true, 00:14:47.242 "write": true, 00:14:47.242 "unmap": true, 00:14:47.242 "write_zeroes": true, 00:14:47.242 "flush": true, 00:14:47.242 "reset": true, 00:14:47.242 "compare": false, 00:14:47.242 "compare_and_write": false, 00:14:47.242 "abort": true, 00:14:47.242 "nvme_admin": false, 00:14:47.242 "nvme_io": false 00:14:47.242 }, 00:14:47.242 "memory_domains": [ 00:14:47.242 { 00:14:47.242 "dma_device_id": "system", 00:14:47.242 "dma_device_type": 1 00:14:47.242 }, 00:14:47.242 { 00:14:47.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.242 "dma_device_type": 2 00:14:47.242 } 00:14:47.242 ], 00:14:47.242 "driver_specific": {} 00:14:47.242 } 00:14:47.242 ] 00:14:47.242 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:47.242 21:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:47.242 21:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:47.242 21:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.500 BaseBdev3 00:14:47.500 21:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:47.500 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:47.500 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:47.500 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local i 00:14:47.500 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:47.500 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:47.500 21:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.758 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.016 [ 00:14:48.016 { 00:14:48.016 "name": "BaseBdev3", 00:14:48.016 "aliases": [ 00:14:48.016 "dc88dc56-123c-11ef-8c90-4585f0cfab08" 00:14:48.016 ], 00:14:48.016 "product_name": "Malloc disk", 00:14:48.016 "block_size": 512, 00:14:48.016 "num_blocks": 65536, 00:14:48.016 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:48.016 "assigned_rate_limits": { 00:14:48.016 "rw_ios_per_sec": 0, 00:14:48.016 "rw_mbytes_per_sec": 0, 00:14:48.016 "r_mbytes_per_sec": 0, 00:14:48.016 "w_mbytes_per_sec": 0 00:14:48.016 }, 00:14:48.016 "claimed": false, 00:14:48.016 "zoned": false, 00:14:48.016 "supported_io_types": { 00:14:48.016 "read": true, 00:14:48.016 "write": true, 00:14:48.016 "unmap": true, 00:14:48.016 "write_zeroes": true, 00:14:48.016 "flush": true, 00:14:48.016 "reset": true, 00:14:48.016 "compare": false, 00:14:48.016 "compare_and_write": false, 00:14:48.016 "abort": true, 00:14:48.016 "nvme_admin": false, 00:14:48.016 "nvme_io": false 00:14:48.016 }, 00:14:48.016 "memory_domains": [ 00:14:48.016 { 00:14:48.016 "dma_device_id": "system", 00:14:48.016 "dma_device_type": 1 00:14:48.016 }, 00:14:48.016 { 00:14:48.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.016 "dma_device_type": 2 00:14:48.016 } 00:14:48.016 ], 00:14:48.016 "driver_specific": {} 00:14:48.016 } 00:14:48.016 ] 00:14:48.016 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:48.016 21:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:48.016 21:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:48.016 21:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:48.274 BaseBdev4 00:14:48.274 21:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:14:48.274 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:14:48.274 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:48.274 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:48.274 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:48.274 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:48.274 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.532 21:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 
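The create-and-wait pattern traced above (bdev_malloc_create followed by the waitforbdev helper, i.e. bdev_wait_for_examine plus a bounded bdev_get_bdevs poll) can be repeated by hand against the same RPC socket. A minimal sketch, assuming a running SPDK target behind /var/tmp/spdk-raid.sock; the 32 MiB / 512-byte-block geometry is the one the JSON below reports as num_blocks 65536, block_size 512:

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b BaseBdev4   # 32 MiB malloc disk, 512-byte blocks
  $RPC bdev_wait_for_examine                    # let bdev examine callbacks finish
  $RPC bdev_get_bdevs -b BaseBdev4 -t 2000      # wait up to 2000 ms for the bdev to appear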
00:14:48.532 [ 00:14:48.532 { 00:14:48.532 "name": "BaseBdev4", 00:14:48.532 "aliases": [ 00:14:48.532 "dcf61d68-123c-11ef-8c90-4585f0cfab08" 00:14:48.532 ], 00:14:48.532 "product_name": "Malloc disk", 00:14:48.532 "block_size": 512, 00:14:48.532 "num_blocks": 65536, 00:14:48.532 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:48.532 "assigned_rate_limits": { 00:14:48.532 "rw_ios_per_sec": 0, 00:14:48.532 "rw_mbytes_per_sec": 0, 00:14:48.532 "r_mbytes_per_sec": 0, 00:14:48.532 "w_mbytes_per_sec": 0 00:14:48.532 }, 00:14:48.532 "claimed": false, 00:14:48.532 "zoned": false, 00:14:48.532 "supported_io_types": { 00:14:48.532 "read": true, 00:14:48.532 "write": true, 00:14:48.532 "unmap": true, 00:14:48.532 "write_zeroes": true, 00:14:48.532 "flush": true, 00:14:48.532 "reset": true, 00:14:48.532 "compare": false, 00:14:48.532 "compare_and_write": false, 00:14:48.532 "abort": true, 00:14:48.532 "nvme_admin": false, 00:14:48.532 "nvme_io": false 00:14:48.532 }, 00:14:48.532 "memory_domains": [ 00:14:48.532 { 00:14:48.532 "dma_device_id": "system", 00:14:48.532 "dma_device_type": 1 00:14:48.532 }, 00:14:48.532 { 00:14:48.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.532 "dma_device_type": 2 00:14:48.532 } 00:14:48.532 ], 00:14:48.532 "driver_specific": {} 00:14:48.532 } 00:14:48.532 ] 00:14:48.532 21:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:48.532 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:48.532 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:48.532 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:48.790 [2024-05-14 21:56:49.346966] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.790 [2024-05-14 21:56:49.347015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.790 [2024-05-14 21:56:49.347025] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.790 [2024-05-14 21:56:49.347570] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.790 [2024-05-14 21:56:49.347589] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.790 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.047 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.047 "name": "Existed_Raid", 00:14:49.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.047 "strip_size_kb": 64, 00:14:49.047 "state": "configuring", 00:14:49.047 "raid_level": "concat", 00:14:49.047 "superblock": false, 00:14:49.047 "num_base_bdevs": 4, 00:14:49.047 "num_base_bdevs_discovered": 3, 00:14:49.047 "num_base_bdevs_operational": 4, 00:14:49.047 "base_bdevs_list": [ 00:14:49.047 { 00:14:49.047 "name": "BaseBdev1", 00:14:49.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.047 "is_configured": false, 00:14:49.047 "data_offset": 0, 00:14:49.047 "data_size": 0 00:14:49.047 }, 00:14:49.047 { 00:14:49.047 "name": "BaseBdev2", 00:14:49.047 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:49.047 "is_configured": true, 00:14:49.047 "data_offset": 0, 00:14:49.047 "data_size": 65536 00:14:49.047 }, 00:14:49.047 { 00:14:49.047 "name": "BaseBdev3", 00:14:49.047 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:49.047 "is_configured": true, 00:14:49.047 "data_offset": 0, 00:14:49.047 "data_size": 65536 00:14:49.047 }, 00:14:49.047 { 00:14:49.047 "name": "BaseBdev4", 00:14:49.047 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:49.047 "is_configured": true, 00:14:49.047 "data_offset": 0, 00:14:49.047 "data_size": 65536 00:14:49.047 } 00:14:49.047 ] 00:14:49.047 }' 00:14:49.047 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.047 21:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.613 21:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:49.871 [2024-05-14 21:56:50.202968] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
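At this point the raid was created while BaseBdev1 did not exist, so the script expects Existed_Raid to sit in the "configuring" state with 3 of 4 base bdevs discovered. The same check can be made by hand with the RPC call and jq filter seen in the trace; the explicit comparison below is only an illustrative sketch:

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r .state <<< "$info")
  discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
  [ "$state" = configuring ] && [ "$discovered" -eq 3 ] || echo "unexpected: $state ($discovered discovered)"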
00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.871 "name": "Existed_Raid", 00:14:49.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.871 "strip_size_kb": 64, 00:14:49.871 "state": "configuring", 00:14:49.871 "raid_level": "concat", 00:14:49.871 "superblock": false, 00:14:49.871 "num_base_bdevs": 4, 00:14:49.871 "num_base_bdevs_discovered": 2, 00:14:49.871 "num_base_bdevs_operational": 4, 00:14:49.871 "base_bdevs_list": [ 00:14:49.871 { 00:14:49.871 "name": "BaseBdev1", 00:14:49.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.871 "is_configured": false, 00:14:49.871 "data_offset": 0, 00:14:49.871 "data_size": 0 00:14:49.871 }, 00:14:49.871 { 00:14:49.871 "name": null, 00:14:49.871 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:49.871 "is_configured": false, 00:14:49.871 "data_offset": 0, 00:14:49.871 "data_size": 65536 00:14:49.871 }, 00:14:49.871 { 00:14:49.871 "name": "BaseBdev3", 00:14:49.871 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:49.871 "is_configured": true, 00:14:49.871 "data_offset": 0, 00:14:49.871 "data_size": 65536 00:14:49.871 }, 00:14:49.871 { 00:14:49.871 "name": "BaseBdev4", 00:14:49.871 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:49.871 "is_configured": true, 00:14:49.871 "data_offset": 0, 00:14:49.871 "data_size": 65536 00:14:49.871 } 00:14:49.871 ] 00:14:49.871 }' 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.871 21:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.437 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.437 21:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:50.437 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:50.437 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.695 [2024-05-14 21:56:51.231115] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.695 BaseBdev1 00:14:50.695 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:50.695 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:50.695 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:50.695 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:50.695 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:50.695 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:50.695 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.952 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.211 [ 00:14:51.211 { 00:14:51.211 "name": "BaseBdev1", 00:14:51.211 "aliases": [ 00:14:51.211 "de8412cf-123c-11ef-8c90-4585f0cfab08" 00:14:51.211 ], 00:14:51.211 "product_name": "Malloc disk", 00:14:51.211 "block_size": 512, 00:14:51.211 "num_blocks": 65536, 00:14:51.211 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:51.211 "assigned_rate_limits": { 00:14:51.211 "rw_ios_per_sec": 0, 00:14:51.211 "rw_mbytes_per_sec": 0, 00:14:51.211 "r_mbytes_per_sec": 0, 00:14:51.211 "w_mbytes_per_sec": 0 00:14:51.211 }, 00:14:51.211 "claimed": true, 00:14:51.211 "claim_type": "exclusive_write", 00:14:51.211 "zoned": false, 00:14:51.211 "supported_io_types": { 00:14:51.211 "read": true, 00:14:51.211 "write": true, 00:14:51.211 "unmap": true, 00:14:51.211 "write_zeroes": true, 00:14:51.211 "flush": true, 00:14:51.211 "reset": true, 00:14:51.211 "compare": false, 00:14:51.211 "compare_and_write": false, 00:14:51.211 "abort": true, 00:14:51.211 "nvme_admin": false, 00:14:51.211 "nvme_io": false 00:14:51.211 }, 00:14:51.211 "memory_domains": [ 00:14:51.211 { 00:14:51.211 "dma_device_id": "system", 00:14:51.211 "dma_device_type": 1 00:14:51.211 }, 00:14:51.211 { 00:14:51.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.211 "dma_device_type": 2 00:14:51.211 } 00:14:51.211 ], 00:14:51.211 "driver_specific": {} 00:14:51.211 } 00:14:51.211 ] 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:51.211 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:51.212 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:51.212 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:51.212 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.212 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.469 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.469 "name": "Existed_Raid", 00:14:51.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.469 "strip_size_kb": 64, 00:14:51.469 "state": "configuring", 00:14:51.469 "raid_level": "concat", 00:14:51.469 "superblock": false, 00:14:51.469 
"num_base_bdevs": 4, 00:14:51.469 "num_base_bdevs_discovered": 3, 00:14:51.469 "num_base_bdevs_operational": 4, 00:14:51.469 "base_bdevs_list": [ 00:14:51.469 { 00:14:51.469 "name": "BaseBdev1", 00:14:51.470 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:51.470 "is_configured": true, 00:14:51.470 "data_offset": 0, 00:14:51.470 "data_size": 65536 00:14:51.470 }, 00:14:51.470 { 00:14:51.470 "name": null, 00:14:51.470 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:51.470 "is_configured": false, 00:14:51.470 "data_offset": 0, 00:14:51.470 "data_size": 65536 00:14:51.470 }, 00:14:51.470 { 00:14:51.470 "name": "BaseBdev3", 00:14:51.470 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:51.470 "is_configured": true, 00:14:51.470 "data_offset": 0, 00:14:51.470 "data_size": 65536 00:14:51.470 }, 00:14:51.470 { 00:14:51.470 "name": "BaseBdev4", 00:14:51.470 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:51.470 "is_configured": true, 00:14:51.470 "data_offset": 0, 00:14:51.470 "data_size": 65536 00:14:51.470 } 00:14:51.470 ] 00:14:51.470 }' 00:14:51.470 21:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.470 21:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.727 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:51.727 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.984 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:51.984 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:52.241 [2024-05-14 21:56:52.815015] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.241 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.499 21:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.757 21:56:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.757 "name": "Existed_Raid", 00:14:52.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.757 "strip_size_kb": 64, 00:14:52.757 "state": "configuring", 00:14:52.757 "raid_level": "concat", 00:14:52.757 "superblock": false, 00:14:52.757 "num_base_bdevs": 4, 00:14:52.757 "num_base_bdevs_discovered": 2, 00:14:52.757 "num_base_bdevs_operational": 4, 00:14:52.757 "base_bdevs_list": [ 00:14:52.757 { 00:14:52.757 "name": "BaseBdev1", 00:14:52.757 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:52.757 "is_configured": true, 00:14:52.757 "data_offset": 0, 00:14:52.757 "data_size": 65536 00:14:52.757 }, 00:14:52.757 { 00:14:52.757 "name": null, 00:14:52.757 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:52.757 "is_configured": false, 00:14:52.757 "data_offset": 0, 00:14:52.757 "data_size": 65536 00:14:52.757 }, 00:14:52.757 { 00:14:52.757 "name": null, 00:14:52.757 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:52.757 "is_configured": false, 00:14:52.757 "data_offset": 0, 00:14:52.757 "data_size": 65536 00:14:52.757 }, 00:14:52.757 { 00:14:52.757 "name": "BaseBdev4", 00:14:52.757 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:52.757 "is_configured": true, 00:14:52.757 "data_offset": 0, 00:14:52.757 "data_size": 65536 00:14:52.757 } 00:14:52.757 ] 00:14:52.757 }' 00:14:52.757 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.757 21:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.015 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.015 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:53.273 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:53.273 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:53.532 [2024-05-14 21:56:53.919039] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.532 21:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.790 21:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.790 "name": "Existed_Raid", 00:14:53.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.790 "strip_size_kb": 64, 00:14:53.790 "state": "configuring", 00:14:53.790 "raid_level": "concat", 00:14:53.790 "superblock": false, 00:14:53.790 "num_base_bdevs": 4, 00:14:53.790 "num_base_bdevs_discovered": 3, 00:14:53.790 "num_base_bdevs_operational": 4, 00:14:53.790 "base_bdevs_list": [ 00:14:53.790 { 00:14:53.790 "name": "BaseBdev1", 00:14:53.790 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:53.790 "is_configured": true, 00:14:53.790 "data_offset": 0, 00:14:53.790 "data_size": 65536 00:14:53.790 }, 00:14:53.790 { 00:14:53.790 "name": null, 00:14:53.790 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:53.790 "is_configured": false, 00:14:53.790 "data_offset": 0, 00:14:53.790 "data_size": 65536 00:14:53.790 }, 00:14:53.790 { 00:14:53.790 "name": "BaseBdev3", 00:14:53.790 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:53.790 "is_configured": true, 00:14:53.790 "data_offset": 0, 00:14:53.790 "data_size": 65536 00:14:53.790 }, 00:14:53.790 { 00:14:53.790 "name": "BaseBdev4", 00:14:53.790 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:53.790 "is_configured": true, 00:14:53.790 "data_offset": 0, 00:14:53.790 "data_size": 65536 00:14:53.790 } 00:14:53.790 ] 00:14:53.790 }' 00:14:53.790 21:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.790 21:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.067 21:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.067 21:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.325 21:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:14:54.325 21:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:54.584 [2024-05-14 21:56:55.075067] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.584 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.842 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.842 "name": "Existed_Raid", 00:14:54.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.842 "strip_size_kb": 64, 00:14:54.842 "state": "configuring", 00:14:54.842 "raid_level": "concat", 00:14:54.842 "superblock": false, 00:14:54.842 "num_base_bdevs": 4, 00:14:54.842 "num_base_bdevs_discovered": 2, 00:14:54.842 "num_base_bdevs_operational": 4, 00:14:54.842 "base_bdevs_list": [ 00:14:54.842 { 00:14:54.842 "name": null, 00:14:54.842 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:54.842 "is_configured": false, 00:14:54.842 "data_offset": 0, 00:14:54.842 "data_size": 65536 00:14:54.842 }, 00:14:54.842 { 00:14:54.842 "name": null, 00:14:54.842 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:54.842 "is_configured": false, 00:14:54.842 "data_offset": 0, 00:14:54.842 "data_size": 65536 00:14:54.842 }, 00:14:54.842 { 00:14:54.842 "name": "BaseBdev3", 00:14:54.842 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:54.842 "is_configured": true, 00:14:54.842 "data_offset": 0, 00:14:54.842 "data_size": 65536 00:14:54.842 }, 00:14:54.842 { 00:14:54.842 "name": "BaseBdev4", 00:14:54.842 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:54.842 "is_configured": true, 00:14:54.842 "data_offset": 0, 00:14:54.842 "data_size": 65536 00:14:54.842 } 00:14:54.842 ] 00:14:54.842 }' 00:14:54.842 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.842 21:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.100 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.100 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.358 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:14:55.358 21:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:55.616 [2024-05-14 21:56:56.145054] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
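Deleting BaseBdev1 out from under the still-configuring raid leaves the array in "configuring" with only 2 base bdevs discovered, and slot 0 of base_bdevs_list drops back to is_configured=false, which the script verifies next. A hand-run equivalent of that per-slot check, reusing the jq path from the trace (treating false as the expected value is an assumption drawn from this run):

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  slot0=$($RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured')
  [ "$slot0" = false ] && echo "slot 0 unconfigured after deleting BaseBdev1"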
00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.616 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.874 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.874 "name": "Existed_Raid", 00:14:55.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.874 "strip_size_kb": 64, 00:14:55.874 "state": "configuring", 00:14:55.874 "raid_level": "concat", 00:14:55.874 "superblock": false, 00:14:55.874 "num_base_bdevs": 4, 00:14:55.874 "num_base_bdevs_discovered": 3, 00:14:55.874 "num_base_bdevs_operational": 4, 00:14:55.874 "base_bdevs_list": [ 00:14:55.874 { 00:14:55.874 "name": null, 00:14:55.874 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:55.874 "is_configured": false, 00:14:55.874 "data_offset": 0, 00:14:55.874 "data_size": 65536 00:14:55.874 }, 00:14:55.874 { 00:14:55.874 "name": "BaseBdev2", 00:14:55.874 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:55.874 "is_configured": true, 00:14:55.874 "data_offset": 0, 00:14:55.874 "data_size": 65536 00:14:55.874 }, 00:14:55.874 { 00:14:55.874 "name": "BaseBdev3", 00:14:55.874 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:55.874 "is_configured": true, 00:14:55.874 "data_offset": 0, 00:14:55.874 "data_size": 65536 00:14:55.874 }, 00:14:55.874 { 00:14:55.874 "name": "BaseBdev4", 00:14:55.874 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:55.874 "is_configured": true, 00:14:55.874 "data_offset": 0, 00:14:55.874 "data_size": 65536 00:14:55.874 } 00:14:55.874 ] 00:14:55.874 }' 00:14:55.874 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.874 21:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.132 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.132 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.697 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:14:56.697 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.697 21:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:56.697 21:56:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u de8412cf-123c-11ef-8c90-4585f0cfab08 00:14:56.955 [2024-05-14 21:56:57.465204] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:56.955 [2024-05-14 21:56:57.465233] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b587300 00:14:56.955 [2024-05-14 21:56:57.465237] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:56.955 [2024-05-14 21:56:57.465261] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b5e5e20 00:14:56.955 [2024-05-14 21:56:57.465333] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b587300 00:14:56.955 [2024-05-14 21:56:57.465338] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b587300 00:14:56.955 [2024-05-14 21:56:57.465372] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.955 NewBaseBdev 00:14:56.955 21:56:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:14:56.955 21:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:56.955 21:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:56.955 21:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:56.955 21:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:56.955 21:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:56.955 21:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.213 21:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:57.471 [ 00:14:57.471 { 00:14:57.471 "name": "NewBaseBdev", 00:14:57.471 "aliases": [ 00:14:57.471 "de8412cf-123c-11ef-8c90-4585f0cfab08" 00:14:57.471 ], 00:14:57.471 "product_name": "Malloc disk", 00:14:57.471 "block_size": 512, 00:14:57.471 "num_blocks": 65536, 00:14:57.471 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:57.471 "assigned_rate_limits": { 00:14:57.471 "rw_ios_per_sec": 0, 00:14:57.471 "rw_mbytes_per_sec": 0, 00:14:57.471 "r_mbytes_per_sec": 0, 00:14:57.471 "w_mbytes_per_sec": 0 00:14:57.471 }, 00:14:57.471 "claimed": true, 00:14:57.471 "claim_type": "exclusive_write", 00:14:57.471 "zoned": false, 00:14:57.471 "supported_io_types": { 00:14:57.471 "read": true, 00:14:57.471 "write": true, 00:14:57.471 "unmap": true, 00:14:57.471 "write_zeroes": true, 00:14:57.471 "flush": true, 00:14:57.471 "reset": true, 00:14:57.471 "compare": false, 00:14:57.471 "compare_and_write": false, 00:14:57.471 "abort": true, 00:14:57.471 "nvme_admin": false, 00:14:57.471 "nvme_io": false 00:14:57.471 }, 00:14:57.471 "memory_domains": [ 00:14:57.471 { 00:14:57.471 "dma_device_id": "system", 00:14:57.471 "dma_device_type": 1 00:14:57.471 }, 00:14:57.472 { 00:14:57.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.472 "dma_device_type": 2 00:14:57.472 } 00:14:57.472 ], 00:14:57.472 "driver_specific": {} 00:14:57.472 } 00:14:57.472 ] 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 
4 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.472 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.730 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.730 "name": "Existed_Raid", 00:14:57.730 "uuid": "e23b5819-123c-11ef-8c90-4585f0cfab08", 00:14:57.730 "strip_size_kb": 64, 00:14:57.730 "state": "online", 00:14:57.730 "raid_level": "concat", 00:14:57.730 "superblock": false, 00:14:57.730 "num_base_bdevs": 4, 00:14:57.730 "num_base_bdevs_discovered": 4, 00:14:57.730 "num_base_bdevs_operational": 4, 00:14:57.730 "base_bdevs_list": [ 00:14:57.730 { 00:14:57.730 "name": "NewBaseBdev", 00:14:57.730 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:57.730 "is_configured": true, 00:14:57.730 "data_offset": 0, 00:14:57.730 "data_size": 65536 00:14:57.730 }, 00:14:57.730 { 00:14:57.730 "name": "BaseBdev2", 00:14:57.730 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:57.730 "is_configured": true, 00:14:57.730 "data_offset": 0, 00:14:57.730 "data_size": 65536 00:14:57.730 }, 00:14:57.730 { 00:14:57.730 "name": "BaseBdev3", 00:14:57.730 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:57.730 "is_configured": true, 00:14:57.730 "data_offset": 0, 00:14:57.730 "data_size": 65536 00:14:57.730 }, 00:14:57.730 { 00:14:57.730 "name": "BaseBdev4", 00:14:57.730 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:57.730 "is_configured": true, 00:14:57.730 "data_offset": 0, 00:14:57.730 "data_size": 65536 00:14:57.730 } 00:14:57.730 ] 00:14:57.730 }' 00:14:57.730 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.730 21:56:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.296 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:14:58.296 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:58.296 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:58.296 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:58.296 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:58.296 
21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:58.296 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:58.296 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:58.553 [2024-05-14 21:56:58.897135] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.553 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:58.553 "name": "Existed_Raid", 00:14:58.553 "aliases": [ 00:14:58.553 "e23b5819-123c-11ef-8c90-4585f0cfab08" 00:14:58.553 ], 00:14:58.553 "product_name": "Raid Volume", 00:14:58.553 "block_size": 512, 00:14:58.553 "num_blocks": 262144, 00:14:58.553 "uuid": "e23b5819-123c-11ef-8c90-4585f0cfab08", 00:14:58.553 "assigned_rate_limits": { 00:14:58.553 "rw_ios_per_sec": 0, 00:14:58.553 "rw_mbytes_per_sec": 0, 00:14:58.553 "r_mbytes_per_sec": 0, 00:14:58.553 "w_mbytes_per_sec": 0 00:14:58.553 }, 00:14:58.553 "claimed": false, 00:14:58.553 "zoned": false, 00:14:58.553 "supported_io_types": { 00:14:58.553 "read": true, 00:14:58.553 "write": true, 00:14:58.553 "unmap": true, 00:14:58.553 "write_zeroes": true, 00:14:58.553 "flush": true, 00:14:58.553 "reset": true, 00:14:58.553 "compare": false, 00:14:58.553 "compare_and_write": false, 00:14:58.553 "abort": false, 00:14:58.553 "nvme_admin": false, 00:14:58.553 "nvme_io": false 00:14:58.553 }, 00:14:58.553 "memory_domains": [ 00:14:58.553 { 00:14:58.553 "dma_device_id": "system", 00:14:58.553 "dma_device_type": 1 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.553 "dma_device_type": 2 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "dma_device_id": "system", 00:14:58.553 "dma_device_type": 1 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.553 "dma_device_type": 2 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "dma_device_id": "system", 00:14:58.553 "dma_device_type": 1 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.553 "dma_device_type": 2 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "dma_device_id": "system", 00:14:58.553 "dma_device_type": 1 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.553 "dma_device_type": 2 00:14:58.553 } 00:14:58.553 ], 00:14:58.553 "driver_specific": { 00:14:58.553 "raid": { 00:14:58.553 "uuid": "e23b5819-123c-11ef-8c90-4585f0cfab08", 00:14:58.553 "strip_size_kb": 64, 00:14:58.553 "state": "online", 00:14:58.553 "raid_level": "concat", 00:14:58.553 "superblock": false, 00:14:58.553 "num_base_bdevs": 4, 00:14:58.553 "num_base_bdevs_discovered": 4, 00:14:58.553 "num_base_bdevs_operational": 4, 00:14:58.553 "base_bdevs_list": [ 00:14:58.553 { 00:14:58.553 "name": "NewBaseBdev", 00:14:58.553 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:58.553 "is_configured": true, 00:14:58.553 "data_offset": 0, 00:14:58.553 "data_size": 65536 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "name": "BaseBdev2", 00:14:58.553 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:58.553 "is_configured": true, 00:14:58.553 "data_offset": 0, 00:14:58.553 "data_size": 65536 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "name": "BaseBdev3", 00:14:58.553 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:58.553 "is_configured": true, 00:14:58.553 "data_offset": 0, 
00:14:58.553 "data_size": 65536 00:14:58.553 }, 00:14:58.553 { 00:14:58.553 "name": "BaseBdev4", 00:14:58.553 "uuid": "dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:58.553 "is_configured": true, 00:14:58.553 "data_offset": 0, 00:14:58.553 "data_size": 65536 00:14:58.553 } 00:14:58.553 ] 00:14:58.553 } 00:14:58.553 } 00:14:58.553 }' 00:14:58.553 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.553 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:14:58.553 BaseBdev2 00:14:58.553 BaseBdev3 00:14:58.553 BaseBdev4' 00:14:58.553 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:58.553 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:58.553 21:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:58.811 "name": "NewBaseBdev", 00:14:58.811 "aliases": [ 00:14:58.811 "de8412cf-123c-11ef-8c90-4585f0cfab08" 00:14:58.811 ], 00:14:58.811 "product_name": "Malloc disk", 00:14:58.811 "block_size": 512, 00:14:58.811 "num_blocks": 65536, 00:14:58.811 "uuid": "de8412cf-123c-11ef-8c90-4585f0cfab08", 00:14:58.811 "assigned_rate_limits": { 00:14:58.811 "rw_ios_per_sec": 0, 00:14:58.811 "rw_mbytes_per_sec": 0, 00:14:58.811 "r_mbytes_per_sec": 0, 00:14:58.811 "w_mbytes_per_sec": 0 00:14:58.811 }, 00:14:58.811 "claimed": true, 00:14:58.811 "claim_type": "exclusive_write", 00:14:58.811 "zoned": false, 00:14:58.811 "supported_io_types": { 00:14:58.811 "read": true, 00:14:58.811 "write": true, 00:14:58.811 "unmap": true, 00:14:58.811 "write_zeroes": true, 00:14:58.811 "flush": true, 00:14:58.811 "reset": true, 00:14:58.811 "compare": false, 00:14:58.811 "compare_and_write": false, 00:14:58.811 "abort": true, 00:14:58.811 "nvme_admin": false, 00:14:58.811 "nvme_io": false 00:14:58.811 }, 00:14:58.811 "memory_domains": [ 00:14:58.811 { 00:14:58.811 "dma_device_id": "system", 00:14:58.811 "dma_device_type": 1 00:14:58.811 }, 00:14:58.811 { 00:14:58.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.811 "dma_device_type": 2 00:14:58.811 } 00:14:58.811 ], 00:14:58.811 "driver_specific": {} 00:14:58.811 }' 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 
00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:58.811 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:59.068 "name": "BaseBdev2", 00:14:59.068 "aliases": [ 00:14:59.068 "dc1448a5-123c-11ef-8c90-4585f0cfab08" 00:14:59.068 ], 00:14:59.068 "product_name": "Malloc disk", 00:14:59.068 "block_size": 512, 00:14:59.068 "num_blocks": 65536, 00:14:59.068 "uuid": "dc1448a5-123c-11ef-8c90-4585f0cfab08", 00:14:59.068 "assigned_rate_limits": { 00:14:59.068 "rw_ios_per_sec": 0, 00:14:59.068 "rw_mbytes_per_sec": 0, 00:14:59.068 "r_mbytes_per_sec": 0, 00:14:59.068 "w_mbytes_per_sec": 0 00:14:59.068 }, 00:14:59.068 "claimed": true, 00:14:59.068 "claim_type": "exclusive_write", 00:14:59.068 "zoned": false, 00:14:59.068 "supported_io_types": { 00:14:59.068 "read": true, 00:14:59.068 "write": true, 00:14:59.068 "unmap": true, 00:14:59.068 "write_zeroes": true, 00:14:59.068 "flush": true, 00:14:59.068 "reset": true, 00:14:59.068 "compare": false, 00:14:59.068 "compare_and_write": false, 00:14:59.068 "abort": true, 00:14:59.068 "nvme_admin": false, 00:14:59.068 "nvme_io": false 00:14:59.068 }, 00:14:59.068 "memory_domains": [ 00:14:59.068 { 00:14:59.068 "dma_device_id": "system", 00:14:59.068 "dma_device_type": 1 00:14:59.068 }, 00:14:59.068 { 00:14:59.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.068 "dma_device_type": 2 00:14:59.068 } 00:14:59.068 ], 00:14:59.068 "driver_specific": {} 00:14:59.068 }' 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:14:59.068 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:59.326 "name": "BaseBdev3", 00:14:59.326 "aliases": [ 00:14:59.326 "dc88dc56-123c-11ef-8c90-4585f0cfab08" 00:14:59.326 ], 00:14:59.326 "product_name": "Malloc disk", 00:14:59.326 "block_size": 512, 00:14:59.326 "num_blocks": 65536, 00:14:59.326 "uuid": "dc88dc56-123c-11ef-8c90-4585f0cfab08", 00:14:59.326 "assigned_rate_limits": { 00:14:59.326 "rw_ios_per_sec": 0, 00:14:59.326 "rw_mbytes_per_sec": 0, 00:14:59.326 "r_mbytes_per_sec": 0, 00:14:59.326 "w_mbytes_per_sec": 0 00:14:59.326 }, 00:14:59.326 "claimed": true, 00:14:59.326 "claim_type": "exclusive_write", 00:14:59.326 "zoned": false, 00:14:59.326 "supported_io_types": { 00:14:59.326 "read": true, 00:14:59.326 "write": true, 00:14:59.326 "unmap": true, 00:14:59.326 "write_zeroes": true, 00:14:59.326 "flush": true, 00:14:59.326 "reset": true, 00:14:59.326 "compare": false, 00:14:59.326 "compare_and_write": false, 00:14:59.326 "abort": true, 00:14:59.326 "nvme_admin": false, 00:14:59.326 "nvme_io": false 00:14:59.326 }, 00:14:59.326 "memory_domains": [ 00:14:59.326 { 00:14:59.326 "dma_device_id": "system", 00:14:59.326 "dma_device_type": 1 00:14:59.326 }, 00:14:59.326 { 00:14:59.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.326 "dma_device_type": 2 00:14:59.326 } 00:14:59.326 ], 00:14:59.326 "driver_specific": {} 00:14:59.326 }' 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:59.326 21:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:59.584 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:59.584 "name": "BaseBdev4", 00:14:59.584 "aliases": [ 00:14:59.584 "dcf61d68-123c-11ef-8c90-4585f0cfab08" 00:14:59.584 ], 00:14:59.584 "product_name": "Malloc disk", 00:14:59.584 "block_size": 512, 00:14:59.584 "num_blocks": 65536, 00:14:59.584 "uuid": 
"dcf61d68-123c-11ef-8c90-4585f0cfab08", 00:14:59.584 "assigned_rate_limits": { 00:14:59.584 "rw_ios_per_sec": 0, 00:14:59.584 "rw_mbytes_per_sec": 0, 00:14:59.584 "r_mbytes_per_sec": 0, 00:14:59.584 "w_mbytes_per_sec": 0 00:14:59.584 }, 00:14:59.584 "claimed": true, 00:14:59.584 "claim_type": "exclusive_write", 00:14:59.584 "zoned": false, 00:14:59.584 "supported_io_types": { 00:14:59.584 "read": true, 00:14:59.584 "write": true, 00:14:59.584 "unmap": true, 00:14:59.584 "write_zeroes": true, 00:14:59.584 "flush": true, 00:14:59.584 "reset": true, 00:14:59.584 "compare": false, 00:14:59.584 "compare_and_write": false, 00:14:59.584 "abort": true, 00:14:59.584 "nvme_admin": false, 00:14:59.584 "nvme_io": false 00:14:59.584 }, 00:14:59.584 "memory_domains": [ 00:14:59.584 { 00:14:59.584 "dma_device_id": "system", 00:14:59.584 "dma_device_type": 1 00:14:59.584 }, 00:14:59.584 { 00:14:59.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.584 "dma_device_type": 2 00:14:59.584 } 00:14:59.584 ], 00:14:59.584 "driver_specific": {} 00:14:59.584 }' 00:14:59.584 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:59.841 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:59.842 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:59.842 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:59.842 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.100 [2024-05-14 21:57:00.441121] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.100 [2024-05-14 21:57:00.441144] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.100 [2024-05-14 21:57:00.441165] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.100 [2024-05-14 21:57:00.441180] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.100 [2024-05-14 21:57:00.441185] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b587300 name Existed_Raid, state offline 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 59216 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 59216 ']' 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 59216 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@951 -- # uname 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 59216 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:15:00.100 killing process with pid 59216 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59216' 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 59216 00:15:00.100 [2024-05-14 21:57:00.467152] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 59216 00:15:00.100 [2024-05-14 21:57:00.489721] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:15:00.100 ************************************ 00:15:00.100 END TEST raid_state_function_test 00:15:00.100 ************************************ 00:15:00.100 00:15:00.100 real 0m27.292s 00:15:00.100 user 0m49.997s 00:15:00.100 sys 0m3.715s 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:00.100 21:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.358 21:57:00 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:00.358 21:57:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:00.358 21:57:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:00.358 21:57:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.358 ************************************ 00:15:00.358 START TEST raid_state_function_test_sb 00:15:00.358 ************************************ 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:15:00.358 21:57:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=60034 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 60034' 00:15:00.358 Process raid pid: 60034 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 60034 /var/tmp/spdk-raid.sock 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 60034 ']' 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:00.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:00.358 21:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.358 [2024-05-14 21:57:00.725192] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:15:00.358 [2024-05-14 21:57:00.725342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:00.923 EAL: TSC is not safe to use in SMP mode 00:15:00.923 EAL: TSC is not invariant 00:15:00.923 [2024-05-14 21:57:01.243284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.923 [2024-05-14 21:57:01.328058] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:00.923 [2024-05-14 21:57:01.330263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.923 [2024-05-14 21:57:01.331013] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.923 [2024-05-14 21:57:01.331028] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.181 21:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:01.181 21:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:01.181 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:01.439 [2024-05-14 21:57:01.958517] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.439 [2024-05-14 21:57:01.958572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.439 [2024-05-14 21:57:01.958578] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.439 [2024-05-14 21:57:01.958586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.439 [2024-05-14 21:57:01.958590] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.439 [2024-05-14 21:57:01.958597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.440 [2024-05-14 21:57:01.958601] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:01.440 [2024-05-14 21:57:01.958608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.440 
21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.440 21:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.697 21:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.697 "name": "Existed_Raid", 00:15:01.697 "uuid": "e4e8f659-123c-11ef-8c90-4585f0cfab08", 00:15:01.697 "strip_size_kb": 64, 00:15:01.697 "state": "configuring", 00:15:01.697 "raid_level": "concat", 00:15:01.697 "superblock": true, 00:15:01.697 "num_base_bdevs": 4, 00:15:01.697 "num_base_bdevs_discovered": 0, 00:15:01.697 "num_base_bdevs_operational": 4, 00:15:01.697 "base_bdevs_list": [ 00:15:01.697 { 00:15:01.697 "name": "BaseBdev1", 00:15:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.697 "is_configured": false, 00:15:01.697 "data_offset": 0, 00:15:01.697 "data_size": 0 00:15:01.697 }, 00:15:01.697 { 00:15:01.697 "name": "BaseBdev2", 00:15:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.697 "is_configured": false, 00:15:01.697 "data_offset": 0, 00:15:01.697 "data_size": 0 00:15:01.697 }, 00:15:01.697 { 00:15:01.697 "name": "BaseBdev3", 00:15:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.697 "is_configured": false, 00:15:01.697 "data_offset": 0, 00:15:01.697 "data_size": 0 00:15:01.697 }, 00:15:01.697 { 00:15:01.697 "name": "BaseBdev4", 00:15:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.697 "is_configured": false, 00:15:01.697 "data_offset": 0, 00:15:01.697 "data_size": 0 00:15:01.697 } 00:15:01.697 ] 00:15:01.697 }' 00:15:01.697 21:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.697 21:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.265 21:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.265 [2024-05-14 21:57:02.826507] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.265 [2024-05-14 21:57:02.826539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cb7b300 name Existed_Raid, state configuring 00:15:02.265 21:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:02.523 [2024-05-14 21:57:03.066527] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.523 [2024-05-14 21:57:03.066589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.523 [2024-05-14 21:57:03.066594] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.523 [2024-05-14 21:57:03.066602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.523 
[2024-05-14 21:57:03.066605] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:02.523 [2024-05-14 21:57:03.066612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:02.523 [2024-05-14 21:57:03.066616] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:02.523 [2024-05-14 21:57:03.066623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:02.523 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.788 [2024-05-14 21:57:03.351510] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.788 BaseBdev1 00:15:02.788 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:02.788 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:02.788 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:02.788 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:02.788 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:02.788 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:02.788 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.057 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:03.621 [ 00:15:03.621 { 00:15:03.621 "name": "BaseBdev1", 00:15:03.621 "aliases": [ 00:15:03.621 "e5bd5dd9-123c-11ef-8c90-4585f0cfab08" 00:15:03.621 ], 00:15:03.621 "product_name": "Malloc disk", 00:15:03.621 "block_size": 512, 00:15:03.621 "num_blocks": 65536, 00:15:03.621 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:03.621 "assigned_rate_limits": { 00:15:03.621 "rw_ios_per_sec": 0, 00:15:03.621 "rw_mbytes_per_sec": 0, 00:15:03.621 "r_mbytes_per_sec": 0, 00:15:03.621 "w_mbytes_per_sec": 0 00:15:03.621 }, 00:15:03.621 "claimed": true, 00:15:03.621 "claim_type": "exclusive_write", 00:15:03.621 "zoned": false, 00:15:03.621 "supported_io_types": { 00:15:03.621 "read": true, 00:15:03.621 "write": true, 00:15:03.621 "unmap": true, 00:15:03.621 "write_zeroes": true, 00:15:03.621 "flush": true, 00:15:03.621 "reset": true, 00:15:03.621 "compare": false, 00:15:03.621 "compare_and_write": false, 00:15:03.621 "abort": true, 00:15:03.621 "nvme_admin": false, 00:15:03.621 "nvme_io": false 00:15:03.621 }, 00:15:03.621 "memory_domains": [ 00:15:03.621 { 00:15:03.621 "dma_device_id": "system", 00:15:03.621 "dma_device_type": 1 00:15:03.621 }, 00:15:03.621 { 00:15:03.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.621 "dma_device_type": 2 00:15:03.621 } 00:15:03.621 ], 00:15:03.621 "driver_specific": {} 00:15:03.621 } 00:15:03.621 ] 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.621 21:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.621 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.621 "name": "Existed_Raid", 00:15:03.621 "uuid": "e59207fb-123c-11ef-8c90-4585f0cfab08", 00:15:03.621 "strip_size_kb": 64, 00:15:03.621 "state": "configuring", 00:15:03.621 "raid_level": "concat", 00:15:03.621 "superblock": true, 00:15:03.621 "num_base_bdevs": 4, 00:15:03.621 "num_base_bdevs_discovered": 1, 00:15:03.621 "num_base_bdevs_operational": 4, 00:15:03.621 "base_bdevs_list": [ 00:15:03.621 { 00:15:03.621 "name": "BaseBdev1", 00:15:03.621 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:03.621 "is_configured": true, 00:15:03.621 "data_offset": 2048, 00:15:03.621 "data_size": 63488 00:15:03.621 }, 00:15:03.621 { 00:15:03.621 "name": "BaseBdev2", 00:15:03.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.621 "is_configured": false, 00:15:03.621 "data_offset": 0, 00:15:03.621 "data_size": 0 00:15:03.621 }, 00:15:03.621 { 00:15:03.621 "name": "BaseBdev3", 00:15:03.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.621 "is_configured": false, 00:15:03.621 "data_offset": 0, 00:15:03.621 "data_size": 0 00:15:03.622 }, 00:15:03.622 { 00:15:03.622 "name": "BaseBdev4", 00:15:03.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.622 "is_configured": false, 00:15:03.622 "data_offset": 0, 00:15:03.622 "data_size": 0 00:15:03.622 } 00:15:03.622 ] 00:15:03.622 }' 00:15:03.622 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.622 21:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.188 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.188 [2024-05-14 21:57:04.710547] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.188 [2024-05-14 21:57:04.710577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cb7b300 name Existed_Raid, state configuring 00:15:04.188 21:57:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:04.447 [2024-05-14 21:57:04.970573] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.447 [2024-05-14 21:57:04.971410] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.447 [2024-05-14 21:57:04.971453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.447 [2024-05-14 21:57:04.971458] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.447 [2024-05-14 21:57:04.971466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.448 [2024-05-14 21:57:04.971470] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:04.448 [2024-05-14 21:57:04.971477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.448 21:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.706 21:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.706 "name": "Existed_Raid", 00:15:04.706 "uuid": "e6b490a0-123c-11ef-8c90-4585f0cfab08", 00:15:04.706 "strip_size_kb": 64, 00:15:04.706 "state": "configuring", 00:15:04.706 "raid_level": "concat", 00:15:04.706 "superblock": true, 00:15:04.706 "num_base_bdevs": 4, 00:15:04.706 "num_base_bdevs_discovered": 1, 00:15:04.706 "num_base_bdevs_operational": 4, 00:15:04.706 "base_bdevs_list": [ 00:15:04.706 { 00:15:04.706 "name": "BaseBdev1", 00:15:04.706 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:04.706 "is_configured": true, 00:15:04.706 "data_offset": 2048, 00:15:04.706 
"data_size": 63488 00:15:04.706 }, 00:15:04.706 { 00:15:04.706 "name": "BaseBdev2", 00:15:04.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.706 "is_configured": false, 00:15:04.706 "data_offset": 0, 00:15:04.706 "data_size": 0 00:15:04.706 }, 00:15:04.706 { 00:15:04.706 "name": "BaseBdev3", 00:15:04.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.706 "is_configured": false, 00:15:04.706 "data_offset": 0, 00:15:04.706 "data_size": 0 00:15:04.706 }, 00:15:04.706 { 00:15:04.706 "name": "BaseBdev4", 00:15:04.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.706 "is_configured": false, 00:15:04.706 "data_offset": 0, 00:15:04.706 "data_size": 0 00:15:04.706 } 00:15:04.706 ] 00:15:04.706 }' 00:15:04.706 21:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.706 21:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.278 [2024-05-14 21:57:05.826721] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.278 BaseBdev2 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:05.278 21:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.537 21:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.795 [ 00:15:05.795 { 00:15:05.795 "name": "BaseBdev2", 00:15:05.795 "aliases": [ 00:15:05.795 "e7372ebd-123c-11ef-8c90-4585f0cfab08" 00:15:05.795 ], 00:15:05.795 "product_name": "Malloc disk", 00:15:05.795 "block_size": 512, 00:15:05.795 "num_blocks": 65536, 00:15:05.795 "uuid": "e7372ebd-123c-11ef-8c90-4585f0cfab08", 00:15:05.795 "assigned_rate_limits": { 00:15:05.795 "rw_ios_per_sec": 0, 00:15:05.795 "rw_mbytes_per_sec": 0, 00:15:05.795 "r_mbytes_per_sec": 0, 00:15:05.795 "w_mbytes_per_sec": 0 00:15:05.795 }, 00:15:05.795 "claimed": true, 00:15:05.795 "claim_type": "exclusive_write", 00:15:05.795 "zoned": false, 00:15:05.795 "supported_io_types": { 00:15:05.795 "read": true, 00:15:05.795 "write": true, 00:15:05.795 "unmap": true, 00:15:05.795 "write_zeroes": true, 00:15:05.795 "flush": true, 00:15:05.795 "reset": true, 00:15:05.795 "compare": false, 00:15:05.795 "compare_and_write": false, 00:15:05.795 "abort": true, 00:15:05.795 "nvme_admin": false, 00:15:05.795 "nvme_io": false 00:15:05.795 }, 00:15:05.795 "memory_domains": [ 00:15:05.795 { 00:15:05.795 "dma_device_id": "system", 00:15:05.795 "dma_device_type": 1 00:15:05.795 }, 00:15:05.795 
{ 00:15:05.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.795 "dma_device_type": 2 00:15:05.795 } 00:15:05.795 ], 00:15:05.795 "driver_specific": {} 00:15:05.795 } 00:15:05.795 ] 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.795 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.052 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.052 "name": "Existed_Raid", 00:15:06.053 "uuid": "e6b490a0-123c-11ef-8c90-4585f0cfab08", 00:15:06.053 "strip_size_kb": 64, 00:15:06.053 "state": "configuring", 00:15:06.053 "raid_level": "concat", 00:15:06.053 "superblock": true, 00:15:06.053 "num_base_bdevs": 4, 00:15:06.053 "num_base_bdevs_discovered": 2, 00:15:06.053 "num_base_bdevs_operational": 4, 00:15:06.053 "base_bdevs_list": [ 00:15:06.053 { 00:15:06.053 "name": "BaseBdev1", 00:15:06.053 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:06.053 "is_configured": true, 00:15:06.053 "data_offset": 2048, 00:15:06.053 "data_size": 63488 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "name": "BaseBdev2", 00:15:06.053 "uuid": "e7372ebd-123c-11ef-8c90-4585f0cfab08", 00:15:06.053 "is_configured": true, 00:15:06.053 "data_offset": 2048, 00:15:06.053 "data_size": 63488 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "name": "BaseBdev3", 00:15:06.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.053 "is_configured": false, 00:15:06.053 "data_offset": 0, 00:15:06.053 "data_size": 0 00:15:06.053 }, 00:15:06.053 { 00:15:06.053 "name": "BaseBdev4", 00:15:06.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.053 "is_configured": false, 00:15:06.053 "data_offset": 0, 00:15:06.053 "data_size": 0 00:15:06.053 } 00:15:06.053 ] 00:15:06.053 }' 00:15:06.053 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:15:06.053 21:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.311 21:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:06.569 [2024-05-14 21:57:07.154706] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.828 BaseBdev3 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.828 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:07.088 [ 00:15:07.088 { 00:15:07.088 "name": "BaseBdev3", 00:15:07.088 "aliases": [ 00:15:07.088 "e801d276-123c-11ef-8c90-4585f0cfab08" 00:15:07.088 ], 00:15:07.088 "product_name": "Malloc disk", 00:15:07.088 "block_size": 512, 00:15:07.088 "num_blocks": 65536, 00:15:07.088 "uuid": "e801d276-123c-11ef-8c90-4585f0cfab08", 00:15:07.088 "assigned_rate_limits": { 00:15:07.088 "rw_ios_per_sec": 0, 00:15:07.088 "rw_mbytes_per_sec": 0, 00:15:07.088 "r_mbytes_per_sec": 0, 00:15:07.088 "w_mbytes_per_sec": 0 00:15:07.088 }, 00:15:07.088 "claimed": true, 00:15:07.088 "claim_type": "exclusive_write", 00:15:07.088 "zoned": false, 00:15:07.088 "supported_io_types": { 00:15:07.088 "read": true, 00:15:07.088 "write": true, 00:15:07.088 "unmap": true, 00:15:07.088 "write_zeroes": true, 00:15:07.088 "flush": true, 00:15:07.088 "reset": true, 00:15:07.088 "compare": false, 00:15:07.088 "compare_and_write": false, 00:15:07.088 "abort": true, 00:15:07.088 "nvme_admin": false, 00:15:07.088 "nvme_io": false 00:15:07.088 }, 00:15:07.088 "memory_domains": [ 00:15:07.088 { 00:15:07.088 "dma_device_id": "system", 00:15:07.088 "dma_device_type": 1 00:15:07.088 }, 00:15:07.088 { 00:15:07.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.088 "dma_device_type": 2 00:15:07.088 } 00:15:07.088 ], 00:15:07.088 "driver_specific": {} 00:15:07.088 } 00:15:07.088 ] 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.088 21:57:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.088 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.347 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.347 "name": "Existed_Raid", 00:15:07.347 "uuid": "e6b490a0-123c-11ef-8c90-4585f0cfab08", 00:15:07.347 "strip_size_kb": 64, 00:15:07.347 "state": "configuring", 00:15:07.347 "raid_level": "concat", 00:15:07.347 "superblock": true, 00:15:07.347 "num_base_bdevs": 4, 00:15:07.347 "num_base_bdevs_discovered": 3, 00:15:07.347 "num_base_bdevs_operational": 4, 00:15:07.347 "base_bdevs_list": [ 00:15:07.347 { 00:15:07.347 "name": "BaseBdev1", 00:15:07.347 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:07.347 "is_configured": true, 00:15:07.347 "data_offset": 2048, 00:15:07.347 "data_size": 63488 00:15:07.347 }, 00:15:07.347 { 00:15:07.347 "name": "BaseBdev2", 00:15:07.347 "uuid": "e7372ebd-123c-11ef-8c90-4585f0cfab08", 00:15:07.347 "is_configured": true, 00:15:07.347 "data_offset": 2048, 00:15:07.347 "data_size": 63488 00:15:07.347 }, 00:15:07.347 { 00:15:07.347 "name": "BaseBdev3", 00:15:07.347 "uuid": "e801d276-123c-11ef-8c90-4585f0cfab08", 00:15:07.347 "is_configured": true, 00:15:07.347 "data_offset": 2048, 00:15:07.347 "data_size": 63488 00:15:07.347 }, 00:15:07.347 { 00:15:07.347 "name": "BaseBdev4", 00:15:07.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.347 "is_configured": false, 00:15:07.347 "data_offset": 0, 00:15:07.347 "data_size": 0 00:15:07.347 } 00:15:07.347 ] 00:15:07.347 }' 00:15:07.347 21:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.347 21:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:07.914 [2024-05-14 21:57:08.414757] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:07.914 [2024-05-14 21:57:08.414833] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cb7b300 00:15:07.914 [2024-05-14 21:57:08.414838] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:07.914 [2024-05-14 21:57:08.414859] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82cbd9ec0 00:15:07.914 [2024-05-14 21:57:08.414913] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cb7b300 00:15:07.914 [2024-05-14 21:57:08.414918] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cb7b300 00:15:07.914 [2024-05-14 21:57:08.414939] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.914 BaseBdev4 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:07.914 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.172 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:08.431 [ 00:15:08.431 { 00:15:08.431 "name": "BaseBdev4", 00:15:08.431 "aliases": [ 00:15:08.431 "e8c21670-123c-11ef-8c90-4585f0cfab08" 00:15:08.431 ], 00:15:08.431 "product_name": "Malloc disk", 00:15:08.431 "block_size": 512, 00:15:08.431 "num_blocks": 65536, 00:15:08.431 "uuid": "e8c21670-123c-11ef-8c90-4585f0cfab08", 00:15:08.431 "assigned_rate_limits": { 00:15:08.431 "rw_ios_per_sec": 0, 00:15:08.431 "rw_mbytes_per_sec": 0, 00:15:08.431 "r_mbytes_per_sec": 0, 00:15:08.431 "w_mbytes_per_sec": 0 00:15:08.431 }, 00:15:08.431 "claimed": true, 00:15:08.431 "claim_type": "exclusive_write", 00:15:08.431 "zoned": false, 00:15:08.431 "supported_io_types": { 00:15:08.431 "read": true, 00:15:08.431 "write": true, 00:15:08.431 "unmap": true, 00:15:08.431 "write_zeroes": true, 00:15:08.431 "flush": true, 00:15:08.431 "reset": true, 00:15:08.431 "compare": false, 00:15:08.431 "compare_and_write": false, 00:15:08.431 "abort": true, 00:15:08.431 "nvme_admin": false, 00:15:08.431 "nvme_io": false 00:15:08.431 }, 00:15:08.431 "memory_domains": [ 00:15:08.431 { 00:15:08.431 "dma_device_id": "system", 00:15:08.431 "dma_device_type": 1 00:15:08.431 }, 00:15:08.431 { 00:15:08.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.431 "dma_device_type": 2 00:15:08.431 } 00:15:08.431 ], 00:15:08.431 "driver_specific": {} 00:15:08.431 } 00:15:08.431 ] 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.431 21:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.690 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.690 "name": "Existed_Raid", 00:15:08.690 "uuid": "e6b490a0-123c-11ef-8c90-4585f0cfab08", 00:15:08.690 "strip_size_kb": 64, 00:15:08.690 "state": "online", 00:15:08.690 "raid_level": "concat", 00:15:08.690 "superblock": true, 00:15:08.690 "num_base_bdevs": 4, 00:15:08.690 "num_base_bdevs_discovered": 4, 00:15:08.690 "num_base_bdevs_operational": 4, 00:15:08.690 "base_bdevs_list": [ 00:15:08.690 { 00:15:08.690 "name": "BaseBdev1", 00:15:08.690 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:08.690 "is_configured": true, 00:15:08.690 "data_offset": 2048, 00:15:08.690 "data_size": 63488 00:15:08.690 }, 00:15:08.690 { 00:15:08.690 "name": "BaseBdev2", 00:15:08.690 "uuid": "e7372ebd-123c-11ef-8c90-4585f0cfab08", 00:15:08.690 "is_configured": true, 00:15:08.690 "data_offset": 2048, 00:15:08.690 "data_size": 63488 00:15:08.690 }, 00:15:08.690 { 00:15:08.690 "name": "BaseBdev3", 00:15:08.690 "uuid": "e801d276-123c-11ef-8c90-4585f0cfab08", 00:15:08.690 "is_configured": true, 00:15:08.690 "data_offset": 2048, 00:15:08.690 "data_size": 63488 00:15:08.690 }, 00:15:08.690 { 00:15:08.690 "name": "BaseBdev4", 00:15:08.690 "uuid": "e8c21670-123c-11ef-8c90-4585f0cfab08", 00:15:08.690 "is_configured": true, 00:15:08.690 "data_offset": 2048, 00:15:08.690 "data_size": 63488 00:15:08.690 } 00:15:08.690 ] 00:15:08.690 }' 00:15:08.690 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.690 21:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.949 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:08.949 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:08.949 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:08.949 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:08.949 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:15:08.949 21:57:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:08.949 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:09.208 [2024-05-14 21:57:09.706686] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.208 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:09.208 "name": "Existed_Raid", 00:15:09.208 "aliases": [ 00:15:09.208 "e6b490a0-123c-11ef-8c90-4585f0cfab08" 00:15:09.208 ], 00:15:09.208 "product_name": "Raid Volume", 00:15:09.208 "block_size": 512, 00:15:09.208 "num_blocks": 253952, 00:15:09.208 "uuid": "e6b490a0-123c-11ef-8c90-4585f0cfab08", 00:15:09.208 "assigned_rate_limits": { 00:15:09.208 "rw_ios_per_sec": 0, 00:15:09.208 "rw_mbytes_per_sec": 0, 00:15:09.208 "r_mbytes_per_sec": 0, 00:15:09.208 "w_mbytes_per_sec": 0 00:15:09.208 }, 00:15:09.208 "claimed": false, 00:15:09.208 "zoned": false, 00:15:09.208 "supported_io_types": { 00:15:09.208 "read": true, 00:15:09.208 "write": true, 00:15:09.208 "unmap": true, 00:15:09.208 "write_zeroes": true, 00:15:09.208 "flush": true, 00:15:09.208 "reset": true, 00:15:09.208 "compare": false, 00:15:09.208 "compare_and_write": false, 00:15:09.208 "abort": false, 00:15:09.208 "nvme_admin": false, 00:15:09.208 "nvme_io": false 00:15:09.208 }, 00:15:09.208 "memory_domains": [ 00:15:09.208 { 00:15:09.208 "dma_device_id": "system", 00:15:09.208 "dma_device_type": 1 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.208 "dma_device_type": 2 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "system", 00:15:09.208 "dma_device_type": 1 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.208 "dma_device_type": 2 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "system", 00:15:09.208 "dma_device_type": 1 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.208 "dma_device_type": 2 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "system", 00:15:09.208 "dma_device_type": 1 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.208 "dma_device_type": 2 00:15:09.208 } 00:15:09.208 ], 00:15:09.208 "driver_specific": { 00:15:09.208 "raid": { 00:15:09.208 "uuid": "e6b490a0-123c-11ef-8c90-4585f0cfab08", 00:15:09.208 "strip_size_kb": 64, 00:15:09.208 "state": "online", 00:15:09.208 "raid_level": "concat", 00:15:09.208 "superblock": true, 00:15:09.208 "num_base_bdevs": 4, 00:15:09.208 "num_base_bdevs_discovered": 4, 00:15:09.208 "num_base_bdevs_operational": 4, 00:15:09.208 "base_bdevs_list": [ 00:15:09.208 { 00:15:09.208 "name": "BaseBdev1", 00:15:09.208 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:09.208 "is_configured": true, 00:15:09.208 "data_offset": 2048, 00:15:09.208 "data_size": 63488 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "name": "BaseBdev2", 00:15:09.208 "uuid": "e7372ebd-123c-11ef-8c90-4585f0cfab08", 00:15:09.208 "is_configured": true, 00:15:09.208 "data_offset": 2048, 00:15:09.208 "data_size": 63488 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "name": "BaseBdev3", 00:15:09.208 "uuid": "e801d276-123c-11ef-8c90-4585f0cfab08", 00:15:09.208 "is_configured": true, 00:15:09.208 "data_offset": 2048, 00:15:09.208 "data_size": 63488 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "name": "BaseBdev4", 
00:15:09.208 "uuid": "e8c21670-123c-11ef-8c90-4585f0cfab08", 00:15:09.208 "is_configured": true, 00:15:09.208 "data_offset": 2048, 00:15:09.208 "data_size": 63488 00:15:09.208 } 00:15:09.208 ] 00:15:09.208 } 00:15:09.208 } 00:15:09.208 }' 00:15:09.208 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.208 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:15:09.208 BaseBdev2 00:15:09.208 BaseBdev3 00:15:09.208 BaseBdev4' 00:15:09.208 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:09.208 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:09.208 21:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:09.466 "name": "BaseBdev1", 00:15:09.466 "aliases": [ 00:15:09.466 "e5bd5dd9-123c-11ef-8c90-4585f0cfab08" 00:15:09.466 ], 00:15:09.466 "product_name": "Malloc disk", 00:15:09.466 "block_size": 512, 00:15:09.466 "num_blocks": 65536, 00:15:09.466 "uuid": "e5bd5dd9-123c-11ef-8c90-4585f0cfab08", 00:15:09.466 "assigned_rate_limits": { 00:15:09.466 "rw_ios_per_sec": 0, 00:15:09.466 "rw_mbytes_per_sec": 0, 00:15:09.466 "r_mbytes_per_sec": 0, 00:15:09.466 "w_mbytes_per_sec": 0 00:15:09.466 }, 00:15:09.466 "claimed": true, 00:15:09.466 "claim_type": "exclusive_write", 00:15:09.466 "zoned": false, 00:15:09.466 "supported_io_types": { 00:15:09.466 "read": true, 00:15:09.466 "write": true, 00:15:09.466 "unmap": true, 00:15:09.466 "write_zeroes": true, 00:15:09.466 "flush": true, 00:15:09.466 "reset": true, 00:15:09.466 "compare": false, 00:15:09.466 "compare_and_write": false, 00:15:09.466 "abort": true, 00:15:09.466 "nvme_admin": false, 00:15:09.466 "nvme_io": false 00:15:09.466 }, 00:15:09.466 "memory_domains": [ 00:15:09.466 { 00:15:09.466 "dma_device_id": "system", 00:15:09.466 "dma_device_type": 1 00:15:09.466 }, 00:15:09.466 { 00:15:09.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.466 "dma_device_type": 2 00:15:09.466 } 00:15:09.466 ], 00:15:09.466 "driver_specific": {} 00:15:09.466 }' 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:09.466 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:09.724 21:57:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:09.724 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:09.724 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:09.724 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:09.724 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:09.724 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:09.724 "name": "BaseBdev2", 00:15:09.724 "aliases": [ 00:15:09.724 "e7372ebd-123c-11ef-8c90-4585f0cfab08" 00:15:09.724 ], 00:15:09.724 "product_name": "Malloc disk", 00:15:09.724 "block_size": 512, 00:15:09.724 "num_blocks": 65536, 00:15:09.724 "uuid": "e7372ebd-123c-11ef-8c90-4585f0cfab08", 00:15:09.724 "assigned_rate_limits": { 00:15:09.724 "rw_ios_per_sec": 0, 00:15:09.724 "rw_mbytes_per_sec": 0, 00:15:09.724 "r_mbytes_per_sec": 0, 00:15:09.724 "w_mbytes_per_sec": 0 00:15:09.724 }, 00:15:09.724 "claimed": true, 00:15:09.724 "claim_type": "exclusive_write", 00:15:09.724 "zoned": false, 00:15:09.724 "supported_io_types": { 00:15:09.724 "read": true, 00:15:09.724 "write": true, 00:15:09.724 "unmap": true, 00:15:09.724 "write_zeroes": true, 00:15:09.724 "flush": true, 00:15:09.724 "reset": true, 00:15:09.724 "compare": false, 00:15:09.724 "compare_and_write": false, 00:15:09.724 "abort": true, 00:15:09.724 "nvme_admin": false, 00:15:09.724 "nvme_io": false 00:15:09.724 }, 00:15:09.724 "memory_domains": [ 00:15:09.724 { 00:15:09.724 "dma_device_id": "system", 00:15:09.724 "dma_device_type": 1 00:15:09.724 }, 00:15:09.724 { 00:15:09.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.724 "dma_device_type": 2 00:15:09.724 } 00:15:09.724 ], 00:15:09.724 "driver_specific": {} 00:15:09.724 }' 00:15:09.724 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:15:09.982 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:10.240 "name": "BaseBdev3", 00:15:10.240 "aliases": [ 00:15:10.240 "e801d276-123c-11ef-8c90-4585f0cfab08" 00:15:10.240 ], 00:15:10.240 "product_name": "Malloc disk", 00:15:10.240 "block_size": 512, 00:15:10.240 "num_blocks": 65536, 00:15:10.240 "uuid": "e801d276-123c-11ef-8c90-4585f0cfab08", 00:15:10.240 "assigned_rate_limits": { 00:15:10.240 "rw_ios_per_sec": 0, 00:15:10.240 "rw_mbytes_per_sec": 0, 00:15:10.240 "r_mbytes_per_sec": 0, 00:15:10.240 "w_mbytes_per_sec": 0 00:15:10.240 }, 00:15:10.240 "claimed": true, 00:15:10.240 "claim_type": "exclusive_write", 00:15:10.240 "zoned": false, 00:15:10.240 "supported_io_types": { 00:15:10.240 "read": true, 00:15:10.240 "write": true, 00:15:10.240 "unmap": true, 00:15:10.240 "write_zeroes": true, 00:15:10.240 "flush": true, 00:15:10.240 "reset": true, 00:15:10.240 "compare": false, 00:15:10.240 "compare_and_write": false, 00:15:10.240 "abort": true, 00:15:10.240 "nvme_admin": false, 00:15:10.240 "nvme_io": false 00:15:10.240 }, 00:15:10.240 "memory_domains": [ 00:15:10.240 { 00:15:10.240 "dma_device_id": "system", 00:15:10.240 "dma_device_type": 1 00:15:10.240 }, 00:15:10.240 { 00:15:10.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.240 "dma_device_type": 2 00:15:10.240 } 00:15:10.240 ], 00:15:10.240 "driver_specific": {} 00:15:10.240 }' 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:10.240 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:10.500 "name": "BaseBdev4", 00:15:10.500 "aliases": [ 00:15:10.500 "e8c21670-123c-11ef-8c90-4585f0cfab08" 00:15:10.500 ], 00:15:10.500 "product_name": "Malloc disk", 00:15:10.500 "block_size": 512, 
00:15:10.500 "num_blocks": 65536, 00:15:10.500 "uuid": "e8c21670-123c-11ef-8c90-4585f0cfab08", 00:15:10.500 "assigned_rate_limits": { 00:15:10.500 "rw_ios_per_sec": 0, 00:15:10.500 "rw_mbytes_per_sec": 0, 00:15:10.500 "r_mbytes_per_sec": 0, 00:15:10.500 "w_mbytes_per_sec": 0 00:15:10.500 }, 00:15:10.500 "claimed": true, 00:15:10.500 "claim_type": "exclusive_write", 00:15:10.500 "zoned": false, 00:15:10.500 "supported_io_types": { 00:15:10.500 "read": true, 00:15:10.500 "write": true, 00:15:10.500 "unmap": true, 00:15:10.500 "write_zeroes": true, 00:15:10.500 "flush": true, 00:15:10.500 "reset": true, 00:15:10.500 "compare": false, 00:15:10.500 "compare_and_write": false, 00:15:10.500 "abort": true, 00:15:10.500 "nvme_admin": false, 00:15:10.500 "nvme_io": false 00:15:10.500 }, 00:15:10.500 "memory_domains": [ 00:15:10.500 { 00:15:10.500 "dma_device_id": "system", 00:15:10.500 "dma_device_type": 1 00:15:10.500 }, 00:15:10.500 { 00:15:10.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.500 "dma_device_type": 2 00:15:10.500 } 00:15:10.500 ], 00:15:10.500 "driver_specific": {} 00:15:10.500 }' 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:10.500 21:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:10.500 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:10.500 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:10.770 [2024-05-14 21:57:11.250699] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.770 [2024-05-14 21:57:11.250722] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.770 [2024-05-14 21:57:11.250766] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- 
# verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.770 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.028 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.028 "name": "Existed_Raid", 00:15:11.028 "uuid": "e6b490a0-123c-11ef-8c90-4585f0cfab08", 00:15:11.028 "strip_size_kb": 64, 00:15:11.028 "state": "offline", 00:15:11.028 "raid_level": "concat", 00:15:11.028 "superblock": true, 00:15:11.028 "num_base_bdevs": 4, 00:15:11.028 "num_base_bdevs_discovered": 3, 00:15:11.028 "num_base_bdevs_operational": 3, 00:15:11.028 "base_bdevs_list": [ 00:15:11.028 { 00:15:11.028 "name": null, 00:15:11.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.028 "is_configured": false, 00:15:11.028 "data_offset": 2048, 00:15:11.028 "data_size": 63488 00:15:11.028 }, 00:15:11.028 { 00:15:11.028 "name": "BaseBdev2", 00:15:11.028 "uuid": "e7372ebd-123c-11ef-8c90-4585f0cfab08", 00:15:11.028 "is_configured": true, 00:15:11.028 "data_offset": 2048, 00:15:11.028 "data_size": 63488 00:15:11.028 }, 00:15:11.028 { 00:15:11.028 "name": "BaseBdev3", 00:15:11.028 "uuid": "e801d276-123c-11ef-8c90-4585f0cfab08", 00:15:11.028 "is_configured": true, 00:15:11.028 "data_offset": 2048, 00:15:11.028 "data_size": 63488 00:15:11.028 }, 00:15:11.028 { 00:15:11.028 "name": "BaseBdev4", 00:15:11.028 "uuid": "e8c21670-123c-11ef-8c90-4585f0cfab08", 00:15:11.028 "is_configured": true, 00:15:11.028 "data_offset": 2048, 00:15:11.028 "data_size": 63488 00:15:11.028 } 00:15:11.028 ] 00:15:11.028 }' 00:15:11.028 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.028 21:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.286 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:11.286 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:11.286 21:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.286 21:57:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:11.544 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:11.544 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.544 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:11.803 [2024-05-14 21:57:12.320870] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:11.803 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:11.803 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:11.803 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.803 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:12.369 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:12.369 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:12.369 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:12.369 [2024-05-14 21:57:12.914748] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:12.369 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.369 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.369 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.370 21:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:12.628 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:12.628 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:12.628 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:12.885 [2024-05-14 21:57:13.400795] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:12.885 [2024-05-14 21:57:13.400830] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cb7b300 name Existed_Raid, state offline 00:15:12.885 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.885 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.885 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.885 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:15:13.143 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:15:13.143 21:57:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:15:13.143 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:15:13.143 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:15:13.143 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:13.143 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.401 BaseBdev2 00:15:13.660 21:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:15:13.660 21:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:13.660 21:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:13.660 21:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:13.660 21:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:13.660 21:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:13.660 21:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:13.660 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.917 [ 00:15:13.918 { 00:15:13.918 "name": "BaseBdev2", 00:15:13.918 "aliases": [ 00:15:13.918 "ec1289f3-123c-11ef-8c90-4585f0cfab08" 00:15:13.918 ], 00:15:13.918 "product_name": "Malloc disk", 00:15:13.918 "block_size": 512, 00:15:13.918 "num_blocks": 65536, 00:15:13.918 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:13.918 "assigned_rate_limits": { 00:15:13.918 "rw_ios_per_sec": 0, 00:15:13.918 "rw_mbytes_per_sec": 0, 00:15:13.918 "r_mbytes_per_sec": 0, 00:15:13.918 "w_mbytes_per_sec": 0 00:15:13.918 }, 00:15:13.918 "claimed": false, 00:15:13.918 "zoned": false, 00:15:13.918 "supported_io_types": { 00:15:13.918 "read": true, 00:15:13.918 "write": true, 00:15:13.918 "unmap": true, 00:15:13.918 "write_zeroes": true, 00:15:13.918 "flush": true, 00:15:13.918 "reset": true, 00:15:13.918 "compare": false, 00:15:13.918 "compare_and_write": false, 00:15:13.918 "abort": true, 00:15:13.918 "nvme_admin": false, 00:15:13.918 "nvme_io": false 00:15:13.918 }, 00:15:13.918 "memory_domains": [ 00:15:13.918 { 00:15:13.918 "dma_device_id": "system", 00:15:13.918 "dma_device_type": 1 00:15:13.918 }, 00:15:13.918 { 00:15:13.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.918 "dma_device_type": 2 00:15:13.918 } 00:15:13.918 ], 00:15:13.918 "driver_specific": {} 00:15:13.918 } 00:15:13.918 ] 00:15:13.918 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:13.918 21:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:13.918 21:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:13.918 21:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:14.176 BaseBdev3 
00:15:14.433 21:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:15:14.433 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:14.433 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:14.434 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:14.434 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:14.434 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:14.434 21:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.727 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:15.007 [ 00:15:15.007 { 00:15:15.007 "name": "BaseBdev3", 00:15:15.007 "aliases": [ 00:15:15.007 "ec88f29e-123c-11ef-8c90-4585f0cfab08" 00:15:15.007 ], 00:15:15.007 "product_name": "Malloc disk", 00:15:15.007 "block_size": 512, 00:15:15.007 "num_blocks": 65536, 00:15:15.007 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:15.007 "assigned_rate_limits": { 00:15:15.007 "rw_ios_per_sec": 0, 00:15:15.007 "rw_mbytes_per_sec": 0, 00:15:15.007 "r_mbytes_per_sec": 0, 00:15:15.007 "w_mbytes_per_sec": 0 00:15:15.007 }, 00:15:15.007 "claimed": false, 00:15:15.007 "zoned": false, 00:15:15.007 "supported_io_types": { 00:15:15.007 "read": true, 00:15:15.007 "write": true, 00:15:15.007 "unmap": true, 00:15:15.007 "write_zeroes": true, 00:15:15.007 "flush": true, 00:15:15.007 "reset": true, 00:15:15.007 "compare": false, 00:15:15.007 "compare_and_write": false, 00:15:15.007 "abort": true, 00:15:15.007 "nvme_admin": false, 00:15:15.007 "nvme_io": false 00:15:15.007 }, 00:15:15.007 "memory_domains": [ 00:15:15.007 { 00:15:15.007 "dma_device_id": "system", 00:15:15.007 "dma_device_type": 1 00:15:15.007 }, 00:15:15.007 { 00:15:15.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.007 "dma_device_type": 2 00:15:15.007 } 00:15:15.007 ], 00:15:15.007 "driver_specific": {} 00:15:15.007 } 00:15:15.007 ] 00:15:15.007 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:15.007 21:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:15.007 21:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:15.007 21:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:15.007 BaseBdev4 00:15:15.267 21:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:15:15.267 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:15.267 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:15.267 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:15.267 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:15.267 21:57:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:15.267 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.267 21:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:15.843 [ 00:15:15.843 { 00:15:15.843 "name": "BaseBdev4", 00:15:15.843 "aliases": [ 00:15:15.843 "ed091f74-123c-11ef-8c90-4585f0cfab08" 00:15:15.843 ], 00:15:15.843 "product_name": "Malloc disk", 00:15:15.843 "block_size": 512, 00:15:15.843 "num_blocks": 65536, 00:15:15.843 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:15.843 "assigned_rate_limits": { 00:15:15.843 "rw_ios_per_sec": 0, 00:15:15.843 "rw_mbytes_per_sec": 0, 00:15:15.843 "r_mbytes_per_sec": 0, 00:15:15.843 "w_mbytes_per_sec": 0 00:15:15.843 }, 00:15:15.843 "claimed": false, 00:15:15.843 "zoned": false, 00:15:15.843 "supported_io_types": { 00:15:15.843 "read": true, 00:15:15.843 "write": true, 00:15:15.843 "unmap": true, 00:15:15.843 "write_zeroes": true, 00:15:15.843 "flush": true, 00:15:15.843 "reset": true, 00:15:15.843 "compare": false, 00:15:15.843 "compare_and_write": false, 00:15:15.843 "abort": true, 00:15:15.843 "nvme_admin": false, 00:15:15.843 "nvme_io": false 00:15:15.843 }, 00:15:15.843 "memory_domains": [ 00:15:15.843 { 00:15:15.843 "dma_device_id": "system", 00:15:15.843 "dma_device_type": 1 00:15:15.843 }, 00:15:15.843 { 00:15:15.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.843 "dma_device_type": 2 00:15:15.843 } 00:15:15.843 ], 00:15:15.843 "driver_specific": {} 00:15:15.843 } 00:15:15.843 ] 00:15:15.843 21:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:15.843 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:15.843 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:15.843 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:15.843 [2024-05-14 21:57:16.387060] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.843 [2024-05-14 21:57:16.387115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.843 [2024-05-14 21:57:16.387124] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.843 [2024-05-14 21:57:16.387669] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.843 [2024-05-14 21:57:16.387687] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.843 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:15.843 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 
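At this point bdev_raid_create -z 64 -s -r concat has assembled Existed_Raid from BaseBdev2 through BaseBdev4 while BaseBdev1 is still missing, so the test calls verify_raid_bdev_state expecting the array to sit in the configuring state with 3 of 4 base bdevs discovered. A minimal sketch of that check is below, reusing the rpc.py invocation and jq filter from the trace; the rpc variable and the final comparison line are simplified stand-ins for the script's own bookkeeping.

# Query the raid bdev and compare its state and counters against expectations.
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(jq -r .state <<< "$info")
discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
operational=$(jq -r .num_base_bdevs_operational <<< "$info")

# With BaseBdev1 absent the expected answer is: configuring, 3 discovered, 4 operational.
[[ $state == configuring && $discovered -eq 3 && $operational -eq 4 ]]
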
00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.101 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.101 "name": "Existed_Raid", 00:15:16.101 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:16.101 "strip_size_kb": 64, 00:15:16.101 "state": "configuring", 00:15:16.101 "raid_level": "concat", 00:15:16.101 "superblock": true, 00:15:16.101 "num_base_bdevs": 4, 00:15:16.101 "num_base_bdevs_discovered": 3, 00:15:16.101 "num_base_bdevs_operational": 4, 00:15:16.101 "base_bdevs_list": [ 00:15:16.101 { 00:15:16.101 "name": "BaseBdev1", 00:15:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.101 "is_configured": false, 00:15:16.101 "data_offset": 0, 00:15:16.101 "data_size": 0 00:15:16.101 }, 00:15:16.101 { 00:15:16.101 "name": "BaseBdev2", 00:15:16.101 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:16.101 "is_configured": true, 00:15:16.101 "data_offset": 2048, 00:15:16.101 "data_size": 63488 00:15:16.101 }, 00:15:16.101 { 00:15:16.101 "name": "BaseBdev3", 00:15:16.101 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:16.101 "is_configured": true, 00:15:16.101 "data_offset": 2048, 00:15:16.101 "data_size": 63488 00:15:16.101 }, 00:15:16.102 { 00:15:16.102 "name": "BaseBdev4", 00:15:16.102 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:16.102 "is_configured": true, 00:15:16.102 "data_offset": 2048, 00:15:16.102 "data_size": 63488 00:15:16.102 } 00:15:16.102 ] 00:15:16.102 }' 00:15:16.102 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.102 21:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.668 21:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:16.926 [2024-05-14 21:57:17.307074] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.926 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.184 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.184 "name": "Existed_Raid", 00:15:17.184 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:17.184 "strip_size_kb": 64, 00:15:17.184 "state": "configuring", 00:15:17.184 "raid_level": "concat", 00:15:17.184 "superblock": true, 00:15:17.184 "num_base_bdevs": 4, 00:15:17.184 "num_base_bdevs_discovered": 2, 00:15:17.184 "num_base_bdevs_operational": 4, 00:15:17.184 "base_bdevs_list": [ 00:15:17.184 { 00:15:17.184 "name": "BaseBdev1", 00:15:17.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.184 "is_configured": false, 00:15:17.184 "data_offset": 0, 00:15:17.184 "data_size": 0 00:15:17.184 }, 00:15:17.184 { 00:15:17.184 "name": null, 00:15:17.184 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:17.184 "is_configured": false, 00:15:17.184 "data_offset": 2048, 00:15:17.184 "data_size": 63488 00:15:17.184 }, 00:15:17.184 { 00:15:17.184 "name": "BaseBdev3", 00:15:17.184 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:17.184 "is_configured": true, 00:15:17.184 "data_offset": 2048, 00:15:17.184 "data_size": 63488 00:15:17.184 }, 00:15:17.184 { 00:15:17.184 "name": "BaseBdev4", 00:15:17.184 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:17.184 "is_configured": true, 00:15:17.184 "data_offset": 2048, 00:15:17.184 "data_size": 63488 00:15:17.184 } 00:15:17.184 ] 00:15:17.184 }' 00:15:17.184 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.184 21:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.442 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.442 21:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.699 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:15:17.699 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:17.957 [2024-05-14 21:57:18.395225] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.957 BaseBdev1 00:15:17.957 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:15:17.957 21:57:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:17.957 21:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:17.957 21:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:17.957 21:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:17.957 21:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:17.957 21:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.215 21:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.474 [ 00:15:18.474 { 00:15:18.474 "name": "BaseBdev1", 00:15:18.474 "aliases": [ 00:15:18.474 "eeb4fcc7-123c-11ef-8c90-4585f0cfab08" 00:15:18.474 ], 00:15:18.474 "product_name": "Malloc disk", 00:15:18.474 "block_size": 512, 00:15:18.474 "num_blocks": 65536, 00:15:18.474 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:18.474 "assigned_rate_limits": { 00:15:18.474 "rw_ios_per_sec": 0, 00:15:18.474 "rw_mbytes_per_sec": 0, 00:15:18.474 "r_mbytes_per_sec": 0, 00:15:18.474 "w_mbytes_per_sec": 0 00:15:18.474 }, 00:15:18.474 "claimed": true, 00:15:18.474 "claim_type": "exclusive_write", 00:15:18.474 "zoned": false, 00:15:18.474 "supported_io_types": { 00:15:18.474 "read": true, 00:15:18.474 "write": true, 00:15:18.474 "unmap": true, 00:15:18.474 "write_zeroes": true, 00:15:18.474 "flush": true, 00:15:18.474 "reset": true, 00:15:18.474 "compare": false, 00:15:18.474 "compare_and_write": false, 00:15:18.474 "abort": true, 00:15:18.474 "nvme_admin": false, 00:15:18.474 "nvme_io": false 00:15:18.474 }, 00:15:18.474 "memory_domains": [ 00:15:18.474 { 00:15:18.474 "dma_device_id": "system", 00:15:18.474 "dma_device_type": 1 00:15:18.474 }, 00:15:18.474 { 00:15:18.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.474 "dma_device_type": 2 00:15:18.474 } 00:15:18.474 ], 00:15:18.474 "driver_specific": {} 00:15:18.474 } 00:15:18.474 ] 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.474 21:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.746 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.746 "name": "Existed_Raid", 00:15:18.746 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:18.746 "strip_size_kb": 64, 00:15:18.746 "state": "configuring", 00:15:18.746 "raid_level": "concat", 00:15:18.746 "superblock": true, 00:15:18.746 "num_base_bdevs": 4, 00:15:18.746 "num_base_bdevs_discovered": 3, 00:15:18.746 "num_base_bdevs_operational": 4, 00:15:18.746 "base_bdevs_list": [ 00:15:18.746 { 00:15:18.746 "name": "BaseBdev1", 00:15:18.746 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:18.746 "is_configured": true, 00:15:18.746 "data_offset": 2048, 00:15:18.746 "data_size": 63488 00:15:18.746 }, 00:15:18.746 { 00:15:18.746 "name": null, 00:15:18.746 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:18.746 "is_configured": false, 00:15:18.746 "data_offset": 2048, 00:15:18.746 "data_size": 63488 00:15:18.746 }, 00:15:18.746 { 00:15:18.746 "name": "BaseBdev3", 00:15:18.746 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:18.746 "is_configured": true, 00:15:18.746 "data_offset": 2048, 00:15:18.746 "data_size": 63488 00:15:18.746 }, 00:15:18.746 { 00:15:18.746 "name": "BaseBdev4", 00:15:18.746 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:18.746 "is_configured": true, 00:15:18.746 "data_offset": 2048, 00:15:18.746 "data_size": 63488 00:15:18.746 } 00:15:18.746 ] 00:15:18.746 }' 00:15:18.746 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.746 21:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.040 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.040 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:19.297 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:19.297 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:19.555 [2024-05-14 21:57:19.975127] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:19.555 21:57:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.555 21:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.813 21:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.813 "name": "Existed_Raid", 00:15:19.813 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:19.813 "strip_size_kb": 64, 00:15:19.813 "state": "configuring", 00:15:19.813 "raid_level": "concat", 00:15:19.813 "superblock": true, 00:15:19.813 "num_base_bdevs": 4, 00:15:19.813 "num_base_bdevs_discovered": 2, 00:15:19.813 "num_base_bdevs_operational": 4, 00:15:19.813 "base_bdevs_list": [ 00:15:19.813 { 00:15:19.813 "name": "BaseBdev1", 00:15:19.813 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:19.813 "is_configured": true, 00:15:19.813 "data_offset": 2048, 00:15:19.813 "data_size": 63488 00:15:19.813 }, 00:15:19.813 { 00:15:19.813 "name": null, 00:15:19.813 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:19.813 "is_configured": false, 00:15:19.813 "data_offset": 2048, 00:15:19.813 "data_size": 63488 00:15:19.813 }, 00:15:19.813 { 00:15:19.813 "name": null, 00:15:19.813 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:19.813 "is_configured": false, 00:15:19.813 "data_offset": 2048, 00:15:19.813 "data_size": 63488 00:15:19.813 }, 00:15:19.813 { 00:15:19.813 "name": "BaseBdev4", 00:15:19.813 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:19.813 "is_configured": true, 00:15:19.813 "data_offset": 2048, 00:15:19.813 "data_size": 63488 00:15:19.813 } 00:15:19.813 ] 00:15:19.813 }' 00:15:19.813 21:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.813 21:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.071 21:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.071 21:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:20.329 21:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:15:20.329 21:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:20.587 [2024-05-14 21:57:21.119152] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.587 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.588 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.846 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.846 "name": "Existed_Raid", 00:15:20.846 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:20.846 "strip_size_kb": 64, 00:15:20.846 "state": "configuring", 00:15:20.846 "raid_level": "concat", 00:15:20.846 "superblock": true, 00:15:20.846 "num_base_bdevs": 4, 00:15:20.846 "num_base_bdevs_discovered": 3, 00:15:20.846 "num_base_bdevs_operational": 4, 00:15:20.846 "base_bdevs_list": [ 00:15:20.846 { 00:15:20.846 "name": "BaseBdev1", 00:15:20.846 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:20.846 "is_configured": true, 00:15:20.846 "data_offset": 2048, 00:15:20.846 "data_size": 63488 00:15:20.846 }, 00:15:20.846 { 00:15:20.846 "name": null, 00:15:20.846 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:20.846 "is_configured": false, 00:15:20.846 "data_offset": 2048, 00:15:20.846 "data_size": 63488 00:15:20.846 }, 00:15:20.846 { 00:15:20.846 "name": "BaseBdev3", 00:15:20.846 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:20.846 "is_configured": true, 00:15:20.846 "data_offset": 2048, 00:15:20.846 "data_size": 63488 00:15:20.846 }, 00:15:20.846 { 00:15:20.846 "name": "BaseBdev4", 00:15:20.846 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:20.846 "is_configured": true, 00:15:20.846 "data_offset": 2048, 00:15:20.846 "data_size": 63488 00:15:20.846 } 00:15:20.846 ] 00:15:20.846 }' 00:15:20.846 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.846 21:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.105 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.105 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:21.363 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:15:21.363 21:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:21.621 [2024-05-14 21:57:22.143176] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.621 21:57:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.621 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.880 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.880 "name": "Existed_Raid", 00:15:21.880 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:21.880 "strip_size_kb": 64, 00:15:21.880 "state": "configuring", 00:15:21.880 "raid_level": "concat", 00:15:21.880 "superblock": true, 00:15:21.880 "num_base_bdevs": 4, 00:15:21.880 "num_base_bdevs_discovered": 2, 00:15:21.880 "num_base_bdevs_operational": 4, 00:15:21.880 "base_bdevs_list": [ 00:15:21.880 { 00:15:21.880 "name": null, 00:15:21.880 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:21.880 "is_configured": false, 00:15:21.880 "data_offset": 2048, 00:15:21.880 "data_size": 63488 00:15:21.880 }, 00:15:21.880 { 00:15:21.880 "name": null, 00:15:21.880 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:21.880 "is_configured": false, 00:15:21.880 "data_offset": 2048, 00:15:21.880 "data_size": 63488 00:15:21.880 }, 00:15:21.880 { 00:15:21.880 "name": "BaseBdev3", 00:15:21.880 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:21.880 "is_configured": true, 00:15:21.880 "data_offset": 2048, 00:15:21.880 "data_size": 63488 00:15:21.880 }, 00:15:21.880 { 00:15:21.880 "name": "BaseBdev4", 00:15:21.880 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:21.880 "is_configured": true, 00:15:21.880 "data_offset": 2048, 00:15:21.880 "data_size": 63488 00:15:21.880 } 00:15:21.880 ] 00:15:21.880 }' 00:15:21.880 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.880 21:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.138 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.138 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:22.396 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ 
false == \f\a\l\s\e ]] 00:15:22.396 21:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:22.654 [2024-05-14 21:57:23.232871] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.969 "name": "Existed_Raid", 00:15:22.969 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:22.969 "strip_size_kb": 64, 00:15:22.969 "state": "configuring", 00:15:22.969 "raid_level": "concat", 00:15:22.969 "superblock": true, 00:15:22.969 "num_base_bdevs": 4, 00:15:22.969 "num_base_bdevs_discovered": 3, 00:15:22.969 "num_base_bdevs_operational": 4, 00:15:22.969 "base_bdevs_list": [ 00:15:22.969 { 00:15:22.969 "name": null, 00:15:22.969 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:22.969 "is_configured": false, 00:15:22.969 "data_offset": 2048, 00:15:22.969 "data_size": 63488 00:15:22.969 }, 00:15:22.969 { 00:15:22.969 "name": "BaseBdev2", 00:15:22.969 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:22.969 "is_configured": true, 00:15:22.969 "data_offset": 2048, 00:15:22.969 "data_size": 63488 00:15:22.969 }, 00:15:22.969 { 00:15:22.969 "name": "BaseBdev3", 00:15:22.969 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:22.969 "is_configured": true, 00:15:22.969 "data_offset": 2048, 00:15:22.969 "data_size": 63488 00:15:22.969 }, 00:15:22.969 { 00:15:22.969 "name": "BaseBdev4", 00:15:22.969 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:22.969 "is_configured": true, 00:15:22.969 "data_offset": 2048, 00:15:22.969 "data_size": 63488 00:15:22.969 } 00:15:22.969 ] 00:15:22.969 }' 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.969 21:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.535 21:57:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.535 21:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:23.535 21:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:15:23.535 21:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.535 21:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:23.793 21:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u eeb4fcc7-123c-11ef-8c90-4585f0cfab08 00:15:24.051 [2024-05-14 21:57:24.557019] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:24.051 [2024-05-14 21:57:24.557075] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cb7b300 00:15:24.051 [2024-05-14 21:57:24.557080] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:24.051 [2024-05-14 21:57:24.557107] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cbd9e20 00:15:24.051 [2024-05-14 21:57:24.557155] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cb7b300 00:15:24.051 [2024-05-14 21:57:24.557174] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cb7b300 00:15:24.051 [2024-05-14 21:57:24.557196] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.051 NewBaseBdev 00:15:24.051 21:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:15:24.051 21:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:15:24.051 21:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:24.051 21:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:24.051 21:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:24.051 21:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:24.051 21:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.310 21:57:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:24.568 [ 00:15:24.568 { 00:15:24.568 "name": "NewBaseBdev", 00:15:24.569 "aliases": [ 00:15:24.569 "eeb4fcc7-123c-11ef-8c90-4585f0cfab08" 00:15:24.569 ], 00:15:24.569 "product_name": "Malloc disk", 00:15:24.569 "block_size": 512, 00:15:24.569 "num_blocks": 65536, 00:15:24.569 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:24.569 "assigned_rate_limits": { 00:15:24.569 "rw_ios_per_sec": 0, 00:15:24.569 "rw_mbytes_per_sec": 0, 00:15:24.569 "r_mbytes_per_sec": 0, 00:15:24.569 "w_mbytes_per_sec": 0 00:15:24.569 }, 00:15:24.569 "claimed": true, 
00:15:24.569 "claim_type": "exclusive_write", 00:15:24.569 "zoned": false, 00:15:24.569 "supported_io_types": { 00:15:24.569 "read": true, 00:15:24.569 "write": true, 00:15:24.569 "unmap": true, 00:15:24.569 "write_zeroes": true, 00:15:24.569 "flush": true, 00:15:24.569 "reset": true, 00:15:24.569 "compare": false, 00:15:24.569 "compare_and_write": false, 00:15:24.569 "abort": true, 00:15:24.569 "nvme_admin": false, 00:15:24.569 "nvme_io": false 00:15:24.569 }, 00:15:24.569 "memory_domains": [ 00:15:24.569 { 00:15:24.569 "dma_device_id": "system", 00:15:24.569 "dma_device_type": 1 00:15:24.569 }, 00:15:24.569 { 00:15:24.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.569 "dma_device_type": 2 00:15:24.569 } 00:15:24.569 ], 00:15:24.569 "driver_specific": {} 00:15:24.569 } 00:15:24.569 ] 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.569 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.827 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.827 "name": "Existed_Raid", 00:15:24.827 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:24.827 "strip_size_kb": 64, 00:15:24.827 "state": "online", 00:15:24.827 "raid_level": "concat", 00:15:24.827 "superblock": true, 00:15:24.827 "num_base_bdevs": 4, 00:15:24.827 "num_base_bdevs_discovered": 4, 00:15:24.827 "num_base_bdevs_operational": 4, 00:15:24.827 "base_bdevs_list": [ 00:15:24.827 { 00:15:24.827 "name": "NewBaseBdev", 00:15:24.827 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:24.827 "is_configured": true, 00:15:24.827 "data_offset": 2048, 00:15:24.827 "data_size": 63488 00:15:24.827 }, 00:15:24.827 { 00:15:24.827 "name": "BaseBdev2", 00:15:24.827 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:24.827 "is_configured": true, 00:15:24.827 "data_offset": 2048, 00:15:24.827 "data_size": 63488 00:15:24.827 }, 00:15:24.827 { 00:15:24.827 "name": "BaseBdev3", 00:15:24.827 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:24.827 "is_configured": true, 00:15:24.827 "data_offset": 2048, 
00:15:24.827 "data_size": 63488 00:15:24.827 }, 00:15:24.827 { 00:15:24.827 "name": "BaseBdev4", 00:15:24.828 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:24.828 "is_configured": true, 00:15:24.828 "data_offset": 2048, 00:15:24.828 "data_size": 63488 00:15:24.828 } 00:15:24.828 ] 00:15:24.828 }' 00:15:24.828 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.828 21:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:25.086 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:25.345 [2024-05-14 21:57:25.920955] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.603 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:25.603 "name": "Existed_Raid", 00:15:25.603 "aliases": [ 00:15:25.603 "ed829551-123c-11ef-8c90-4585f0cfab08" 00:15:25.603 ], 00:15:25.603 "product_name": "Raid Volume", 00:15:25.603 "block_size": 512, 00:15:25.603 "num_blocks": 253952, 00:15:25.603 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:25.603 "assigned_rate_limits": { 00:15:25.603 "rw_ios_per_sec": 0, 00:15:25.603 "rw_mbytes_per_sec": 0, 00:15:25.603 "r_mbytes_per_sec": 0, 00:15:25.603 "w_mbytes_per_sec": 0 00:15:25.603 }, 00:15:25.603 "claimed": false, 00:15:25.603 "zoned": false, 00:15:25.603 "supported_io_types": { 00:15:25.603 "read": true, 00:15:25.603 "write": true, 00:15:25.603 "unmap": true, 00:15:25.603 "write_zeroes": true, 00:15:25.603 "flush": true, 00:15:25.603 "reset": true, 00:15:25.603 "compare": false, 00:15:25.603 "compare_and_write": false, 00:15:25.603 "abort": false, 00:15:25.603 "nvme_admin": false, 00:15:25.603 "nvme_io": false 00:15:25.603 }, 00:15:25.603 "memory_domains": [ 00:15:25.603 { 00:15:25.603 "dma_device_id": "system", 00:15:25.603 "dma_device_type": 1 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.603 "dma_device_type": 2 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "dma_device_id": "system", 00:15:25.603 "dma_device_type": 1 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.603 "dma_device_type": 2 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "dma_device_id": "system", 00:15:25.603 "dma_device_type": 1 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.603 "dma_device_type": 2 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "dma_device_id": "system", 00:15:25.603 "dma_device_type": 1 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:25.603 "dma_device_type": 2 00:15:25.603 } 00:15:25.603 ], 00:15:25.603 "driver_specific": { 00:15:25.603 "raid": { 00:15:25.603 "uuid": "ed829551-123c-11ef-8c90-4585f0cfab08", 00:15:25.603 "strip_size_kb": 64, 00:15:25.603 "state": "online", 00:15:25.603 "raid_level": "concat", 00:15:25.603 "superblock": true, 00:15:25.603 "num_base_bdevs": 4, 00:15:25.603 "num_base_bdevs_discovered": 4, 00:15:25.603 "num_base_bdevs_operational": 4, 00:15:25.603 "base_bdevs_list": [ 00:15:25.603 { 00:15:25.603 "name": "NewBaseBdev", 00:15:25.603 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:25.603 "is_configured": true, 00:15:25.603 "data_offset": 2048, 00:15:25.603 "data_size": 63488 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "name": "BaseBdev2", 00:15:25.603 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:25.603 "is_configured": true, 00:15:25.603 "data_offset": 2048, 00:15:25.603 "data_size": 63488 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "name": "BaseBdev3", 00:15:25.603 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:25.603 "is_configured": true, 00:15:25.603 "data_offset": 2048, 00:15:25.603 "data_size": 63488 00:15:25.603 }, 00:15:25.603 { 00:15:25.603 "name": "BaseBdev4", 00:15:25.603 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:25.603 "is_configured": true, 00:15:25.603 "data_offset": 2048, 00:15:25.603 "data_size": 63488 00:15:25.603 } 00:15:25.603 ] 00:15:25.603 } 00:15:25.603 } 00:15:25.603 }' 00:15:25.603 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.603 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:15:25.603 BaseBdev2 00:15:25.603 BaseBdev3 00:15:25.603 BaseBdev4' 00:15:25.603 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:25.603 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:25.603 21:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:25.603 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:25.603 "name": "NewBaseBdev", 00:15:25.603 "aliases": [ 00:15:25.603 "eeb4fcc7-123c-11ef-8c90-4585f0cfab08" 00:15:25.603 ], 00:15:25.603 "product_name": "Malloc disk", 00:15:25.603 "block_size": 512, 00:15:25.603 "num_blocks": 65536, 00:15:25.603 "uuid": "eeb4fcc7-123c-11ef-8c90-4585f0cfab08", 00:15:25.603 "assigned_rate_limits": { 00:15:25.603 "rw_ios_per_sec": 0, 00:15:25.603 "rw_mbytes_per_sec": 0, 00:15:25.603 "r_mbytes_per_sec": 0, 00:15:25.603 "w_mbytes_per_sec": 0 00:15:25.603 }, 00:15:25.603 "claimed": true, 00:15:25.603 "claim_type": "exclusive_write", 00:15:25.603 "zoned": false, 00:15:25.603 "supported_io_types": { 00:15:25.603 "read": true, 00:15:25.603 "write": true, 00:15:25.603 "unmap": true, 00:15:25.603 "write_zeroes": true, 00:15:25.603 "flush": true, 00:15:25.603 "reset": true, 00:15:25.603 "compare": false, 00:15:25.603 "compare_and_write": false, 00:15:25.604 "abort": true, 00:15:25.604 "nvme_admin": false, 00:15:25.604 "nvme_io": false 00:15:25.604 }, 00:15:25.604 "memory_domains": [ 00:15:25.604 { 00:15:25.604 "dma_device_id": "system", 00:15:25.604 "dma_device_type": 1 00:15:25.604 }, 00:15:25.604 { 00:15:25.604 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:25.604 "dma_device_type": 2 00:15:25.604 } 00:15:25.604 ], 00:15:25.604 "driver_specific": {} 00:15:25.604 }' 00:15:25.604 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:25.861 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:26.119 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:26.119 "name": "BaseBdev2", 00:15:26.120 "aliases": [ 00:15:26.120 "ec1289f3-123c-11ef-8c90-4585f0cfab08" 00:15:26.120 ], 00:15:26.120 "product_name": "Malloc disk", 00:15:26.120 "block_size": 512, 00:15:26.120 "num_blocks": 65536, 00:15:26.120 "uuid": "ec1289f3-123c-11ef-8c90-4585f0cfab08", 00:15:26.120 "assigned_rate_limits": { 00:15:26.120 "rw_ios_per_sec": 0, 00:15:26.120 "rw_mbytes_per_sec": 0, 00:15:26.120 "r_mbytes_per_sec": 0, 00:15:26.120 "w_mbytes_per_sec": 0 00:15:26.120 }, 00:15:26.120 "claimed": true, 00:15:26.120 "claim_type": "exclusive_write", 00:15:26.120 "zoned": false, 00:15:26.120 "supported_io_types": { 00:15:26.120 "read": true, 00:15:26.120 "write": true, 00:15:26.120 "unmap": true, 00:15:26.120 "write_zeroes": true, 00:15:26.120 "flush": true, 00:15:26.120 "reset": true, 00:15:26.120 "compare": false, 00:15:26.120 "compare_and_write": false, 00:15:26.120 "abort": true, 00:15:26.120 "nvme_admin": false, 00:15:26.120 "nvme_io": false 00:15:26.120 }, 00:15:26.120 "memory_domains": [ 00:15:26.120 { 00:15:26.120 "dma_device_id": "system", 00:15:26.120 "dma_device_type": 1 00:15:26.120 }, 00:15:26.120 { 00:15:26.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.120 "dma_device_type": 2 00:15:26.120 } 00:15:26.120 ], 00:15:26.120 "driver_specific": {} 00:15:26.120 }' 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 
== 512 ]] 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:26.120 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:26.378 "name": "BaseBdev3", 00:15:26.378 "aliases": [ 00:15:26.378 "ec88f29e-123c-11ef-8c90-4585f0cfab08" 00:15:26.378 ], 00:15:26.378 "product_name": "Malloc disk", 00:15:26.378 "block_size": 512, 00:15:26.378 "num_blocks": 65536, 00:15:26.378 "uuid": "ec88f29e-123c-11ef-8c90-4585f0cfab08", 00:15:26.378 "assigned_rate_limits": { 00:15:26.378 "rw_ios_per_sec": 0, 00:15:26.378 "rw_mbytes_per_sec": 0, 00:15:26.378 "r_mbytes_per_sec": 0, 00:15:26.378 "w_mbytes_per_sec": 0 00:15:26.378 }, 00:15:26.378 "claimed": true, 00:15:26.378 "claim_type": "exclusive_write", 00:15:26.378 "zoned": false, 00:15:26.378 "supported_io_types": { 00:15:26.378 "read": true, 00:15:26.378 "write": true, 00:15:26.378 "unmap": true, 00:15:26.378 "write_zeroes": true, 00:15:26.378 "flush": true, 00:15:26.378 "reset": true, 00:15:26.378 "compare": false, 00:15:26.378 "compare_and_write": false, 00:15:26.378 "abort": true, 00:15:26.378 "nvme_admin": false, 00:15:26.378 "nvme_io": false 00:15:26.378 }, 00:15:26.378 "memory_domains": [ 00:15:26.378 { 00:15:26.378 "dma_device_id": "system", 00:15:26.378 "dma_device_type": 1 00:15:26.378 }, 00:15:26.378 { 00:15:26.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.378 "dma_device_type": 2 00:15:26.378 } 00:15:26.378 ], 00:15:26.378 "driver_specific": {} 00:15:26.378 }' 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:26.378 
21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:26.378 21:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:26.637 "name": "BaseBdev4", 00:15:26.637 "aliases": [ 00:15:26.637 "ed091f74-123c-11ef-8c90-4585f0cfab08" 00:15:26.637 ], 00:15:26.637 "product_name": "Malloc disk", 00:15:26.637 "block_size": 512, 00:15:26.637 "num_blocks": 65536, 00:15:26.637 "uuid": "ed091f74-123c-11ef-8c90-4585f0cfab08", 00:15:26.637 "assigned_rate_limits": { 00:15:26.637 "rw_ios_per_sec": 0, 00:15:26.637 "rw_mbytes_per_sec": 0, 00:15:26.637 "r_mbytes_per_sec": 0, 00:15:26.637 "w_mbytes_per_sec": 0 00:15:26.637 }, 00:15:26.637 "claimed": true, 00:15:26.637 "claim_type": "exclusive_write", 00:15:26.637 "zoned": false, 00:15:26.637 "supported_io_types": { 00:15:26.637 "read": true, 00:15:26.637 "write": true, 00:15:26.637 "unmap": true, 00:15:26.637 "write_zeroes": true, 00:15:26.637 "flush": true, 00:15:26.637 "reset": true, 00:15:26.637 "compare": false, 00:15:26.637 "compare_and_write": false, 00:15:26.637 "abort": true, 00:15:26.637 "nvme_admin": false, 00:15:26.637 "nvme_io": false 00:15:26.637 }, 00:15:26.637 "memory_domains": [ 00:15:26.637 { 00:15:26.637 "dma_device_id": "system", 00:15:26.637 "dma_device_type": 1 00:15:26.637 }, 00:15:26.637 { 00:15:26.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.637 "dma_device_type": 2 00:15:26.637 } 00:15:26.637 ], 00:15:26.637 "driver_specific": {} 00:15:26.637 }' 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:26.637 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:26.896 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.896 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:26.896 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:26.896 21:57:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:26.896 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.896 [2024-05-14 21:57:27.472930] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.896 [2024-05-14 21:57:27.472957] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.896 [2024-05-14 21:57:27.472979] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.896 [2024-05-14 21:57:27.472995] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.896 [2024-05-14 21:57:27.473000] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cb7b300 name Existed_Raid, state offline 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 60034 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 60034 ']' 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 60034 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 60034 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:15:27.154 killing process with pid 60034 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60034' 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 60034 00:15:27.154 [2024-05-14 21:57:27.502110] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 60034 00:15:27.154 [2024-05-14 21:57:27.526857] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:15:27.154 00:15:27.154 real 0m26.996s 00:15:27.154 user 0m49.529s 00:15:27.154 sys 0m3.578s 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:27.154 21:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.154 ************************************ 00:15:27.154 END TEST raid_state_function_test_sb 00:15:27.154 ************************************ 00:15:27.412 21:57:27 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:27.412 21:57:27 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:27.413 21:57:27 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:27.413 21:57:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.413 
************************************ 00:15:27.413 START TEST raid_superblock_test 00:15:27.413 ************************************ 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60849 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60849 /var/tmp/spdk-raid.sock 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 60849 ']' 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:27.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:27.413 21:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.413 [2024-05-14 21:57:27.769420] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:15:27.413 [2024-05-14 21:57:27.769684] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:27.978 EAL: TSC is not safe to use in SMP mode 00:15:27.979 EAL: TSC is not invariant 00:15:27.979 [2024-05-14 21:57:28.334488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.979 [2024-05-14 21:57:28.423431] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:27.979 [2024-05-14 21:57:28.425872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.979 [2024-05-14 21:57:28.426696] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.979 [2024-05-14 21:57:28.426716] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.544 21:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:28.544 malloc1 00:15:28.544 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:28.803 [2024-05-14 21:57:29.304586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:28.803 [2024-05-14 21:57:29.304668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.803 [2024-05-14 21:57:29.305320] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb7780 00:15:28.803 [2024-05-14 21:57:29.305361] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.803 [2024-05-14 21:57:29.306241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.803 [2024-05-14 21:57:29.306277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:28.803 pt1 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.803 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:29.061 malloc2 00:15:29.061 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.319 [2024-05-14 21:57:29.824590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.319 [2024-05-14 21:57:29.824649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.319 [2024-05-14 21:57:29.824677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb7c80 00:15:29.319 [2024-05-14 21:57:29.824685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.319 [2024-05-14 21:57:29.825333] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.319 [2024-05-14 21:57:29.825362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.319 pt2 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.319 21:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:29.577 malloc3 00:15:29.577 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:29.835 [2024-05-14 21:57:30.316589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:29.835 [2024-05-14 21:57:30.316657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.835 [2024-05-14 21:57:30.316697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb8180 00:15:29.835 [2024-05-14 21:57:30.316705] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.835 [2024-05-14 21:57:30.317420] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.835 [2024-05-14 21:57:30.317447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:29.835 pt3 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.835 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:30.093 malloc4 00:15:30.093 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:30.350 [2024-05-14 21:57:30.812611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:30.350 [2024-05-14 21:57:30.812684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.351 [2024-05-14 21:57:30.812740] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb8680 00:15:30.351 [2024-05-14 21:57:30.812748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.351 [2024-05-14 21:57:30.813422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.351 [2024-05-14 21:57:30.813448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:30.351 pt4 00:15:30.351 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:30.351 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:30.351 21:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:30.608 [2024-05-14 21:57:31.056632] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:30.608 [2024-05-14 21:57:31.057255] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.608 [2024-05-14 21:57:31.057276] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:30.608 [2024-05-14 21:57:31.057288] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:30.608 [2024-05-14 21:57:31.057342] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cfbc300 00:15:30.608 [2024-05-14 21:57:31.057348] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:30.608 [2024-05-14 21:57:31.057382] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82d01ae20 00:15:30.608 [2024-05-14 21:57:31.057456] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cfbc300 00:15:30.608 [2024-05-14 21:57:31.057460] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cfbc300 00:15:30.608 [2024-05-14 21:57:31.057488] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.608 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.890 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.890 "name": "raid_bdev1", 00:15:30.890 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:30.890 "strip_size_kb": 64, 00:15:30.890 "state": "online", 00:15:30.890 "raid_level": "concat", 00:15:30.890 "superblock": true, 00:15:30.890 "num_base_bdevs": 4, 00:15:30.890 "num_base_bdevs_discovered": 4, 00:15:30.890 "num_base_bdevs_operational": 4, 00:15:30.890 "base_bdevs_list": [ 00:15:30.890 { 00:15:30.890 "name": "pt1", 00:15:30.890 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:30.890 "is_configured": true, 00:15:30.890 "data_offset": 2048, 00:15:30.890 "data_size": 63488 00:15:30.890 }, 00:15:30.890 { 00:15:30.890 "name": "pt2", 00:15:30.890 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:30.890 "is_configured": true, 00:15:30.890 "data_offset": 2048, 00:15:30.890 "data_size": 63488 00:15:30.890 }, 00:15:30.890 { 00:15:30.890 "name": "pt3", 00:15:30.890 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:30.890 "is_configured": true, 00:15:30.890 "data_offset": 2048, 00:15:30.890 "data_size": 63488 00:15:30.890 }, 00:15:30.890 { 00:15:30.890 "name": "pt4", 00:15:30.890 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:30.890 "is_configured": true, 00:15:30.890 "data_offset": 2048, 00:15:30.890 "data_size": 63488 00:15:30.890 } 00:15:30.890 ] 00:15:30.890 }' 00:15:30.890 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.890 21:57:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:31.182 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:31.440 [2024-05-14 21:57:31.952683] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.440 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:31.440 "name": "raid_bdev1", 00:15:31.440 "aliases": [ 00:15:31.440 "f640fb5a-123c-11ef-8c90-4585f0cfab08" 00:15:31.440 ], 00:15:31.440 "product_name": "Raid Volume", 00:15:31.440 "block_size": 512, 00:15:31.440 "num_blocks": 253952, 00:15:31.440 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:31.440 "assigned_rate_limits": { 00:15:31.440 "rw_ios_per_sec": 0, 00:15:31.440 "rw_mbytes_per_sec": 0, 00:15:31.440 "r_mbytes_per_sec": 0, 00:15:31.440 "w_mbytes_per_sec": 0 00:15:31.440 }, 00:15:31.440 "claimed": false, 00:15:31.440 "zoned": false, 00:15:31.440 "supported_io_types": { 00:15:31.440 "read": true, 00:15:31.440 "write": true, 00:15:31.440 "unmap": true, 00:15:31.440 "write_zeroes": true, 00:15:31.440 "flush": true, 00:15:31.440 "reset": true, 00:15:31.440 "compare": false, 00:15:31.440 "compare_and_write": false, 00:15:31.440 "abort": false, 00:15:31.440 "nvme_admin": false, 00:15:31.440 "nvme_io": false 00:15:31.440 }, 00:15:31.440 "memory_domains": [ 00:15:31.440 { 00:15:31.440 "dma_device_id": "system", 00:15:31.440 "dma_device_type": 1 00:15:31.440 }, 00:15:31.440 { 00:15:31.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.440 "dma_device_type": 2 00:15:31.440 }, 00:15:31.440 { 00:15:31.440 "dma_device_id": "system", 00:15:31.440 "dma_device_type": 1 00:15:31.440 }, 00:15:31.440 { 00:15:31.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.440 "dma_device_type": 2 00:15:31.440 }, 00:15:31.440 { 00:15:31.440 "dma_device_id": "system", 00:15:31.440 "dma_device_type": 1 00:15:31.440 }, 00:15:31.440 { 00:15:31.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.441 "dma_device_type": 2 00:15:31.441 }, 00:15:31.441 { 00:15:31.441 "dma_device_id": "system", 00:15:31.441 "dma_device_type": 1 00:15:31.441 }, 00:15:31.441 { 00:15:31.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.441 "dma_device_type": 2 00:15:31.441 } 00:15:31.441 ], 00:15:31.441 "driver_specific": { 00:15:31.441 "raid": { 00:15:31.441 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:31.441 "strip_size_kb": 64, 00:15:31.441 "state": "online", 00:15:31.441 "raid_level": "concat", 00:15:31.441 "superblock": true, 00:15:31.441 "num_base_bdevs": 4, 00:15:31.441 "num_base_bdevs_discovered": 4, 00:15:31.441 "num_base_bdevs_operational": 4, 00:15:31.441 "base_bdevs_list": [ 00:15:31.441 { 00:15:31.441 "name": "pt1", 00:15:31.441 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:31.441 "is_configured": true, 00:15:31.441 "data_offset": 2048, 
00:15:31.441 "data_size": 63488 00:15:31.441 }, 00:15:31.441 { 00:15:31.441 "name": "pt2", 00:15:31.441 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:31.441 "is_configured": true, 00:15:31.441 "data_offset": 2048, 00:15:31.441 "data_size": 63488 00:15:31.441 }, 00:15:31.441 { 00:15:31.441 "name": "pt3", 00:15:31.441 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:31.441 "is_configured": true, 00:15:31.441 "data_offset": 2048, 00:15:31.441 "data_size": 63488 00:15:31.441 }, 00:15:31.441 { 00:15:31.441 "name": "pt4", 00:15:31.441 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:31.441 "is_configured": true, 00:15:31.441 "data_offset": 2048, 00:15:31.441 "data_size": 63488 00:15:31.441 } 00:15:31.441 ] 00:15:31.441 } 00:15:31.441 } 00:15:31.441 }' 00:15:31.441 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.441 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:15:31.441 pt2 00:15:31.441 pt3 00:15:31.441 pt4' 00:15:31.441 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:31.441 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:31.441 21:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:31.699 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:31.699 "name": "pt1", 00:15:31.699 "aliases": [ 00:15:31.699 "51c73d35-983f-1f57-a0f8-f54fb3836c95" 00:15:31.699 ], 00:15:31.699 "product_name": "passthru", 00:15:31.699 "block_size": 512, 00:15:31.699 "num_blocks": 65536, 00:15:31.699 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:31.699 "assigned_rate_limits": { 00:15:31.699 "rw_ios_per_sec": 0, 00:15:31.699 "rw_mbytes_per_sec": 0, 00:15:31.699 "r_mbytes_per_sec": 0, 00:15:31.699 "w_mbytes_per_sec": 0 00:15:31.699 }, 00:15:31.699 "claimed": true, 00:15:31.699 "claim_type": "exclusive_write", 00:15:31.699 "zoned": false, 00:15:31.699 "supported_io_types": { 00:15:31.699 "read": true, 00:15:31.699 "write": true, 00:15:31.699 "unmap": true, 00:15:31.699 "write_zeroes": true, 00:15:31.699 "flush": true, 00:15:31.699 "reset": true, 00:15:31.699 "compare": false, 00:15:31.699 "compare_and_write": false, 00:15:31.699 "abort": true, 00:15:31.699 "nvme_admin": false, 00:15:31.699 "nvme_io": false 00:15:31.699 }, 00:15:31.699 "memory_domains": [ 00:15:31.699 { 00:15:31.699 "dma_device_id": "system", 00:15:31.699 "dma_device_type": 1 00:15:31.699 }, 00:15:31.699 { 00:15:31.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.699 "dma_device_type": 2 00:15:31.699 } 00:15:31.699 ], 00:15:31.699 "driver_specific": { 00:15:31.699 "passthru": { 00:15:31.700 "name": "pt1", 00:15:31.700 "base_bdev_name": "malloc1" 00:15:31.700 } 00:15:31.700 } 00:15:31.700 }' 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:31.700 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:31.957 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:31.957 "name": "pt2", 00:15:31.957 "aliases": [ 00:15:31.957 "d35e230c-3e82-8951-928f-7a4439a0e347" 00:15:31.957 ], 00:15:31.957 "product_name": "passthru", 00:15:31.957 "block_size": 512, 00:15:31.957 "num_blocks": 65536, 00:15:31.957 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:31.957 "assigned_rate_limits": { 00:15:31.957 "rw_ios_per_sec": 0, 00:15:31.957 "rw_mbytes_per_sec": 0, 00:15:31.957 "r_mbytes_per_sec": 0, 00:15:31.957 "w_mbytes_per_sec": 0 00:15:31.957 }, 00:15:31.957 "claimed": true, 00:15:31.957 "claim_type": "exclusive_write", 00:15:31.957 "zoned": false, 00:15:31.957 "supported_io_types": { 00:15:31.957 "read": true, 00:15:31.957 "write": true, 00:15:31.957 "unmap": true, 00:15:31.957 "write_zeroes": true, 00:15:31.957 "flush": true, 00:15:31.957 "reset": true, 00:15:31.957 "compare": false, 00:15:31.957 "compare_and_write": false, 00:15:31.957 "abort": true, 00:15:31.957 "nvme_admin": false, 00:15:31.957 "nvme_io": false 00:15:31.957 }, 00:15:31.957 "memory_domains": [ 00:15:31.957 { 00:15:31.957 "dma_device_id": "system", 00:15:31.957 "dma_device_type": 1 00:15:31.957 }, 00:15:31.957 { 00:15:31.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.957 "dma_device_type": 2 00:15:31.957 } 00:15:31.957 ], 00:15:31.957 "driver_specific": { 00:15:31.957 "passthru": { 00:15:31.957 "name": "pt2", 00:15:31.957 "base_bdev_name": "malloc2" 00:15:31.957 } 00:15:31.957 } 00:15:31.957 }' 00:15:31.957 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 
00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:32.215 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:32.473 "name": "pt3", 00:15:32.473 "aliases": [ 00:15:32.473 "fdf95421-348f-c559-9a93-69aadb5e2733" 00:15:32.473 ], 00:15:32.473 "product_name": "passthru", 00:15:32.473 "block_size": 512, 00:15:32.473 "num_blocks": 65536, 00:15:32.473 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:32.473 "assigned_rate_limits": { 00:15:32.473 "rw_ios_per_sec": 0, 00:15:32.473 "rw_mbytes_per_sec": 0, 00:15:32.473 "r_mbytes_per_sec": 0, 00:15:32.473 "w_mbytes_per_sec": 0 00:15:32.473 }, 00:15:32.473 "claimed": true, 00:15:32.473 "claim_type": "exclusive_write", 00:15:32.473 "zoned": false, 00:15:32.473 "supported_io_types": { 00:15:32.473 "read": true, 00:15:32.473 "write": true, 00:15:32.473 "unmap": true, 00:15:32.473 "write_zeroes": true, 00:15:32.473 "flush": true, 00:15:32.473 "reset": true, 00:15:32.473 "compare": false, 00:15:32.473 "compare_and_write": false, 00:15:32.473 "abort": true, 00:15:32.473 "nvme_admin": false, 00:15:32.473 "nvme_io": false 00:15:32.473 }, 00:15:32.473 "memory_domains": [ 00:15:32.473 { 00:15:32.473 "dma_device_id": "system", 00:15:32.473 "dma_device_type": 1 00:15:32.473 }, 00:15:32.473 { 00:15:32.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.473 "dma_device_type": 2 00:15:32.473 } 00:15:32.473 ], 00:15:32.473 "driver_specific": { 00:15:32.473 "passthru": { 00:15:32.473 "name": "pt3", 00:15:32.473 "base_bdev_name": "malloc3" 00:15:32.473 } 00:15:32.473 } 00:15:32.473 }' 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:32.473 21:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:32.730 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:32.730 "name": "pt4", 00:15:32.730 "aliases": [ 00:15:32.730 "e4f993e0-5b43-2d52-b1cb-753ab793496e" 00:15:32.730 ], 00:15:32.730 "product_name": "passthru", 00:15:32.730 "block_size": 512, 00:15:32.731 "num_blocks": 65536, 00:15:32.731 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:32.731 "assigned_rate_limits": { 00:15:32.731 "rw_ios_per_sec": 0, 00:15:32.731 "rw_mbytes_per_sec": 0, 00:15:32.731 "r_mbytes_per_sec": 0, 00:15:32.731 "w_mbytes_per_sec": 0 00:15:32.731 }, 00:15:32.731 "claimed": true, 00:15:32.731 "claim_type": "exclusive_write", 00:15:32.731 "zoned": false, 00:15:32.731 "supported_io_types": { 00:15:32.731 "read": true, 00:15:32.731 "write": true, 00:15:32.731 "unmap": true, 00:15:32.731 "write_zeroes": true, 00:15:32.731 "flush": true, 00:15:32.731 "reset": true, 00:15:32.731 "compare": false, 00:15:32.731 "compare_and_write": false, 00:15:32.731 "abort": true, 00:15:32.731 "nvme_admin": false, 00:15:32.731 "nvme_io": false 00:15:32.731 }, 00:15:32.731 "memory_domains": [ 00:15:32.731 { 00:15:32.731 "dma_device_id": "system", 00:15:32.731 "dma_device_type": 1 00:15:32.731 }, 00:15:32.731 { 00:15:32.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.731 "dma_device_type": 2 00:15:32.731 } 00:15:32.731 ], 00:15:32.731 "driver_specific": { 00:15:32.731 "passthru": { 00:15:32.731 "name": "pt4", 00:15:32.731 "base_bdev_name": "malloc4" 00:15:32.731 } 00:15:32.731 } 00:15:32.731 }' 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:32.731 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:33.295 [2024-05-14 21:57:33.596775] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.295 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f640fb5a-123c-11ef-8c90-4585f0cfab08 00:15:33.295 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f640fb5a-123c-11ef-8c90-4585f0cfab08 ']' 00:15:33.295 
21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:33.552 [2024-05-14 21:57:33.892819] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.552 [2024-05-14 21:57:33.892843] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.553 [2024-05-14 21:57:33.892865] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.553 [2024-05-14 21:57:33.892882] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.553 [2024-05-14 21:57:33.892886] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cfbc300 name raid_bdev1, state offline 00:15:33.553 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:33.553 21:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.810 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:33.810 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:33.810 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:33.810 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:34.068 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.068 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:34.326 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.326 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:34.584 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.584 21:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:34.841 21:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:34.841 21:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:35.100 21:57:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:35.100 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:35.362 [2024-05-14 21:57:35.752888] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:35.362 [2024-05-14 21:57:35.753478] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:35.362 [2024-05-14 21:57:35.753495] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:35.362 [2024-05-14 21:57:35.753520] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:35.362 [2024-05-14 21:57:35.753535] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:35.362 [2024-05-14 21:57:35.753573] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:35.362 [2024-05-14 21:57:35.753585] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:35.362 [2024-05-14 21:57:35.753595] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:35.362 [2024-05-14 21:57:35.753603] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.362 [2024-05-14 21:57:35.753608] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cfbc300 name raid_bdev1, state configuring 00:15:35.362 request: 00:15:35.362 { 00:15:35.362 "name": "raid_bdev1", 00:15:35.362 "raid_level": "concat", 00:15:35.362 "base_bdevs": [ 00:15:35.362 "malloc1", 00:15:35.362 "malloc2", 00:15:35.363 "malloc3", 00:15:35.363 "malloc4" 00:15:35.363 ], 00:15:35.363 "superblock": false, 00:15:35.363 "strip_size_kb": 64, 00:15:35.363 "method": "bdev_raid_create", 00:15:35.363 "req_id": 1 00:15:35.363 } 00:15:35.363 Got JSON-RPC error response 00:15:35.363 response: 00:15:35.363 { 00:15:35.363 "code": -17, 00:15:35.363 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:35.363 } 00:15:35.363 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:35.363 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 
00:15:35.363 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:35.363 21:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:35.363 21:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.363 21:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:35.620 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:35.620 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:35.620 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.911 [2024-05-14 21:57:36.368896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.911 [2024-05-14 21:57:36.368954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.911 [2024-05-14 21:57:36.368982] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb8680 00:15:35.911 [2024-05-14 21:57:36.368991] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.911 [2024-05-14 21:57:36.369631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.911 [2024-05-14 21:57:36.369657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.911 [2024-05-14 21:57:36.369682] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:35.911 [2024-05-14 21:57:36.369694] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.911 pt1 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.911 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.169 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.169 "name": "raid_bdev1", 00:15:36.169 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:36.169 "strip_size_kb": 
64, 00:15:36.169 "state": "configuring", 00:15:36.169 "raid_level": "concat", 00:15:36.169 "superblock": true, 00:15:36.169 "num_base_bdevs": 4, 00:15:36.169 "num_base_bdevs_discovered": 1, 00:15:36.169 "num_base_bdevs_operational": 4, 00:15:36.169 "base_bdevs_list": [ 00:15:36.169 { 00:15:36.169 "name": "pt1", 00:15:36.169 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:36.169 "is_configured": true, 00:15:36.169 "data_offset": 2048, 00:15:36.169 "data_size": 63488 00:15:36.169 }, 00:15:36.169 { 00:15:36.169 "name": null, 00:15:36.169 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:36.169 "is_configured": false, 00:15:36.169 "data_offset": 2048, 00:15:36.169 "data_size": 63488 00:15:36.169 }, 00:15:36.169 { 00:15:36.169 "name": null, 00:15:36.169 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:36.169 "is_configured": false, 00:15:36.169 "data_offset": 2048, 00:15:36.169 "data_size": 63488 00:15:36.169 }, 00:15:36.169 { 00:15:36.169 "name": null, 00:15:36.169 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:36.169 "is_configured": false, 00:15:36.169 "data_offset": 2048, 00:15:36.169 "data_size": 63488 00:15:36.169 } 00:15:36.169 ] 00:15:36.169 }' 00:15:36.169 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.169 21:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.430 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:36.430 21:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.688 [2024-05-14 21:57:37.272899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.688 [2024-05-14 21:57:37.272957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.688 [2024-05-14 21:57:37.272995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb7c80 00:15:36.688 [2024-05-14 21:57:37.273003] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.688 [2024-05-14 21:57:37.273131] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.688 [2024-05-14 21:57:37.273159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.688 [2024-05-14 21:57:37.273183] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:36.688 [2024-05-14 21:57:37.273208] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.945 pt2 00:15:36.945 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:37.203 [2024-05-14 21:57:37.556914] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.203 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.461 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.461 "name": "raid_bdev1", 00:15:37.461 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:37.461 "strip_size_kb": 64, 00:15:37.461 "state": "configuring", 00:15:37.461 "raid_level": "concat", 00:15:37.461 "superblock": true, 00:15:37.461 "num_base_bdevs": 4, 00:15:37.461 "num_base_bdevs_discovered": 1, 00:15:37.461 "num_base_bdevs_operational": 4, 00:15:37.461 "base_bdevs_list": [ 00:15:37.461 { 00:15:37.461 "name": "pt1", 00:15:37.461 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:37.461 "is_configured": true, 00:15:37.461 "data_offset": 2048, 00:15:37.461 "data_size": 63488 00:15:37.461 }, 00:15:37.461 { 00:15:37.461 "name": null, 00:15:37.461 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:37.461 "is_configured": false, 00:15:37.461 "data_offset": 2048, 00:15:37.461 "data_size": 63488 00:15:37.461 }, 00:15:37.461 { 00:15:37.461 "name": null, 00:15:37.461 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:37.461 "is_configured": false, 00:15:37.461 "data_offset": 2048, 00:15:37.461 "data_size": 63488 00:15:37.461 }, 00:15:37.461 { 00:15:37.461 "name": null, 00:15:37.461 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:37.461 "is_configured": false, 00:15:37.461 "data_offset": 2048, 00:15:37.461 "data_size": 63488 00:15:37.461 } 00:15:37.461 ] 00:15:37.461 }' 00:15:37.461 21:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.461 21:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.719 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:37.719 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:37.719 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.976 [2024-05-14 21:57:38.432921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.976 [2024-05-14 21:57:38.432986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.976 [2024-05-14 21:57:38.433014] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb7c80 00:15:37.976 [2024-05-14 21:57:38.433030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.976 [2024-05-14 21:57:38.433143] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.976 [2024-05-14 21:57:38.433155] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.976 [2024-05-14 21:57:38.433179] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:37.976 [2024-05-14 21:57:38.433193] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.976 pt2 00:15:37.976 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:37.976 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:37.976 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.234 [2024-05-14 21:57:38.684929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.234 [2024-05-14 21:57:38.684977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.234 [2024-05-14 21:57:38.685001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb7780 00:15:38.234 [2024-05-14 21:57:38.685010] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.234 [2024-05-14 21:57:38.685124] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.234 [2024-05-14 21:57:38.685135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.234 [2024-05-14 21:57:38.685167] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:38.234 [2024-05-14 21:57:38.685176] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.234 pt3 00:15:38.234 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.234 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.234 21:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:38.492 [2024-05-14 21:57:39.008935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:38.492 [2024-05-14 21:57:39.008994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.492 [2024-05-14 21:57:39.009022] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cfb8900 00:15:38.492 [2024-05-14 21:57:39.009030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.492 [2024-05-14 21:57:39.009157] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.492 [2024-05-14 21:57:39.009168] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:38.492 [2024-05-14 21:57:39.009191] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:15:38.492 [2024-05-14 21:57:39.009200] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:38.492 [2024-05-14 21:57:39.009238] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cfbc300 00:15:38.492 [2024-05-14 21:57:39.009242] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:38.492 [2024-05-14 21:57:39.009265] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82d01ae20 00:15:38.492 [2024-05-14 21:57:39.009317] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cfbc300 00:15:38.492 [2024-05-14 21:57:39.009321] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cfbc300 00:15:38.492 [2024-05-14 21:57:39.009346] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.492 pt4 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.492 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.750 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.750 "name": "raid_bdev1", 00:15:38.750 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:38.750 "strip_size_kb": 64, 00:15:38.750 "state": "online", 00:15:38.750 "raid_level": "concat", 00:15:38.750 "superblock": true, 00:15:38.750 "num_base_bdevs": 4, 00:15:38.750 "num_base_bdevs_discovered": 4, 00:15:38.751 "num_base_bdevs_operational": 4, 00:15:38.751 "base_bdevs_list": [ 00:15:38.751 { 00:15:38.751 "name": "pt1", 00:15:38.751 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:38.751 "is_configured": true, 00:15:38.751 "data_offset": 2048, 00:15:38.751 "data_size": 63488 00:15:38.751 }, 00:15:38.751 { 00:15:38.751 "name": "pt2", 00:15:38.751 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:38.751 "is_configured": true, 00:15:38.751 "data_offset": 2048, 00:15:38.751 "data_size": 63488 00:15:38.751 }, 00:15:38.751 { 00:15:38.751 "name": "pt3", 00:15:38.751 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:38.751 "is_configured": true, 00:15:38.751 "data_offset": 2048, 00:15:38.751 "data_size": 63488 00:15:38.751 }, 00:15:38.751 { 00:15:38.751 "name": "pt4", 00:15:38.751 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:38.751 "is_configured": true, 00:15:38.751 "data_offset": 2048, 00:15:38.751 "data_size": 63488 00:15:38.751 } 00:15:38.751 ] 00:15:38.751 }' 00:15:38.751 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:15:38.751 21:57:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:39.318 [2024-05-14 21:57:39.872981] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.318 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:39.318 "name": "raid_bdev1", 00:15:39.319 "aliases": [ 00:15:39.319 "f640fb5a-123c-11ef-8c90-4585f0cfab08" 00:15:39.319 ], 00:15:39.319 "product_name": "Raid Volume", 00:15:39.319 "block_size": 512, 00:15:39.319 "num_blocks": 253952, 00:15:39.319 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:39.319 "assigned_rate_limits": { 00:15:39.319 "rw_ios_per_sec": 0, 00:15:39.319 "rw_mbytes_per_sec": 0, 00:15:39.319 "r_mbytes_per_sec": 0, 00:15:39.319 "w_mbytes_per_sec": 0 00:15:39.319 }, 00:15:39.319 "claimed": false, 00:15:39.319 "zoned": false, 00:15:39.319 "supported_io_types": { 00:15:39.319 "read": true, 00:15:39.319 "write": true, 00:15:39.319 "unmap": true, 00:15:39.319 "write_zeroes": true, 00:15:39.319 "flush": true, 00:15:39.319 "reset": true, 00:15:39.319 "compare": false, 00:15:39.319 "compare_and_write": false, 00:15:39.319 "abort": false, 00:15:39.319 "nvme_admin": false, 00:15:39.319 "nvme_io": false 00:15:39.319 }, 00:15:39.319 "memory_domains": [ 00:15:39.319 { 00:15:39.319 "dma_device_id": "system", 00:15:39.319 "dma_device_type": 1 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.319 "dma_device_type": 2 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "dma_device_id": "system", 00:15:39.319 "dma_device_type": 1 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.319 "dma_device_type": 2 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "dma_device_id": "system", 00:15:39.319 "dma_device_type": 1 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.319 "dma_device_type": 2 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "dma_device_id": "system", 00:15:39.319 "dma_device_type": 1 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.319 "dma_device_type": 2 00:15:39.319 } 00:15:39.319 ], 00:15:39.319 "driver_specific": { 00:15:39.319 "raid": { 00:15:39.319 "uuid": "f640fb5a-123c-11ef-8c90-4585f0cfab08", 00:15:39.319 "strip_size_kb": 64, 00:15:39.319 "state": "online", 00:15:39.319 "raid_level": "concat", 00:15:39.319 "superblock": true, 00:15:39.319 "num_base_bdevs": 4, 00:15:39.319 "num_base_bdevs_discovered": 4, 00:15:39.319 "num_base_bdevs_operational": 4, 00:15:39.319 "base_bdevs_list": [ 
00:15:39.319 { 00:15:39.319 "name": "pt1", 00:15:39.319 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:39.319 "is_configured": true, 00:15:39.319 "data_offset": 2048, 00:15:39.319 "data_size": 63488 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "name": "pt2", 00:15:39.319 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:39.319 "is_configured": true, 00:15:39.319 "data_offset": 2048, 00:15:39.319 "data_size": 63488 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "name": "pt3", 00:15:39.319 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:39.319 "is_configured": true, 00:15:39.319 "data_offset": 2048, 00:15:39.319 "data_size": 63488 00:15:39.319 }, 00:15:39.319 { 00:15:39.319 "name": "pt4", 00:15:39.319 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:39.319 "is_configured": true, 00:15:39.319 "data_offset": 2048, 00:15:39.319 "data_size": 63488 00:15:39.319 } 00:15:39.319 ] 00:15:39.319 } 00:15:39.319 } 00:15:39.319 }' 00:15:39.319 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.319 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:15:39.319 pt2 00:15:39.319 pt3 00:15:39.319 pt4' 00:15:39.319 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:39.319 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:39.319 21:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:39.884 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:39.884 "name": "pt1", 00:15:39.884 "aliases": [ 00:15:39.884 "51c73d35-983f-1f57-a0f8-f54fb3836c95" 00:15:39.884 ], 00:15:39.884 "product_name": "passthru", 00:15:39.884 "block_size": 512, 00:15:39.884 "num_blocks": 65536, 00:15:39.884 "uuid": "51c73d35-983f-1f57-a0f8-f54fb3836c95", 00:15:39.884 "assigned_rate_limits": { 00:15:39.884 "rw_ios_per_sec": 0, 00:15:39.884 "rw_mbytes_per_sec": 0, 00:15:39.884 "r_mbytes_per_sec": 0, 00:15:39.884 "w_mbytes_per_sec": 0 00:15:39.884 }, 00:15:39.884 "claimed": true, 00:15:39.884 "claim_type": "exclusive_write", 00:15:39.884 "zoned": false, 00:15:39.884 "supported_io_types": { 00:15:39.884 "read": true, 00:15:39.884 "write": true, 00:15:39.884 "unmap": true, 00:15:39.884 "write_zeroes": true, 00:15:39.884 "flush": true, 00:15:39.884 "reset": true, 00:15:39.885 "compare": false, 00:15:39.885 "compare_and_write": false, 00:15:39.885 "abort": true, 00:15:39.885 "nvme_admin": false, 00:15:39.885 "nvme_io": false 00:15:39.885 }, 00:15:39.885 "memory_domains": [ 00:15:39.885 { 00:15:39.885 "dma_device_id": "system", 00:15:39.885 "dma_device_type": 1 00:15:39.885 }, 00:15:39.885 { 00:15:39.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.885 "dma_device_type": 2 00:15:39.885 } 00:15:39.885 ], 00:15:39.885 "driver_specific": { 00:15:39.885 "passthru": { 00:15:39.885 "name": "pt1", 00:15:39.885 "base_bdev_name": "malloc1" 00:15:39.885 } 00:15:39.885 } 00:15:39.885 }' 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:39.885 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:40.143 "name": "pt2", 00:15:40.143 "aliases": [ 00:15:40.143 "d35e230c-3e82-8951-928f-7a4439a0e347" 00:15:40.143 ], 00:15:40.143 "product_name": "passthru", 00:15:40.143 "block_size": 512, 00:15:40.143 "num_blocks": 65536, 00:15:40.143 "uuid": "d35e230c-3e82-8951-928f-7a4439a0e347", 00:15:40.143 "assigned_rate_limits": { 00:15:40.143 "rw_ios_per_sec": 0, 00:15:40.143 "rw_mbytes_per_sec": 0, 00:15:40.143 "r_mbytes_per_sec": 0, 00:15:40.143 "w_mbytes_per_sec": 0 00:15:40.143 }, 00:15:40.143 "claimed": true, 00:15:40.143 "claim_type": "exclusive_write", 00:15:40.143 "zoned": false, 00:15:40.143 "supported_io_types": { 00:15:40.143 "read": true, 00:15:40.143 "write": true, 00:15:40.143 "unmap": true, 00:15:40.143 "write_zeroes": true, 00:15:40.143 "flush": true, 00:15:40.143 "reset": true, 00:15:40.143 "compare": false, 00:15:40.143 "compare_and_write": false, 00:15:40.143 "abort": true, 00:15:40.143 "nvme_admin": false, 00:15:40.143 "nvme_io": false 00:15:40.143 }, 00:15:40.143 "memory_domains": [ 00:15:40.143 { 00:15:40.143 "dma_device_id": "system", 00:15:40.143 "dma_device_type": 1 00:15:40.143 }, 00:15:40.143 { 00:15:40.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.143 "dma_device_type": 2 00:15:40.143 } 00:15:40.143 ], 00:15:40.143 "driver_specific": { 00:15:40.143 "passthru": { 00:15:40.143 "name": "pt2", 00:15:40.143 "base_bdev_name": "malloc2" 00:15:40.143 } 00:15:40.143 } 00:15:40.143 }' 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:40.143 
21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:40.143 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:40.401 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:40.401 "name": "pt3", 00:15:40.401 "aliases": [ 00:15:40.401 "fdf95421-348f-c559-9a93-69aadb5e2733" 00:15:40.401 ], 00:15:40.401 "product_name": "passthru", 00:15:40.401 "block_size": 512, 00:15:40.401 "num_blocks": 65536, 00:15:40.401 "uuid": "fdf95421-348f-c559-9a93-69aadb5e2733", 00:15:40.401 "assigned_rate_limits": { 00:15:40.402 "rw_ios_per_sec": 0, 00:15:40.402 "rw_mbytes_per_sec": 0, 00:15:40.402 "r_mbytes_per_sec": 0, 00:15:40.402 "w_mbytes_per_sec": 0 00:15:40.402 }, 00:15:40.402 "claimed": true, 00:15:40.402 "claim_type": "exclusive_write", 00:15:40.402 "zoned": false, 00:15:40.402 "supported_io_types": { 00:15:40.402 "read": true, 00:15:40.402 "write": true, 00:15:40.402 "unmap": true, 00:15:40.402 "write_zeroes": true, 00:15:40.402 "flush": true, 00:15:40.402 "reset": true, 00:15:40.402 "compare": false, 00:15:40.402 "compare_and_write": false, 00:15:40.402 "abort": true, 00:15:40.402 "nvme_admin": false, 00:15:40.402 "nvme_io": false 00:15:40.402 }, 00:15:40.402 "memory_domains": [ 00:15:40.402 { 00:15:40.402 "dma_device_id": "system", 00:15:40.402 "dma_device_type": 1 00:15:40.402 }, 00:15:40.402 { 00:15:40.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.402 "dma_device_type": 2 00:15:40.402 } 00:15:40.402 ], 00:15:40.402 "driver_specific": { 00:15:40.402 "passthru": { 00:15:40.402 "name": "pt3", 00:15:40.402 "base_bdev_name": "malloc3" 00:15:40.402 } 00:15:40.402 } 00:15:40.402 }' 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:40.402 21:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:40.660 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:40.660 "name": "pt4", 00:15:40.661 "aliases": [ 00:15:40.661 "e4f993e0-5b43-2d52-b1cb-753ab793496e" 00:15:40.661 ], 00:15:40.661 "product_name": "passthru", 00:15:40.661 "block_size": 512, 00:15:40.661 "num_blocks": 65536, 00:15:40.661 "uuid": "e4f993e0-5b43-2d52-b1cb-753ab793496e", 00:15:40.661 "assigned_rate_limits": { 00:15:40.661 "rw_ios_per_sec": 0, 00:15:40.661 "rw_mbytes_per_sec": 0, 00:15:40.661 "r_mbytes_per_sec": 0, 00:15:40.661 "w_mbytes_per_sec": 0 00:15:40.661 }, 00:15:40.661 "claimed": true, 00:15:40.661 "claim_type": "exclusive_write", 00:15:40.661 "zoned": false, 00:15:40.661 "supported_io_types": { 00:15:40.661 "read": true, 00:15:40.661 "write": true, 00:15:40.661 "unmap": true, 00:15:40.661 "write_zeroes": true, 00:15:40.661 "flush": true, 00:15:40.661 "reset": true, 00:15:40.661 "compare": false, 00:15:40.661 "compare_and_write": false, 00:15:40.661 "abort": true, 00:15:40.661 "nvme_admin": false, 00:15:40.661 "nvme_io": false 00:15:40.661 }, 00:15:40.661 "memory_domains": [ 00:15:40.661 { 00:15:40.661 "dma_device_id": "system", 00:15:40.661 "dma_device_type": 1 00:15:40.661 }, 00:15:40.661 { 00:15:40.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.661 "dma_device_type": 2 00:15:40.661 } 00:15:40.661 ], 00:15:40.661 "driver_specific": { 00:15:40.661 "passthru": { 00:15:40.661 "name": "pt4", 00:15:40.661 "base_bdev_name": "malloc4" 00:15:40.661 } 00:15:40.661 } 00:15:40.661 }' 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:40.661 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:40.920 [2024-05-14 21:57:41.453016] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
f640fb5a-123c-11ef-8c90-4585f0cfab08 '!=' f640fb5a-123c-11ef-8c90-4585f0cfab08 ']' 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 60849 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 60849 ']' 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 60849 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 60849 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:15:40.920 killing process with pid 60849 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60849' 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 60849 00:15:40.920 [2024-05-14 21:57:41.483336] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.920 [2024-05-14 21:57:41.483363] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.920 [2024-05-14 21:57:41.483379] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.920 [2024-05-14 21:57:41.483384] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cfbc300 name raid_bdev1, state offline 00:15:40.920 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 60849 00:15:40.920 [2024-05-14 21:57:41.507084] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.179 21:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:15:41.179 00:15:41.179 real 0m13.937s 00:15:41.179 user 0m24.927s 00:15:41.179 sys 0m2.102s 00:15:41.179 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.179 21:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.179 ************************************ 00:15:41.179 END TEST raid_superblock_test 00:15:41.179 ************************************ 00:15:41.179 21:57:41 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:15:41.179 21:57:41 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:41.179 21:57:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:41.179 21:57:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.179 21:57:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.179 ************************************ 00:15:41.179 START TEST raid_state_function_test 00:15:41.179 ************************************ 00:15:41.179 21:57:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=61248 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 61248' 00:15:41.179 Process raid pid: 61248 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 61248 /var/tmp/spdk-raid.sock 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 61248 ']' 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:41.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:41.179 21:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.179 [2024-05-14 21:57:41.758542] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:15:41.179 [2024-05-14 21:57:41.758832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:41.746 EAL: TSC is not safe to use in SMP mode 00:15:41.746 EAL: TSC is not invariant 00:15:41.747 [2024-05-14 21:57:42.330839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.005 [2024-05-14 21:57:42.426566] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:42.005 [2024-05-14 21:57:42.428923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.005 [2024-05-14 21:57:42.429688] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.005 [2024-05-14 21:57:42.429708] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.264 21:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:42.264 21:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:15:42.264 21:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:42.522 [2024-05-14 21:57:42.991226] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.522 [2024-05-14 21:57:42.991295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.522 [2024-05-14 21:57:42.991316] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.522 [2024-05-14 21:57:42.991324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.522 [2024-05-14 21:57:42.991328] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.522 [2024-05-14 21:57:42.991335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.523 [2024-05-14 21:57:42.991339] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.523 [2024-05-14 21:57:42.991346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.523 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.781 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.781 "name": "Existed_Raid", 00:15:42.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.781 "strip_size_kb": 0, 00:15:42.781 "state": "configuring", 00:15:42.781 "raid_level": "raid1", 00:15:42.781 "superblock": false, 00:15:42.781 "num_base_bdevs": 4, 00:15:42.781 "num_base_bdevs_discovered": 0, 00:15:42.781 "num_base_bdevs_operational": 4, 00:15:42.781 "base_bdevs_list": [ 00:15:42.781 { 00:15:42.781 "name": "BaseBdev1", 00:15:42.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.781 "is_configured": false, 00:15:42.781 "data_offset": 0, 00:15:42.781 "data_size": 0 00:15:42.781 }, 00:15:42.781 { 00:15:42.781 "name": "BaseBdev2", 00:15:42.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.781 "is_configured": false, 00:15:42.781 "data_offset": 0, 00:15:42.781 "data_size": 0 00:15:42.781 }, 00:15:42.781 { 00:15:42.781 "name": "BaseBdev3", 00:15:42.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.781 "is_configured": false, 00:15:42.781 "data_offset": 0, 00:15:42.781 "data_size": 0 00:15:42.781 }, 00:15:42.781 { 00:15:42.781 "name": "BaseBdev4", 00:15:42.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.781 "is_configured": false, 00:15:42.781 "data_offset": 0, 00:15:42.781 "data_size": 0 00:15:42.781 } 00:15:42.781 ] 00:15:42.781 }' 00:15:42.781 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.781 21:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.346 21:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:43.346 [2024-05-14 21:57:43.867344] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.347 [2024-05-14 21:57:43.867369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be0d300 name Existed_Raid, state configuring 00:15:43.347 21:57:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:43.605 [2024-05-14 21:57:44.107385] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.605 [2024-05-14 21:57:44.107438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.605 [2024-05-14 21:57:44.107459] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.605 [2024-05-14 21:57:44.107467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.605 [2024-05-14 21:57:44.107470] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.605 [2024-05-14 21:57:44.107477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.605 [2024-05-14 21:57:44.107480] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:43.605 [2024-05-14 21:57:44.107487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:43.605 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:43.864 [2024-05-14 21:57:44.368631] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.864 BaseBdev1 00:15:43.864 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:43.864 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:43.864 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:43.864 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:43.864 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:43.864 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:43.864 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.123 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.382 [ 00:15:44.382 { 00:15:44.382 "name": "BaseBdev1", 00:15:44.382 "aliases": [ 00:15:44.382 "fe300f2b-123c-11ef-8c90-4585f0cfab08" 00:15:44.382 ], 00:15:44.382 "product_name": "Malloc disk", 00:15:44.382 "block_size": 512, 00:15:44.382 "num_blocks": 65536, 00:15:44.382 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:44.382 "assigned_rate_limits": { 00:15:44.382 "rw_ios_per_sec": 0, 00:15:44.382 "rw_mbytes_per_sec": 0, 00:15:44.382 "r_mbytes_per_sec": 0, 00:15:44.382 "w_mbytes_per_sec": 0 00:15:44.382 }, 00:15:44.382 "claimed": true, 00:15:44.382 "claim_type": "exclusive_write", 00:15:44.382 "zoned": false, 00:15:44.382 "supported_io_types": { 00:15:44.382 "read": true, 00:15:44.382 "write": true, 00:15:44.382 "unmap": true, 00:15:44.382 "write_zeroes": true, 00:15:44.382 "flush": true, 00:15:44.382 "reset": true, 00:15:44.382 
"compare": false, 00:15:44.382 "compare_and_write": false, 00:15:44.382 "abort": true, 00:15:44.382 "nvme_admin": false, 00:15:44.382 "nvme_io": false 00:15:44.382 }, 00:15:44.382 "memory_domains": [ 00:15:44.382 { 00:15:44.382 "dma_device_id": "system", 00:15:44.382 "dma_device_type": 1 00:15:44.382 }, 00:15:44.382 { 00:15:44.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.382 "dma_device_type": 2 00:15:44.382 } 00:15:44.382 ], 00:15:44.382 "driver_specific": {} 00:15:44.382 } 00:15:44.382 ] 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.382 21:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.641 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.641 "name": "Existed_Raid", 00:15:44.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.641 "strip_size_kb": 0, 00:15:44.641 "state": "configuring", 00:15:44.641 "raid_level": "raid1", 00:15:44.641 "superblock": false, 00:15:44.641 "num_base_bdevs": 4, 00:15:44.641 "num_base_bdevs_discovered": 1, 00:15:44.641 "num_base_bdevs_operational": 4, 00:15:44.641 "base_bdevs_list": [ 00:15:44.641 { 00:15:44.641 "name": "BaseBdev1", 00:15:44.641 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:44.641 "is_configured": true, 00:15:44.641 "data_offset": 0, 00:15:44.641 "data_size": 65536 00:15:44.641 }, 00:15:44.641 { 00:15:44.641 "name": "BaseBdev2", 00:15:44.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.641 "is_configured": false, 00:15:44.641 "data_offset": 0, 00:15:44.641 "data_size": 0 00:15:44.641 }, 00:15:44.641 { 00:15:44.641 "name": "BaseBdev3", 00:15:44.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.641 "is_configured": false, 00:15:44.641 "data_offset": 0, 00:15:44.641 "data_size": 0 00:15:44.641 }, 00:15:44.641 { 00:15:44.641 "name": "BaseBdev4", 00:15:44.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.641 "is_configured": false, 00:15:44.641 "data_offset": 0, 00:15:44.641 "data_size": 0 00:15:44.641 } 00:15:44.641 ] 00:15:44.641 }' 00:15:44.641 21:57:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.641 21:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.899 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:45.156 [2024-05-14 21:57:45.671599] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.156 [2024-05-14 21:57:45.671632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be0d300 name Existed_Raid, state configuring 00:15:45.156 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:45.414 [2024-05-14 21:57:45.895617] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.414 [2024-05-14 21:57:45.896442] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.414 [2024-05-14 21:57:45.896503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.414 [2024-05-14 21:57:45.896508] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.414 [2024-05-14 21:57:45.896516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.414 [2024-05-14 21:57:45.896519] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:45.414 [2024-05-14 21:57:45.896526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.414 21:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.672 21:57:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.672 "name": "Existed_Raid", 00:15:45.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.672 "strip_size_kb": 0, 00:15:45.672 "state": "configuring", 00:15:45.672 "raid_level": "raid1", 00:15:45.672 "superblock": false, 00:15:45.672 "num_base_bdevs": 4, 00:15:45.672 "num_base_bdevs_discovered": 1, 00:15:45.672 "num_base_bdevs_operational": 4, 00:15:45.672 "base_bdevs_list": [ 00:15:45.672 { 00:15:45.672 "name": "BaseBdev1", 00:15:45.672 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:45.672 "is_configured": true, 00:15:45.672 "data_offset": 0, 00:15:45.672 "data_size": 65536 00:15:45.672 }, 00:15:45.672 { 00:15:45.672 "name": "BaseBdev2", 00:15:45.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.672 "is_configured": false, 00:15:45.672 "data_offset": 0, 00:15:45.672 "data_size": 0 00:15:45.672 }, 00:15:45.672 { 00:15:45.672 "name": "BaseBdev3", 00:15:45.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.672 "is_configured": false, 00:15:45.672 "data_offset": 0, 00:15:45.672 "data_size": 0 00:15:45.672 }, 00:15:45.672 { 00:15:45.672 "name": "BaseBdev4", 00:15:45.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.672 "is_configured": false, 00:15:45.672 "data_offset": 0, 00:15:45.672 "data_size": 0 00:15:45.672 } 00:15:45.672 ] 00:15:45.672 }' 00:15:45.672 21:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.672 21:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.930 21:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.188 [2024-05-14 21:57:46.739811] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.188 BaseBdev2 00:15:46.188 21:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:15:46.188 21:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:46.188 21:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:46.188 21:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:46.188 21:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:46.188 21:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:46.188 21:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.447 21:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.705 [ 00:15:46.705 { 00:15:46.705 "name": "BaseBdev2", 00:15:46.705 "aliases": [ 00:15:46.705 "ff9a0645-123c-11ef-8c90-4585f0cfab08" 00:15:46.705 ], 00:15:46.705 "product_name": "Malloc disk", 00:15:46.705 "block_size": 512, 00:15:46.705 "num_blocks": 65536, 00:15:46.705 "uuid": "ff9a0645-123c-11ef-8c90-4585f0cfab08", 00:15:46.705 "assigned_rate_limits": { 00:15:46.705 "rw_ios_per_sec": 0, 00:15:46.705 "rw_mbytes_per_sec": 0, 00:15:46.705 "r_mbytes_per_sec": 0, 00:15:46.705 "w_mbytes_per_sec": 0 00:15:46.705 }, 00:15:46.705 "claimed": true, 00:15:46.705 
"claim_type": "exclusive_write", 00:15:46.705 "zoned": false, 00:15:46.705 "supported_io_types": { 00:15:46.705 "read": true, 00:15:46.705 "write": true, 00:15:46.705 "unmap": true, 00:15:46.705 "write_zeroes": true, 00:15:46.705 "flush": true, 00:15:46.705 "reset": true, 00:15:46.705 "compare": false, 00:15:46.705 "compare_and_write": false, 00:15:46.705 "abort": true, 00:15:46.705 "nvme_admin": false, 00:15:46.705 "nvme_io": false 00:15:46.705 }, 00:15:46.705 "memory_domains": [ 00:15:46.705 { 00:15:46.705 "dma_device_id": "system", 00:15:46.705 "dma_device_type": 1 00:15:46.705 }, 00:15:46.705 { 00:15:46.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.705 "dma_device_type": 2 00:15:46.705 } 00:15:46.705 ], 00:15:46.705 "driver_specific": {} 00:15:46.705 } 00:15:46.705 ] 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.705 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.963 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.963 "name": "Existed_Raid", 00:15:46.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.963 "strip_size_kb": 0, 00:15:46.963 "state": "configuring", 00:15:46.963 "raid_level": "raid1", 00:15:46.963 "superblock": false, 00:15:46.963 "num_base_bdevs": 4, 00:15:46.963 "num_base_bdevs_discovered": 2, 00:15:46.963 "num_base_bdevs_operational": 4, 00:15:46.963 "base_bdevs_list": [ 00:15:46.963 { 00:15:46.963 "name": "BaseBdev1", 00:15:46.963 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:46.963 "is_configured": true, 00:15:46.963 "data_offset": 0, 00:15:46.963 "data_size": 65536 00:15:46.963 }, 00:15:46.963 { 00:15:46.963 "name": "BaseBdev2", 00:15:46.963 "uuid": "ff9a0645-123c-11ef-8c90-4585f0cfab08", 00:15:46.963 "is_configured": true, 00:15:46.963 "data_offset": 0, 00:15:46.963 "data_size": 65536 00:15:46.963 }, 00:15:46.963 { 
00:15:46.963 "name": "BaseBdev3", 00:15:46.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.963 "is_configured": false, 00:15:46.963 "data_offset": 0, 00:15:46.963 "data_size": 0 00:15:46.963 }, 00:15:46.963 { 00:15:46.963 "name": "BaseBdev4", 00:15:46.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.963 "is_configured": false, 00:15:46.963 "data_offset": 0, 00:15:46.963 "data_size": 0 00:15:46.963 } 00:15:46.963 ] 00:15:46.963 }' 00:15:46.963 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.963 21:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.528 21:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:47.786 [2024-05-14 21:57:48.119809] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.786 BaseBdev3 00:15:47.786 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:15:47.786 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:47.786 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:47.786 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:47.786 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:47.786 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:47.786 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.044 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:48.301 [ 00:15:48.301 { 00:15:48.301 "name": "BaseBdev3", 00:15:48.301 "aliases": [ 00:15:48.301 "006c998e-123d-11ef-8c90-4585f0cfab08" 00:15:48.301 ], 00:15:48.301 "product_name": "Malloc disk", 00:15:48.301 "block_size": 512, 00:15:48.301 "num_blocks": 65536, 00:15:48.301 "uuid": "006c998e-123d-11ef-8c90-4585f0cfab08", 00:15:48.301 "assigned_rate_limits": { 00:15:48.301 "rw_ios_per_sec": 0, 00:15:48.301 "rw_mbytes_per_sec": 0, 00:15:48.301 "r_mbytes_per_sec": 0, 00:15:48.301 "w_mbytes_per_sec": 0 00:15:48.301 }, 00:15:48.301 "claimed": true, 00:15:48.301 "claim_type": "exclusive_write", 00:15:48.301 "zoned": false, 00:15:48.301 "supported_io_types": { 00:15:48.301 "read": true, 00:15:48.301 "write": true, 00:15:48.301 "unmap": true, 00:15:48.301 "write_zeroes": true, 00:15:48.301 "flush": true, 00:15:48.301 "reset": true, 00:15:48.301 "compare": false, 00:15:48.301 "compare_and_write": false, 00:15:48.301 "abort": true, 00:15:48.301 "nvme_admin": false, 00:15:48.301 "nvme_io": false 00:15:48.301 }, 00:15:48.301 "memory_domains": [ 00:15:48.301 { 00:15:48.301 "dma_device_id": "system", 00:15:48.301 "dma_device_type": 1 00:15:48.301 }, 00:15:48.301 { 00:15:48.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.301 "dma_device_type": 2 00:15:48.301 } 00:15:48.301 ], 00:15:48.301 "driver_specific": {} 00:15:48.301 } 00:15:48.301 ] 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:48.301 
21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.301 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.559 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.559 "name": "Existed_Raid", 00:15:48.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.559 "strip_size_kb": 0, 00:15:48.559 "state": "configuring", 00:15:48.559 "raid_level": "raid1", 00:15:48.559 "superblock": false, 00:15:48.559 "num_base_bdevs": 4, 00:15:48.559 "num_base_bdevs_discovered": 3, 00:15:48.559 "num_base_bdevs_operational": 4, 00:15:48.559 "base_bdevs_list": [ 00:15:48.559 { 00:15:48.559 "name": "BaseBdev1", 00:15:48.559 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:48.559 "is_configured": true, 00:15:48.559 "data_offset": 0, 00:15:48.559 "data_size": 65536 00:15:48.559 }, 00:15:48.559 { 00:15:48.559 "name": "BaseBdev2", 00:15:48.559 "uuid": "ff9a0645-123c-11ef-8c90-4585f0cfab08", 00:15:48.559 "is_configured": true, 00:15:48.559 "data_offset": 0, 00:15:48.559 "data_size": 65536 00:15:48.559 }, 00:15:48.559 { 00:15:48.559 "name": "BaseBdev3", 00:15:48.559 "uuid": "006c998e-123d-11ef-8c90-4585f0cfab08", 00:15:48.559 "is_configured": true, 00:15:48.559 "data_offset": 0, 00:15:48.559 "data_size": 65536 00:15:48.559 }, 00:15:48.559 { 00:15:48.559 "name": "BaseBdev4", 00:15:48.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.559 "is_configured": false, 00:15:48.559 "data_offset": 0, 00:15:48.559 "data_size": 0 00:15:48.559 } 00:15:48.559 ] 00:15:48.559 }' 00:15:48.559 21:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.559 21:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.816 21:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:49.073 [2024-05-14 21:57:49.543902] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.073 [2024-05-14 21:57:49.543934] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82be0d300 00:15:49.073 [2024-05-14 21:57:49.543939] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:49.073 [2024-05-14 21:57:49.543972] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be6bec0 00:15:49.073 [2024-05-14 21:57:49.544104] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82be0d300 00:15:49.073 [2024-05-14 21:57:49.544110] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82be0d300 00:15:49.074 [2024-05-14 21:57:49.544145] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.074 BaseBdev4 00:15:49.074 21:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:15:49.074 21:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:49.074 21:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:49.074 21:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:49.074 21:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:49.074 21:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:49.074 21:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.331 21:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:49.603 [ 00:15:49.603 { 00:15:49.603 "name": "BaseBdev4", 00:15:49.603 "aliases": [ 00:15:49.603 "0145e576-123d-11ef-8c90-4585f0cfab08" 00:15:49.603 ], 00:15:49.603 "product_name": "Malloc disk", 00:15:49.603 "block_size": 512, 00:15:49.603 "num_blocks": 65536, 00:15:49.603 "uuid": "0145e576-123d-11ef-8c90-4585f0cfab08", 00:15:49.603 "assigned_rate_limits": { 00:15:49.603 "rw_ios_per_sec": 0, 00:15:49.603 "rw_mbytes_per_sec": 0, 00:15:49.603 "r_mbytes_per_sec": 0, 00:15:49.603 "w_mbytes_per_sec": 0 00:15:49.603 }, 00:15:49.603 "claimed": true, 00:15:49.603 "claim_type": "exclusive_write", 00:15:49.603 "zoned": false, 00:15:49.603 "supported_io_types": { 00:15:49.603 "read": true, 00:15:49.603 "write": true, 00:15:49.603 "unmap": true, 00:15:49.603 "write_zeroes": true, 00:15:49.603 "flush": true, 00:15:49.603 "reset": true, 00:15:49.603 "compare": false, 00:15:49.603 "compare_and_write": false, 00:15:49.603 "abort": true, 00:15:49.603 "nvme_admin": false, 00:15:49.603 "nvme_io": false 00:15:49.603 }, 00:15:49.603 "memory_domains": [ 00:15:49.603 { 00:15:49.603 "dma_device_id": "system", 00:15:49.603 "dma_device_type": 1 00:15:49.603 }, 00:15:49.603 { 00:15:49.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.603 "dma_device_type": 2 00:15:49.603 } 00:15:49.603 ], 00:15:49.603 "driver_specific": {} 00:15:49.603 } 00:15:49.603 ] 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:49.603 21:57:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.603 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.890 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.890 "name": "Existed_Raid", 00:15:49.890 "uuid": "0145ec84-123d-11ef-8c90-4585f0cfab08", 00:15:49.890 "strip_size_kb": 0, 00:15:49.890 "state": "online", 00:15:49.890 "raid_level": "raid1", 00:15:49.890 "superblock": false, 00:15:49.890 "num_base_bdevs": 4, 00:15:49.890 "num_base_bdevs_discovered": 4, 00:15:49.890 "num_base_bdevs_operational": 4, 00:15:49.890 "base_bdevs_list": [ 00:15:49.890 { 00:15:49.890 "name": "BaseBdev1", 00:15:49.890 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:49.890 "is_configured": true, 00:15:49.890 "data_offset": 0, 00:15:49.890 "data_size": 65536 00:15:49.890 }, 00:15:49.890 { 00:15:49.890 "name": "BaseBdev2", 00:15:49.890 "uuid": "ff9a0645-123c-11ef-8c90-4585f0cfab08", 00:15:49.890 "is_configured": true, 00:15:49.890 "data_offset": 0, 00:15:49.890 "data_size": 65536 00:15:49.890 }, 00:15:49.890 { 00:15:49.890 "name": "BaseBdev3", 00:15:49.890 "uuid": "006c998e-123d-11ef-8c90-4585f0cfab08", 00:15:49.890 "is_configured": true, 00:15:49.890 "data_offset": 0, 00:15:49.890 "data_size": 65536 00:15:49.890 }, 00:15:49.890 { 00:15:49.890 "name": "BaseBdev4", 00:15:49.890 "uuid": "0145e576-123d-11ef-8c90-4585f0cfab08", 00:15:49.890 "is_configured": true, 00:15:49.890 "data_offset": 0, 00:15:49.890 "data_size": 65536 00:15:49.890 } 00:15:49.890 ] 00:15:49.890 }' 00:15:49.890 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.890 21:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.147 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.147 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:50.147 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:50.147 
21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:50.147 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:50.147 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:50.147 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:50.147 21:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:50.712 [2024-05-14 21:57:51.039848] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.712 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:50.712 "name": "Existed_Raid", 00:15:50.712 "aliases": [ 00:15:50.712 "0145ec84-123d-11ef-8c90-4585f0cfab08" 00:15:50.712 ], 00:15:50.712 "product_name": "Raid Volume", 00:15:50.712 "block_size": 512, 00:15:50.712 "num_blocks": 65536, 00:15:50.712 "uuid": "0145ec84-123d-11ef-8c90-4585f0cfab08", 00:15:50.712 "assigned_rate_limits": { 00:15:50.713 "rw_ios_per_sec": 0, 00:15:50.713 "rw_mbytes_per_sec": 0, 00:15:50.713 "r_mbytes_per_sec": 0, 00:15:50.713 "w_mbytes_per_sec": 0 00:15:50.713 }, 00:15:50.713 "claimed": false, 00:15:50.713 "zoned": false, 00:15:50.713 "supported_io_types": { 00:15:50.713 "read": true, 00:15:50.713 "write": true, 00:15:50.713 "unmap": false, 00:15:50.713 "write_zeroes": true, 00:15:50.713 "flush": false, 00:15:50.713 "reset": true, 00:15:50.713 "compare": false, 00:15:50.713 "compare_and_write": false, 00:15:50.713 "abort": false, 00:15:50.713 "nvme_admin": false, 00:15:50.713 "nvme_io": false 00:15:50.713 }, 00:15:50.713 "memory_domains": [ 00:15:50.713 { 00:15:50.713 "dma_device_id": "system", 00:15:50.713 "dma_device_type": 1 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.713 "dma_device_type": 2 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "dma_device_id": "system", 00:15:50.713 "dma_device_type": 1 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.713 "dma_device_type": 2 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "dma_device_id": "system", 00:15:50.713 "dma_device_type": 1 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.713 "dma_device_type": 2 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "dma_device_id": "system", 00:15:50.713 "dma_device_type": 1 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.713 "dma_device_type": 2 00:15:50.713 } 00:15:50.713 ], 00:15:50.713 "driver_specific": { 00:15:50.713 "raid": { 00:15:50.713 "uuid": "0145ec84-123d-11ef-8c90-4585f0cfab08", 00:15:50.713 "strip_size_kb": 0, 00:15:50.713 "state": "online", 00:15:50.713 "raid_level": "raid1", 00:15:50.713 "superblock": false, 00:15:50.713 "num_base_bdevs": 4, 00:15:50.713 "num_base_bdevs_discovered": 4, 00:15:50.713 "num_base_bdevs_operational": 4, 00:15:50.713 "base_bdevs_list": [ 00:15:50.713 { 00:15:50.713 "name": "BaseBdev1", 00:15:50.713 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:50.713 "is_configured": true, 00:15:50.713 "data_offset": 0, 00:15:50.713 "data_size": 65536 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "name": "BaseBdev2", 00:15:50.713 "uuid": "ff9a0645-123c-11ef-8c90-4585f0cfab08", 00:15:50.713 "is_configured": true, 00:15:50.713 "data_offset": 0, 00:15:50.713 
"data_size": 65536 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "name": "BaseBdev3", 00:15:50.713 "uuid": "006c998e-123d-11ef-8c90-4585f0cfab08", 00:15:50.713 "is_configured": true, 00:15:50.713 "data_offset": 0, 00:15:50.713 "data_size": 65536 00:15:50.713 }, 00:15:50.713 { 00:15:50.713 "name": "BaseBdev4", 00:15:50.713 "uuid": "0145e576-123d-11ef-8c90-4585f0cfab08", 00:15:50.713 "is_configured": true, 00:15:50.713 "data_offset": 0, 00:15:50.713 "data_size": 65536 00:15:50.713 } 00:15:50.713 ] 00:15:50.713 } 00:15:50.713 } 00:15:50.713 }' 00:15:50.713 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.713 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:15:50.713 BaseBdev2 00:15:50.713 BaseBdev3 00:15:50.713 BaseBdev4' 00:15:50.713 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:50.713 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:50.713 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:50.971 "name": "BaseBdev1", 00:15:50.971 "aliases": [ 00:15:50.971 "fe300f2b-123c-11ef-8c90-4585f0cfab08" 00:15:50.971 ], 00:15:50.971 "product_name": "Malloc disk", 00:15:50.971 "block_size": 512, 00:15:50.971 "num_blocks": 65536, 00:15:50.971 "uuid": "fe300f2b-123c-11ef-8c90-4585f0cfab08", 00:15:50.971 "assigned_rate_limits": { 00:15:50.971 "rw_ios_per_sec": 0, 00:15:50.971 "rw_mbytes_per_sec": 0, 00:15:50.971 "r_mbytes_per_sec": 0, 00:15:50.971 "w_mbytes_per_sec": 0 00:15:50.971 }, 00:15:50.971 "claimed": true, 00:15:50.971 "claim_type": "exclusive_write", 00:15:50.971 "zoned": false, 00:15:50.971 "supported_io_types": { 00:15:50.971 "read": true, 00:15:50.971 "write": true, 00:15:50.971 "unmap": true, 00:15:50.971 "write_zeroes": true, 00:15:50.971 "flush": true, 00:15:50.971 "reset": true, 00:15:50.971 "compare": false, 00:15:50.971 "compare_and_write": false, 00:15:50.971 "abort": true, 00:15:50.971 "nvme_admin": false, 00:15:50.971 "nvme_io": false 00:15:50.971 }, 00:15:50.971 "memory_domains": [ 00:15:50.971 { 00:15:50.971 "dma_device_id": "system", 00:15:50.971 "dma_device_type": 1 00:15:50.971 }, 00:15:50.971 { 00:15:50.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.971 "dma_device_type": 2 00:15:50.971 } 00:15:50.971 ], 00:15:50.971 "driver_specific": {} 00:15:50.971 }' 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:50.971 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:51.229 "name": "BaseBdev2", 00:15:51.229 "aliases": [ 00:15:51.229 "ff9a0645-123c-11ef-8c90-4585f0cfab08" 00:15:51.229 ], 00:15:51.229 "product_name": "Malloc disk", 00:15:51.229 "block_size": 512, 00:15:51.229 "num_blocks": 65536, 00:15:51.229 "uuid": "ff9a0645-123c-11ef-8c90-4585f0cfab08", 00:15:51.229 "assigned_rate_limits": { 00:15:51.229 "rw_ios_per_sec": 0, 00:15:51.229 "rw_mbytes_per_sec": 0, 00:15:51.229 "r_mbytes_per_sec": 0, 00:15:51.229 "w_mbytes_per_sec": 0 00:15:51.229 }, 00:15:51.229 "claimed": true, 00:15:51.229 "claim_type": "exclusive_write", 00:15:51.229 "zoned": false, 00:15:51.229 "supported_io_types": { 00:15:51.229 "read": true, 00:15:51.229 "write": true, 00:15:51.229 "unmap": true, 00:15:51.229 "write_zeroes": true, 00:15:51.229 "flush": true, 00:15:51.229 "reset": true, 00:15:51.229 "compare": false, 00:15:51.229 "compare_and_write": false, 00:15:51.229 "abort": true, 00:15:51.229 "nvme_admin": false, 00:15:51.229 "nvme_io": false 00:15:51.229 }, 00:15:51.229 "memory_domains": [ 00:15:51.229 { 00:15:51.229 "dma_device_id": "system", 00:15:51.229 "dma_device_type": 1 00:15:51.229 }, 00:15:51.229 { 00:15:51.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.229 "dma_device_type": 2 00:15:51.229 } 00:15:51.229 ], 00:15:51.229 "driver_specific": {} 00:15:51.229 }' 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:51.229 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:51.487 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:51.487 "name": "BaseBdev3", 00:15:51.487 "aliases": [ 00:15:51.487 "006c998e-123d-11ef-8c90-4585f0cfab08" 00:15:51.487 ], 00:15:51.487 "product_name": "Malloc disk", 00:15:51.487 "block_size": 512, 00:15:51.487 "num_blocks": 65536, 00:15:51.487 "uuid": "006c998e-123d-11ef-8c90-4585f0cfab08", 00:15:51.487 "assigned_rate_limits": { 00:15:51.487 "rw_ios_per_sec": 0, 00:15:51.487 "rw_mbytes_per_sec": 0, 00:15:51.487 "r_mbytes_per_sec": 0, 00:15:51.487 "w_mbytes_per_sec": 0 00:15:51.487 }, 00:15:51.487 "claimed": true, 00:15:51.487 "claim_type": "exclusive_write", 00:15:51.487 "zoned": false, 00:15:51.487 "supported_io_types": { 00:15:51.487 "read": true, 00:15:51.487 "write": true, 00:15:51.487 "unmap": true, 00:15:51.487 "write_zeroes": true, 00:15:51.487 "flush": true, 00:15:51.487 "reset": true, 00:15:51.487 "compare": false, 00:15:51.487 "compare_and_write": false, 00:15:51.487 "abort": true, 00:15:51.487 "nvme_admin": false, 00:15:51.487 "nvme_io": false 00:15:51.487 }, 00:15:51.487 "memory_domains": [ 00:15:51.487 { 00:15:51.487 "dma_device_id": "system", 00:15:51.487 "dma_device_type": 1 00:15:51.487 }, 00:15:51.487 { 00:15:51.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.488 "dma_device_type": 2 00:15:51.488 } 00:15:51.488 ], 00:15:51.488 "driver_specific": {} 00:15:51.488 }' 00:15:51.488 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:51.488 21:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:51.488 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:51.745 "name": "BaseBdev4", 00:15:51.745 "aliases": [ 00:15:51.745 
"0145e576-123d-11ef-8c90-4585f0cfab08" 00:15:51.745 ], 00:15:51.745 "product_name": "Malloc disk", 00:15:51.745 "block_size": 512, 00:15:51.745 "num_blocks": 65536, 00:15:51.745 "uuid": "0145e576-123d-11ef-8c90-4585f0cfab08", 00:15:51.745 "assigned_rate_limits": { 00:15:51.745 "rw_ios_per_sec": 0, 00:15:51.745 "rw_mbytes_per_sec": 0, 00:15:51.745 "r_mbytes_per_sec": 0, 00:15:51.745 "w_mbytes_per_sec": 0 00:15:51.745 }, 00:15:51.745 "claimed": true, 00:15:51.745 "claim_type": "exclusive_write", 00:15:51.745 "zoned": false, 00:15:51.745 "supported_io_types": { 00:15:51.745 "read": true, 00:15:51.745 "write": true, 00:15:51.745 "unmap": true, 00:15:51.745 "write_zeroes": true, 00:15:51.745 "flush": true, 00:15:51.745 "reset": true, 00:15:51.745 "compare": false, 00:15:51.745 "compare_and_write": false, 00:15:51.745 "abort": true, 00:15:51.745 "nvme_admin": false, 00:15:51.745 "nvme_io": false 00:15:51.745 }, 00:15:51.745 "memory_domains": [ 00:15:51.745 { 00:15:51.745 "dma_device_id": "system", 00:15:51.745 "dma_device_type": 1 00:15:51.745 }, 00:15:51.745 { 00:15:51.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.745 "dma_device_type": 2 00:15:51.745 } 00:15:51.745 ], 00:15:51.745 "driver_specific": {} 00:15:51.745 }' 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.745 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:52.003 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:52.003 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:52.003 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:52.261 [2024-05-14 21:57:52.611831] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.261 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.520 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.520 "name": "Existed_Raid", 00:15:52.520 "uuid": "0145ec84-123d-11ef-8c90-4585f0cfab08", 00:15:52.520 "strip_size_kb": 0, 00:15:52.520 "state": "online", 00:15:52.520 "raid_level": "raid1", 00:15:52.520 "superblock": false, 00:15:52.520 "num_base_bdevs": 4, 00:15:52.520 "num_base_bdevs_discovered": 3, 00:15:52.520 "num_base_bdevs_operational": 3, 00:15:52.520 "base_bdevs_list": [ 00:15:52.520 { 00:15:52.520 "name": null, 00:15:52.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.520 "is_configured": false, 00:15:52.520 "data_offset": 0, 00:15:52.520 "data_size": 65536 00:15:52.520 }, 00:15:52.520 { 00:15:52.520 "name": "BaseBdev2", 00:15:52.520 "uuid": "ff9a0645-123c-11ef-8c90-4585f0cfab08", 00:15:52.520 "is_configured": true, 00:15:52.520 "data_offset": 0, 00:15:52.520 "data_size": 65536 00:15:52.520 }, 00:15:52.520 { 00:15:52.520 "name": "BaseBdev3", 00:15:52.520 "uuid": "006c998e-123d-11ef-8c90-4585f0cfab08", 00:15:52.520 "is_configured": true, 00:15:52.520 "data_offset": 0, 00:15:52.520 "data_size": 65536 00:15:52.520 }, 00:15:52.520 { 00:15:52.520 "name": "BaseBdev4", 00:15:52.520 "uuid": "0145e576-123d-11ef-8c90-4585f0cfab08", 00:15:52.520 "is_configured": true, 00:15:52.520 "data_offset": 0, 00:15:52.520 "data_size": 65536 00:15:52.520 } 00:15:52.520 ] 00:15:52.520 }' 00:15:52.520 21:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.520 21:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.778 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:52.778 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:52.778 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.778 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:53.035 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:53.035 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:53.035 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:53.292 [2024-05-14 21:57:53.749660] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.292 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.292 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.292 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.292 21:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:53.550 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:53.550 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.550 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:53.806 [2024-05-14 21:57:54.263797] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:53.806 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.806 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.806 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.806 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:54.064 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:54.064 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.064 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:54.322 [2024-05-14 21:57:54.782033] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:54.322 [2024-05-14 21:57:54.782067] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.322 [2024-05-14 21:57:54.788386] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.322 [2024-05-14 21:57:54.788431] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.322 [2024-05-14 21:57:54.788437] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be0d300 name Existed_Raid, state offline 00:15:54.322 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:54.322 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:54.322 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.322 21:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:15:54.579 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
raid_bdev= 00:15:54.579 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:15:54.579 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:15:54.579 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:15:54.579 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:54.579 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:54.835 BaseBdev2 00:15:54.835 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:15:54.835 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:54.835 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:54.835 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:54.835 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:54.835 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:54.835 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.092 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.349 [ 00:15:55.349 { 00:15:55.349 "name": "BaseBdev2", 00:15:55.349 "aliases": [ 00:15:55.349 "04b9c862-123d-11ef-8c90-4585f0cfab08" 00:15:55.349 ], 00:15:55.349 "product_name": "Malloc disk", 00:15:55.349 "block_size": 512, 00:15:55.349 "num_blocks": 65536, 00:15:55.349 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:15:55.349 "assigned_rate_limits": { 00:15:55.349 "rw_ios_per_sec": 0, 00:15:55.349 "rw_mbytes_per_sec": 0, 00:15:55.349 "r_mbytes_per_sec": 0, 00:15:55.349 "w_mbytes_per_sec": 0 00:15:55.349 }, 00:15:55.349 "claimed": false, 00:15:55.349 "zoned": false, 00:15:55.349 "supported_io_types": { 00:15:55.349 "read": true, 00:15:55.349 "write": true, 00:15:55.349 "unmap": true, 00:15:55.349 "write_zeroes": true, 00:15:55.349 "flush": true, 00:15:55.349 "reset": true, 00:15:55.349 "compare": false, 00:15:55.349 "compare_and_write": false, 00:15:55.349 "abort": true, 00:15:55.349 "nvme_admin": false, 00:15:55.349 "nvme_io": false 00:15:55.349 }, 00:15:55.349 "memory_domains": [ 00:15:55.349 { 00:15:55.349 "dma_device_id": "system", 00:15:55.349 "dma_device_type": 1 00:15:55.349 }, 00:15:55.349 { 00:15:55.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.349 "dma_device_type": 2 00:15:55.349 } 00:15:55.349 ], 00:15:55.349 "driver_specific": {} 00:15:55.349 } 00:15:55.349 ] 00:15:55.349 21:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:55.349 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:55.349 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:55.349 21:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:15:55.606 BaseBdev3 00:15:55.606 21:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:15:55.606 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:55.606 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:55.606 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:55.606 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:55.606 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:55.606 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.863 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.120 [ 00:15:56.120 { 00:15:56.120 "name": "BaseBdev3", 00:15:56.120 "aliases": [ 00:15:56.120 "05297c2a-123d-11ef-8c90-4585f0cfab08" 00:15:56.120 ], 00:15:56.120 "product_name": "Malloc disk", 00:15:56.120 "block_size": 512, 00:15:56.120 "num_blocks": 65536, 00:15:56.120 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:15:56.120 "assigned_rate_limits": { 00:15:56.120 "rw_ios_per_sec": 0, 00:15:56.120 "rw_mbytes_per_sec": 0, 00:15:56.120 "r_mbytes_per_sec": 0, 00:15:56.120 "w_mbytes_per_sec": 0 00:15:56.120 }, 00:15:56.120 "claimed": false, 00:15:56.120 "zoned": false, 00:15:56.120 "supported_io_types": { 00:15:56.120 "read": true, 00:15:56.120 "write": true, 00:15:56.120 "unmap": true, 00:15:56.120 "write_zeroes": true, 00:15:56.120 "flush": true, 00:15:56.120 "reset": true, 00:15:56.120 "compare": false, 00:15:56.120 "compare_and_write": false, 00:15:56.120 "abort": true, 00:15:56.120 "nvme_admin": false, 00:15:56.120 "nvme_io": false 00:15:56.120 }, 00:15:56.120 "memory_domains": [ 00:15:56.120 { 00:15:56.120 "dma_device_id": "system", 00:15:56.120 "dma_device_type": 1 00:15:56.120 }, 00:15:56.120 { 00:15:56.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.120 "dma_device_type": 2 00:15:56.120 } 00:15:56.120 ], 00:15:56.120 "driver_specific": {} 00:15:56.120 } 00:15:56.120 ] 00:15:56.120 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:56.120 21:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:56.120 21:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:56.120 21:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.376 BaseBdev4 00:15:56.376 21:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:15:56.376 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:15:56.376 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:56.376 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:56.376 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:56.377 21:57:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:56.377 21:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.637 21:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.894 [ 00:15:56.894 { 00:15:56.894 "name": "BaseBdev4", 00:15:56.894 "aliases": [ 00:15:56.894 "059fe506-123d-11ef-8c90-4585f0cfab08" 00:15:56.894 ], 00:15:56.894 "product_name": "Malloc disk", 00:15:56.894 "block_size": 512, 00:15:56.894 "num_blocks": 65536, 00:15:56.894 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:15:56.894 "assigned_rate_limits": { 00:15:56.894 "rw_ios_per_sec": 0, 00:15:56.894 "rw_mbytes_per_sec": 0, 00:15:56.894 "r_mbytes_per_sec": 0, 00:15:56.894 "w_mbytes_per_sec": 0 00:15:56.894 }, 00:15:56.894 "claimed": false, 00:15:56.894 "zoned": false, 00:15:56.894 "supported_io_types": { 00:15:56.894 "read": true, 00:15:56.894 "write": true, 00:15:56.894 "unmap": true, 00:15:56.894 "write_zeroes": true, 00:15:56.894 "flush": true, 00:15:56.894 "reset": true, 00:15:56.894 "compare": false, 00:15:56.894 "compare_and_write": false, 00:15:56.894 "abort": true, 00:15:56.894 "nvme_admin": false, 00:15:56.894 "nvme_io": false 00:15:56.894 }, 00:15:56.894 "memory_domains": [ 00:15:56.894 { 00:15:56.894 "dma_device_id": "system", 00:15:56.894 "dma_device_type": 1 00:15:56.894 }, 00:15:56.894 { 00:15:56.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.894 "dma_device_type": 2 00:15:56.894 } 00:15:56.894 ], 00:15:56.894 "driver_specific": {} 00:15:56.894 } 00:15:56.894 ] 00:15:56.894 21:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:56.894 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:56.894 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:56.894 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:57.152 [2024-05-14 21:57:57.644479] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.152 [2024-05-14 21:57:57.644537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.152 [2024-05-14 21:57:57.644546] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.152 [2024-05-14 21:57:57.645118] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.152 [2024-05-14 21:57:57.645138] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # 
local strip_size=0 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.152 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.409 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.409 "name": "Existed_Raid", 00:15:57.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.409 "strip_size_kb": 0, 00:15:57.409 "state": "configuring", 00:15:57.409 "raid_level": "raid1", 00:15:57.409 "superblock": false, 00:15:57.409 "num_base_bdevs": 4, 00:15:57.409 "num_base_bdevs_discovered": 3, 00:15:57.409 "num_base_bdevs_operational": 4, 00:15:57.410 "base_bdevs_list": [ 00:15:57.410 { 00:15:57.410 "name": "BaseBdev1", 00:15:57.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.410 "is_configured": false, 00:15:57.410 "data_offset": 0, 00:15:57.410 "data_size": 0 00:15:57.410 }, 00:15:57.410 { 00:15:57.410 "name": "BaseBdev2", 00:15:57.410 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:15:57.410 "is_configured": true, 00:15:57.410 "data_offset": 0, 00:15:57.410 "data_size": 65536 00:15:57.410 }, 00:15:57.410 { 00:15:57.410 "name": "BaseBdev3", 00:15:57.410 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:15:57.410 "is_configured": true, 00:15:57.410 "data_offset": 0, 00:15:57.410 "data_size": 65536 00:15:57.410 }, 00:15:57.410 { 00:15:57.410 "name": "BaseBdev4", 00:15:57.410 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:15:57.410 "is_configured": true, 00:15:57.410 "data_offset": 0, 00:15:57.410 "data_size": 65536 00:15:57.410 } 00:15:57.410 ] 00:15:57.410 }' 00:15:57.410 21:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.410 21:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:57.974 [2024-05-14 21:57:58.528485] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
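At this point the raid1 volume has been created over 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' even though BaseBdev1 does not exist yet, so the expected result is a volume held in the "configuring" state with 3 of 4 base bdevs discovered. A hedged sketch of that check, using only RPC calls that appear in the trace; the comparisons are a simplified stand-in for the suite's verify_raid_bdev_state helper:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Create the raid1 volume; a missing base bdev leaves it in the "configuring" state.
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Pull the raid entry and compare the fields the test asserts on.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

    [[ $state == configuring ]] || echo "unexpected state: $state"
    [[ $discovered -eq 3 ]]     || echo "unexpected discovered count: $discovered"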
00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.974 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.538 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.538 "name": "Existed_Raid", 00:15:58.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.538 "strip_size_kb": 0, 00:15:58.538 "state": "configuring", 00:15:58.538 "raid_level": "raid1", 00:15:58.538 "superblock": false, 00:15:58.538 "num_base_bdevs": 4, 00:15:58.538 "num_base_bdevs_discovered": 2, 00:15:58.538 "num_base_bdevs_operational": 4, 00:15:58.538 "base_bdevs_list": [ 00:15:58.538 { 00:15:58.538 "name": "BaseBdev1", 00:15:58.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.538 "is_configured": false, 00:15:58.538 "data_offset": 0, 00:15:58.538 "data_size": 0 00:15:58.538 }, 00:15:58.538 { 00:15:58.538 "name": null, 00:15:58.538 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:15:58.538 "is_configured": false, 00:15:58.538 "data_offset": 0, 00:15:58.538 "data_size": 65536 00:15:58.538 }, 00:15:58.538 { 00:15:58.538 "name": "BaseBdev3", 00:15:58.538 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:15:58.538 "is_configured": true, 00:15:58.538 "data_offset": 0, 00:15:58.538 "data_size": 65536 00:15:58.538 }, 00:15:58.538 { 00:15:58.538 "name": "BaseBdev4", 00:15:58.538 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:15:58.538 "is_configured": true, 00:15:58.538 "data_offset": 0, 00:15:58.538 "data_size": 65536 00:15:58.538 } 00:15:58.538 ] 00:15:58.538 }' 00:15:58.538 21:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.538 21:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.811 21:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.811 21:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:59.080 [2024-05-14 21:57:59.644733] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.080 BaseBdev1 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:59.080 21:57:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:59.080 21:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.643 21:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:59.643 [ 00:15:59.643 { 00:15:59.643 "name": "BaseBdev1", 00:15:59.643 "aliases": [ 00:15:59.643 "074b2993-123d-11ef-8c90-4585f0cfab08" 00:15:59.643 ], 00:15:59.643 "product_name": "Malloc disk", 00:15:59.643 "block_size": 512, 00:15:59.643 "num_blocks": 65536, 00:15:59.643 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:15:59.643 "assigned_rate_limits": { 00:15:59.643 "rw_ios_per_sec": 0, 00:15:59.643 "rw_mbytes_per_sec": 0, 00:15:59.643 "r_mbytes_per_sec": 0, 00:15:59.643 "w_mbytes_per_sec": 0 00:15:59.643 }, 00:15:59.643 "claimed": true, 00:15:59.643 "claim_type": "exclusive_write", 00:15:59.643 "zoned": false, 00:15:59.643 "supported_io_types": { 00:15:59.643 "read": true, 00:15:59.643 "write": true, 00:15:59.643 "unmap": true, 00:15:59.643 "write_zeroes": true, 00:15:59.643 "flush": true, 00:15:59.643 "reset": true, 00:15:59.643 "compare": false, 00:15:59.643 "compare_and_write": false, 00:15:59.643 "abort": true, 00:15:59.643 "nvme_admin": false, 00:15:59.643 "nvme_io": false 00:15:59.643 }, 00:15:59.643 "memory_domains": [ 00:15:59.643 { 00:15:59.643 "dma_device_id": "system", 00:15:59.643 "dma_device_type": 1 00:15:59.643 }, 00:15:59.643 { 00:15:59.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.643 "dma_device_type": 2 00:15:59.643 } 00:15:59.643 ], 00:15:59.643 "driver_specific": {} 00:15:59.643 } 00:15:59.643 ] 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.900 21:58:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.900 "name": "Existed_Raid", 00:15:59.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.900 "strip_size_kb": 0, 00:15:59.900 "state": "configuring", 00:15:59.900 "raid_level": "raid1", 00:15:59.900 "superblock": false, 00:15:59.900 "num_base_bdevs": 4, 00:15:59.900 "num_base_bdevs_discovered": 3, 00:15:59.900 "num_base_bdevs_operational": 4, 00:15:59.900 "base_bdevs_list": [ 00:15:59.900 { 00:15:59.900 "name": "BaseBdev1", 00:15:59.900 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:15:59.900 "is_configured": true, 00:15:59.900 "data_offset": 0, 00:15:59.900 "data_size": 65536 00:15:59.900 }, 00:15:59.900 { 00:15:59.900 "name": null, 00:15:59.900 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:15:59.900 "is_configured": false, 00:15:59.900 "data_offset": 0, 00:15:59.900 "data_size": 65536 00:15:59.900 }, 00:15:59.900 { 00:15:59.900 "name": "BaseBdev3", 00:15:59.900 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:15:59.900 "is_configured": true, 00:15:59.900 "data_offset": 0, 00:15:59.900 "data_size": 65536 00:15:59.900 }, 00:15:59.900 { 00:15:59.900 "name": "BaseBdev4", 00:15:59.900 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:15:59.900 "is_configured": true, 00:15:59.900 "data_offset": 0, 00:15:59.900 "data_size": 65536 00:15:59.900 } 00:15:59.900 ] 00:15:59.900 }' 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.900 21:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.465 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.465 21:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:00.722 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:00.722 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:00.979 [2024-05-14 21:58:01.396646] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.979 21:58:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.979 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.237 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.237 "name": "Existed_Raid", 00:16:01.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.237 "strip_size_kb": 0, 00:16:01.237 "state": "configuring", 00:16:01.237 "raid_level": "raid1", 00:16:01.237 "superblock": false, 00:16:01.237 "num_base_bdevs": 4, 00:16:01.237 "num_base_bdevs_discovered": 2, 00:16:01.237 "num_base_bdevs_operational": 4, 00:16:01.237 "base_bdevs_list": [ 00:16:01.237 { 00:16:01.237 "name": "BaseBdev1", 00:16:01.237 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:01.237 "is_configured": true, 00:16:01.237 "data_offset": 0, 00:16:01.237 "data_size": 65536 00:16:01.237 }, 00:16:01.237 { 00:16:01.237 "name": null, 00:16:01.237 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:16:01.237 "is_configured": false, 00:16:01.237 "data_offset": 0, 00:16:01.237 "data_size": 65536 00:16:01.237 }, 00:16:01.237 { 00:16:01.237 "name": null, 00:16:01.237 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:16:01.237 "is_configured": false, 00:16:01.237 "data_offset": 0, 00:16:01.237 "data_size": 65536 00:16:01.237 }, 00:16:01.237 { 00:16:01.237 "name": "BaseBdev4", 00:16:01.237 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:16:01.237 "is_configured": true, 00:16:01.237 "data_offset": 0, 00:16:01.237 "data_size": 65536 00:16:01.237 } 00:16:01.237 ] 00:16:01.237 }' 00:16:01.237 21:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.237 21:58:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.494 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.494 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.751 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:16:01.751 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:02.010 [2024-05-14 21:58:02.584738] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:02.268 21:58:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.268 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.525 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.525 "name": "Existed_Raid", 00:16:02.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.525 "strip_size_kb": 0, 00:16:02.525 "state": "configuring", 00:16:02.525 "raid_level": "raid1", 00:16:02.525 "superblock": false, 00:16:02.525 "num_base_bdevs": 4, 00:16:02.525 "num_base_bdevs_discovered": 3, 00:16:02.525 "num_base_bdevs_operational": 4, 00:16:02.525 "base_bdevs_list": [ 00:16:02.525 { 00:16:02.525 "name": "BaseBdev1", 00:16:02.525 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:02.525 "is_configured": true, 00:16:02.525 "data_offset": 0, 00:16:02.525 "data_size": 65536 00:16:02.525 }, 00:16:02.525 { 00:16:02.525 "name": null, 00:16:02.525 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:16:02.525 "is_configured": false, 00:16:02.525 "data_offset": 0, 00:16:02.525 "data_size": 65536 00:16:02.525 }, 00:16:02.525 { 00:16:02.525 "name": "BaseBdev3", 00:16:02.525 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:16:02.525 "is_configured": true, 00:16:02.525 "data_offset": 0, 00:16:02.525 "data_size": 65536 00:16:02.525 }, 00:16:02.525 { 00:16:02.525 "name": "BaseBdev4", 00:16:02.525 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:16:02.525 "is_configured": true, 00:16:02.525 "data_offset": 0, 00:16:02.525 "data_size": 65536 00:16:02.525 } 00:16:02.525 ] 00:16:02.525 }' 00:16:02.525 21:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.525 21:58:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.798 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.798 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:03.072 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:16:03.072 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:03.330 [2024-05-14 21:58:03.716757] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid1 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.330 21:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.588 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.588 "name": "Existed_Raid", 00:16:03.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.588 "strip_size_kb": 0, 00:16:03.588 "state": "configuring", 00:16:03.588 "raid_level": "raid1", 00:16:03.588 "superblock": false, 00:16:03.588 "num_base_bdevs": 4, 00:16:03.588 "num_base_bdevs_discovered": 2, 00:16:03.588 "num_base_bdevs_operational": 4, 00:16:03.588 "base_bdevs_list": [ 00:16:03.588 { 00:16:03.588 "name": null, 00:16:03.588 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:03.588 "is_configured": false, 00:16:03.588 "data_offset": 0, 00:16:03.588 "data_size": 65536 00:16:03.588 }, 00:16:03.588 { 00:16:03.588 "name": null, 00:16:03.588 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:16:03.588 "is_configured": false, 00:16:03.588 "data_offset": 0, 00:16:03.588 "data_size": 65536 00:16:03.588 }, 00:16:03.588 { 00:16:03.588 "name": "BaseBdev3", 00:16:03.588 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:16:03.588 "is_configured": true, 00:16:03.588 "data_offset": 0, 00:16:03.588 "data_size": 65536 00:16:03.588 }, 00:16:03.588 { 00:16:03.588 "name": "BaseBdev4", 00:16:03.588 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:16:03.588 "is_configured": true, 00:16:03.588 "data_offset": 0, 00:16:03.588 "data_size": 65536 00:16:03.588 } 00:16:03.588 ] 00:16:03.588 }' 00:16:03.588 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.588 21:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.846 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.846 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:04.104 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:16:04.104 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:04.362 [2024-05-14 21:58:04.826667] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:04.362 21:58:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.362 21:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.619 21:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.619 "name": "Existed_Raid", 00:16:04.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.619 "strip_size_kb": 0, 00:16:04.619 "state": "configuring", 00:16:04.619 "raid_level": "raid1", 00:16:04.619 "superblock": false, 00:16:04.619 "num_base_bdevs": 4, 00:16:04.619 "num_base_bdevs_discovered": 3, 00:16:04.619 "num_base_bdevs_operational": 4, 00:16:04.619 "base_bdevs_list": [ 00:16:04.619 { 00:16:04.619 "name": null, 00:16:04.619 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:04.619 "is_configured": false, 00:16:04.619 "data_offset": 0, 00:16:04.619 "data_size": 65536 00:16:04.619 }, 00:16:04.619 { 00:16:04.619 "name": "BaseBdev2", 00:16:04.619 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:16:04.619 "is_configured": true, 00:16:04.619 "data_offset": 0, 00:16:04.619 "data_size": 65536 00:16:04.619 }, 00:16:04.619 { 00:16:04.619 "name": "BaseBdev3", 00:16:04.619 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:16:04.619 "is_configured": true, 00:16:04.619 "data_offset": 0, 00:16:04.619 "data_size": 65536 00:16:04.619 }, 00:16:04.619 { 00:16:04.619 "name": "BaseBdev4", 00:16:04.619 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:16:04.619 "is_configured": true, 00:16:04.619 "data_offset": 0, 00:16:04.619 "data_size": 65536 00:16:04.619 } 00:16:04.619 ] 00:16:04.619 }' 00:16:04.619 21:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.619 21:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.184 21:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.184 21:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:05.441 21:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:16:05.441 21:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
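Next the trace reads back the UUID recorded for the unconfigured slot and recreates a malloc bdev under that same UUID; the raid module then claims it automatically and Existed_Raid transitions to "online". A hedged sketch of that recover-by-UUID step (RPC calls and sizes as in the trace; variable names are illustrative):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # The raid bdev remembers the UUID of the removed slot even while it is unconfigured.
    uuid=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')

    # Recreating a bdev with that UUID lets the raid module re-claim it as a base bdev.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"

    # The array should now report all base bdevs discovered and the state "online".
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0].state'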
00:16:05.441 21:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:05.700 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 074b2993-123d-11ef-8c90-4585f0cfab08 00:16:05.700 [2024-05-14 21:58:06.274890] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:05.700 [2024-05-14 21:58:06.274942] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82be0d300 00:16:05.700 [2024-05-14 21:58:06.274947] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:05.700 [2024-05-14 21:58:06.274982] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be6be20 00:16:05.700 [2024-05-14 21:58:06.275068] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82be0d300 00:16:05.700 [2024-05-14 21:58:06.275075] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82be0d300 00:16:05.700 [2024-05-14 21:58:06.275114] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.700 NewBaseBdev 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.958 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:06.217 [ 00:16:06.217 { 00:16:06.217 "name": "NewBaseBdev", 00:16:06.217 "aliases": [ 00:16:06.217 "074b2993-123d-11ef-8c90-4585f0cfab08" 00:16:06.217 ], 00:16:06.217 "product_name": "Malloc disk", 00:16:06.217 "block_size": 512, 00:16:06.217 "num_blocks": 65536, 00:16:06.217 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:06.217 "assigned_rate_limits": { 00:16:06.217 "rw_ios_per_sec": 0, 00:16:06.217 "rw_mbytes_per_sec": 0, 00:16:06.217 "r_mbytes_per_sec": 0, 00:16:06.217 "w_mbytes_per_sec": 0 00:16:06.217 }, 00:16:06.217 "claimed": true, 00:16:06.217 "claim_type": "exclusive_write", 00:16:06.217 "zoned": false, 00:16:06.217 "supported_io_types": { 00:16:06.217 "read": true, 00:16:06.217 "write": true, 00:16:06.217 "unmap": true, 00:16:06.217 "write_zeroes": true, 00:16:06.217 "flush": true, 00:16:06.217 "reset": true, 00:16:06.217 "compare": false, 00:16:06.217 "compare_and_write": false, 00:16:06.217 "abort": true, 00:16:06.217 "nvme_admin": false, 00:16:06.217 "nvme_io": false 00:16:06.217 }, 00:16:06.217 "memory_domains": [ 00:16:06.217 { 00:16:06.217 "dma_device_id": "system", 00:16:06.217 "dma_device_type": 1 00:16:06.217 }, 00:16:06.217 { 00:16:06.217 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.217 "dma_device_type": 2 00:16:06.217 } 00:16:06.217 ], 00:16:06.217 "driver_specific": {} 00:16:06.217 } 00:16:06.217 ] 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.217 21:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.475 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.475 "name": "Existed_Raid", 00:16:06.475 "uuid": "0b3edf69-123d-11ef-8c90-4585f0cfab08", 00:16:06.475 "strip_size_kb": 0, 00:16:06.475 "state": "online", 00:16:06.475 "raid_level": "raid1", 00:16:06.475 "superblock": false, 00:16:06.475 "num_base_bdevs": 4, 00:16:06.475 "num_base_bdevs_discovered": 4, 00:16:06.475 "num_base_bdevs_operational": 4, 00:16:06.475 "base_bdevs_list": [ 00:16:06.475 { 00:16:06.475 "name": "NewBaseBdev", 00:16:06.475 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:06.475 "is_configured": true, 00:16:06.475 "data_offset": 0, 00:16:06.475 "data_size": 65536 00:16:06.475 }, 00:16:06.475 { 00:16:06.475 "name": "BaseBdev2", 00:16:06.475 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:16:06.475 "is_configured": true, 00:16:06.475 "data_offset": 0, 00:16:06.475 "data_size": 65536 00:16:06.475 }, 00:16:06.475 { 00:16:06.475 "name": "BaseBdev3", 00:16:06.475 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:16:06.475 "is_configured": true, 00:16:06.475 "data_offset": 0, 00:16:06.475 "data_size": 65536 00:16:06.475 }, 00:16:06.475 { 00:16:06.475 "name": "BaseBdev4", 00:16:06.475 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:16:06.475 "is_configured": true, 00:16:06.475 "data_offset": 0, 00:16:06.475 "data_size": 65536 00:16:06.475 } 00:16:06.475 ] 00:16:06.475 }' 00:16:06.475 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.475 21:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.049 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.049 21:58:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:07.049 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:07.049 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:07.049 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:07.049 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:07.049 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:07.049 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:07.049 [2024-05-14 21:58:07.618864] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.322 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:07.322 "name": "Existed_Raid", 00:16:07.322 "aliases": [ 00:16:07.322 "0b3edf69-123d-11ef-8c90-4585f0cfab08" 00:16:07.322 ], 00:16:07.322 "product_name": "Raid Volume", 00:16:07.322 "block_size": 512, 00:16:07.322 "num_blocks": 65536, 00:16:07.322 "uuid": "0b3edf69-123d-11ef-8c90-4585f0cfab08", 00:16:07.322 "assigned_rate_limits": { 00:16:07.322 "rw_ios_per_sec": 0, 00:16:07.322 "rw_mbytes_per_sec": 0, 00:16:07.322 "r_mbytes_per_sec": 0, 00:16:07.322 "w_mbytes_per_sec": 0 00:16:07.322 }, 00:16:07.322 "claimed": false, 00:16:07.322 "zoned": false, 00:16:07.322 "supported_io_types": { 00:16:07.322 "read": true, 00:16:07.322 "write": true, 00:16:07.322 "unmap": false, 00:16:07.322 "write_zeroes": true, 00:16:07.322 "flush": false, 00:16:07.322 "reset": true, 00:16:07.322 "compare": false, 00:16:07.322 "compare_and_write": false, 00:16:07.322 "abort": false, 00:16:07.322 "nvme_admin": false, 00:16:07.322 "nvme_io": false 00:16:07.322 }, 00:16:07.322 "memory_domains": [ 00:16:07.322 { 00:16:07.322 "dma_device_id": "system", 00:16:07.322 "dma_device_type": 1 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.322 "dma_device_type": 2 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "dma_device_id": "system", 00:16:07.322 "dma_device_type": 1 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.322 "dma_device_type": 2 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "dma_device_id": "system", 00:16:07.322 "dma_device_type": 1 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.322 "dma_device_type": 2 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "dma_device_id": "system", 00:16:07.322 "dma_device_type": 1 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.322 "dma_device_type": 2 00:16:07.322 } 00:16:07.322 ], 00:16:07.322 "driver_specific": { 00:16:07.322 "raid": { 00:16:07.322 "uuid": "0b3edf69-123d-11ef-8c90-4585f0cfab08", 00:16:07.322 "strip_size_kb": 0, 00:16:07.322 "state": "online", 00:16:07.322 "raid_level": "raid1", 00:16:07.322 "superblock": false, 00:16:07.322 "num_base_bdevs": 4, 00:16:07.322 "num_base_bdevs_discovered": 4, 00:16:07.322 "num_base_bdevs_operational": 4, 00:16:07.322 "base_bdevs_list": [ 00:16:07.322 { 00:16:07.322 "name": "NewBaseBdev", 00:16:07.322 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:07.322 "is_configured": true, 00:16:07.322 "data_offset": 0, 00:16:07.322 "data_size": 
65536 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "name": "BaseBdev2", 00:16:07.322 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:16:07.322 "is_configured": true, 00:16:07.322 "data_offset": 0, 00:16:07.322 "data_size": 65536 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "name": "BaseBdev3", 00:16:07.322 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:16:07.322 "is_configured": true, 00:16:07.322 "data_offset": 0, 00:16:07.322 "data_size": 65536 00:16:07.322 }, 00:16:07.322 { 00:16:07.322 "name": "BaseBdev4", 00:16:07.322 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:16:07.322 "is_configured": true, 00:16:07.322 "data_offset": 0, 00:16:07.322 "data_size": 65536 00:16:07.322 } 00:16:07.322 ] 00:16:07.322 } 00:16:07.322 } 00:16:07.322 }' 00:16:07.322 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.322 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:16:07.322 BaseBdev2 00:16:07.322 BaseBdev3 00:16:07.322 BaseBdev4' 00:16:07.322 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:07.322 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:07.322 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:07.580 "name": "NewBaseBdev", 00:16:07.580 "aliases": [ 00:16:07.580 "074b2993-123d-11ef-8c90-4585f0cfab08" 00:16:07.580 ], 00:16:07.580 "product_name": "Malloc disk", 00:16:07.580 "block_size": 512, 00:16:07.580 "num_blocks": 65536, 00:16:07.580 "uuid": "074b2993-123d-11ef-8c90-4585f0cfab08", 00:16:07.580 "assigned_rate_limits": { 00:16:07.580 "rw_ios_per_sec": 0, 00:16:07.580 "rw_mbytes_per_sec": 0, 00:16:07.580 "r_mbytes_per_sec": 0, 00:16:07.580 "w_mbytes_per_sec": 0 00:16:07.580 }, 00:16:07.580 "claimed": true, 00:16:07.580 "claim_type": "exclusive_write", 00:16:07.580 "zoned": false, 00:16:07.580 "supported_io_types": { 00:16:07.580 "read": true, 00:16:07.580 "write": true, 00:16:07.580 "unmap": true, 00:16:07.580 "write_zeroes": true, 00:16:07.580 "flush": true, 00:16:07.580 "reset": true, 00:16:07.580 "compare": false, 00:16:07.580 "compare_and_write": false, 00:16:07.580 "abort": true, 00:16:07.580 "nvme_admin": false, 00:16:07.580 "nvme_io": false 00:16:07.580 }, 00:16:07.580 "memory_domains": [ 00:16:07.580 { 00:16:07.580 "dma_device_id": "system", 00:16:07.580 "dma_device_type": 1 00:16:07.580 }, 00:16:07.580 { 00:16:07.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.580 "dma_device_type": 2 00:16:07.580 } 00:16:07.580 ], 00:16:07.580 "driver_specific": {} 00:16:07.580 }' 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
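The jq checks above compare the raid volume's block_size, md_size, md_interleave and dif_type against each configured base bdev in turn, expecting them to match (fields absent from both dumps compare as null on both sides). A condensed sketch of that comparison loop; it restates the traced jq filters and is not the verify_raid_bdev_properties helper itself:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    raid_info=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
    names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")

    for name in $names; do
        base_info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        for field in .block_size .md_size .md_interleave .dif_type; do
            raid_val=$(jq "$field" <<< "$raid_info")
            base_val=$(jq "$field" <<< "$base_info")
            [[ $raid_val == "$base_val" ]] || echo "$name: $field differs ($raid_val vs $base_val)"
        done
    done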
00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:07.580 21:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:07.838 "name": "BaseBdev2", 00:16:07.838 "aliases": [ 00:16:07.838 "04b9c862-123d-11ef-8c90-4585f0cfab08" 00:16:07.838 ], 00:16:07.838 "product_name": "Malloc disk", 00:16:07.838 "block_size": 512, 00:16:07.838 "num_blocks": 65536, 00:16:07.838 "uuid": "04b9c862-123d-11ef-8c90-4585f0cfab08", 00:16:07.838 "assigned_rate_limits": { 00:16:07.838 "rw_ios_per_sec": 0, 00:16:07.838 "rw_mbytes_per_sec": 0, 00:16:07.838 "r_mbytes_per_sec": 0, 00:16:07.838 "w_mbytes_per_sec": 0 00:16:07.838 }, 00:16:07.838 "claimed": true, 00:16:07.838 "claim_type": "exclusive_write", 00:16:07.838 "zoned": false, 00:16:07.838 "supported_io_types": { 00:16:07.838 "read": true, 00:16:07.838 "write": true, 00:16:07.838 "unmap": true, 00:16:07.838 "write_zeroes": true, 00:16:07.838 "flush": true, 00:16:07.838 "reset": true, 00:16:07.838 "compare": false, 00:16:07.838 "compare_and_write": false, 00:16:07.838 "abort": true, 00:16:07.838 "nvme_admin": false, 00:16:07.838 "nvme_io": false 00:16:07.838 }, 00:16:07.838 "memory_domains": [ 00:16:07.838 { 00:16:07.838 "dma_device_id": "system", 00:16:07.838 "dma_device_type": 1 00:16:07.838 }, 00:16:07.838 { 00:16:07.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.838 "dma_device_type": 2 00:16:07.838 } 00:16:07.838 ], 00:16:07.838 "driver_specific": {} 00:16:07.838 }' 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:07.838 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:08.096 "name": "BaseBdev3", 00:16:08.096 "aliases": [ 00:16:08.096 "05297c2a-123d-11ef-8c90-4585f0cfab08" 00:16:08.096 ], 00:16:08.096 "product_name": "Malloc disk", 00:16:08.096 "block_size": 512, 00:16:08.096 "num_blocks": 65536, 00:16:08.096 "uuid": "05297c2a-123d-11ef-8c90-4585f0cfab08", 00:16:08.096 "assigned_rate_limits": { 00:16:08.096 "rw_ios_per_sec": 0, 00:16:08.096 "rw_mbytes_per_sec": 0, 00:16:08.096 "r_mbytes_per_sec": 0, 00:16:08.096 "w_mbytes_per_sec": 0 00:16:08.096 }, 00:16:08.096 "claimed": true, 00:16:08.096 "claim_type": "exclusive_write", 00:16:08.096 "zoned": false, 00:16:08.096 "supported_io_types": { 00:16:08.096 "read": true, 00:16:08.096 "write": true, 00:16:08.096 "unmap": true, 00:16:08.096 "write_zeroes": true, 00:16:08.096 "flush": true, 00:16:08.096 "reset": true, 00:16:08.096 "compare": false, 00:16:08.096 "compare_and_write": false, 00:16:08.096 "abort": true, 00:16:08.096 "nvme_admin": false, 00:16:08.096 "nvme_io": false 00:16:08.096 }, 00:16:08.096 "memory_domains": [ 00:16:08.096 { 00:16:08.096 "dma_device_id": "system", 00:16:08.096 "dma_device_type": 1 00:16:08.096 }, 00:16:08.096 { 00:16:08.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.096 "dma_device_type": 2 00:16:08.096 } 00:16:08.096 ], 00:16:08.096 "driver_specific": {} 00:16:08.096 }' 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:08.096 21:58:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:08.354 "name": "BaseBdev4", 00:16:08.354 "aliases": [ 00:16:08.354 "059fe506-123d-11ef-8c90-4585f0cfab08" 00:16:08.354 ], 00:16:08.354 "product_name": "Malloc disk", 00:16:08.354 "block_size": 512, 00:16:08.354 "num_blocks": 65536, 00:16:08.354 "uuid": "059fe506-123d-11ef-8c90-4585f0cfab08", 00:16:08.354 "assigned_rate_limits": { 00:16:08.354 "rw_ios_per_sec": 0, 00:16:08.354 "rw_mbytes_per_sec": 0, 00:16:08.354 "r_mbytes_per_sec": 0, 00:16:08.354 "w_mbytes_per_sec": 0 00:16:08.354 }, 00:16:08.354 "claimed": true, 00:16:08.354 "claim_type": "exclusive_write", 00:16:08.354 "zoned": false, 00:16:08.354 "supported_io_types": { 00:16:08.354 "read": true, 00:16:08.354 "write": true, 00:16:08.354 "unmap": true, 00:16:08.354 "write_zeroes": true, 00:16:08.354 "flush": true, 00:16:08.354 "reset": true, 00:16:08.354 "compare": false, 00:16:08.354 "compare_and_write": false, 00:16:08.354 "abort": true, 00:16:08.354 "nvme_admin": false, 00:16:08.354 "nvme_io": false 00:16:08.354 }, 00:16:08.354 "memory_domains": [ 00:16:08.354 { 00:16:08.354 "dma_device_id": "system", 00:16:08.354 "dma_device_type": 1 00:16:08.354 }, 00:16:08.354 { 00:16:08.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.354 "dma_device_type": 2 00:16:08.354 } 00:16:08.354 ], 00:16:08.354 "driver_specific": {} 00:16:08.354 }' 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:08.354 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:08.612 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.612 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:08.612 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:08.612 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:08.612 21:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.870 [2024-05-14 21:58:09.234823] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.870 [2024-05-14 21:58:09.234855] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.870 [2024-05-14 21:58:09.234881] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.870 [2024-05-14 21:58:09.234952] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.870 [2024-05-14 21:58:09.234957] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be0d300 name Existed_Raid, state 
offline 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 61248 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 61248 ']' 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 61248 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 61248 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:16:08.870 killing process with pid 61248 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61248' 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 61248 00:16:08.870 [2024-05-14 21:58:09.261640] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.870 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 61248 00:16:08.870 [2024-05-14 21:58:09.288051] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:16:09.128 00:16:09.128 real 0m27.738s 00:16:09.128 user 0m50.941s 00:16:09.128 sys 0m3.666s 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.128 ************************************ 00:16:09.128 END TEST raid_state_function_test 00:16:09.128 ************************************ 00:16:09.128 21:58:09 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:09.128 21:58:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:09.128 21:58:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:09.128 21:58:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.128 ************************************ 00:16:09.128 START TEST raid_state_function_test_sb 00:16:09.128 ************************************ 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:09.128 21:58:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:16:09.128 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=62071 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 62071' 00:16:09.129 Process raid pid: 62071 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 62071 /var/tmp/spdk-raid.sock 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 62071 ']' 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:09.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
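A condensed shell sketch of the RPC sequence the trace that follows drives (illustrative only: the socket path, RPC names, and bdev names are taken verbatim from this trace, but the ordering is simplified and the actual test also deletes and recreates Existed_Raid between several of these steps; this is not captured log output):

#!/bin/sh
# Sketch of the raid_state_function_test_sb RPC flow (condensed; assumptions noted above).
RPC='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

# Creating the raid1 bdev (-s = with superblock) before its members exist leaves it "configuring".
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Back each member with a 32 MB, 512-byte-block malloc disk; each is claimed by the raid as it appears.
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "$name"
    $RPC bdev_wait_for_examine
done

# Once all four members are discovered the array transitions from "configuring" to "online".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'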
00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:09.129 21:58:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.129 [2024-05-14 21:58:09.543057] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:16:09.129 [2024-05-14 21:58:09.543249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:09.694 EAL: TSC is not safe to use in SMP mode 00:16:09.694 EAL: TSC is not invariant 00:16:09.694 [2024-05-14 21:58:10.097821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.694 [2024-05-14 21:58:10.185166] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:09.694 [2024-05-14 21:58:10.187521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.694 [2024-05-14 21:58:10.188299] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.694 [2024-05-14 21:58:10.188316] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.260 21:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:10.260 21:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:16:10.260 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:10.518 [2024-05-14 21:58:10.880772] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:10.518 [2024-05-14 21:58:10.880864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:10.518 [2024-05-14 21:58:10.880876] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.518 [2024-05-14 21:58:10.880889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.518 [2024-05-14 21:58:10.880893] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.518 [2024-05-14 21:58:10.880901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.518 [2024-05-14 21:58:10.880904] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:10.518 [2024-05-14 21:58:10.880911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.518 21:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.776 21:58:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.776 "name": "Existed_Raid", 00:16:10.776 "uuid": "0dfda9d7-123d-11ef-8c90-4585f0cfab08", 00:16:10.776 "strip_size_kb": 0, 00:16:10.776 "state": "configuring", 00:16:10.776 "raid_level": "raid1", 00:16:10.776 "superblock": true, 00:16:10.776 "num_base_bdevs": 4, 00:16:10.776 "num_base_bdevs_discovered": 0, 00:16:10.776 "num_base_bdevs_operational": 4, 00:16:10.776 "base_bdevs_list": [ 00:16:10.776 { 00:16:10.776 "name": "BaseBdev1", 00:16:10.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.776 "is_configured": false, 00:16:10.776 "data_offset": 0, 00:16:10.776 "data_size": 0 00:16:10.776 }, 00:16:10.776 { 00:16:10.776 "name": "BaseBdev2", 00:16:10.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.776 "is_configured": false, 00:16:10.776 "data_offset": 0, 00:16:10.776 "data_size": 0 00:16:10.776 }, 00:16:10.776 { 00:16:10.776 "name": "BaseBdev3", 00:16:10.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.776 "is_configured": false, 00:16:10.776 "data_offset": 0, 00:16:10.776 "data_size": 0 00:16:10.776 }, 00:16:10.776 { 00:16:10.776 "name": "BaseBdev4", 00:16:10.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.776 "is_configured": false, 00:16:10.776 "data_offset": 0, 00:16:10.776 "data_size": 0 00:16:10.776 } 00:16:10.776 ] 00:16:10.776 }' 00:16:10.776 21:58:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.776 21:58:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.033 21:58:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:11.293 [2024-05-14 21:58:11.780747] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:11.293 [2024-05-14 21:58:11.780797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b238300 name Existed_Raid, state configuring 00:16:11.293 21:58:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:11.556 [2024-05-14 21:58:12.012761] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.556 [2024-05-14 21:58:12.012836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.556 [2024-05-14 21:58:12.012842] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:16:11.556 [2024-05-14 21:58:12.012851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.556 [2024-05-14 21:58:12.012855] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:11.556 [2024-05-14 21:58:12.012864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:11.556 [2024-05-14 21:58:12.012868] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:11.556 [2024-05-14 21:58:12.012875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:11.556 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:11.814 [2024-05-14 21:58:12.297882] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.814 BaseBdev1 00:16:11.814 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:16:11.814 21:58:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:11.814 21:58:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:11.814 21:58:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:11.814 21:58:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:11.814 21:58:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:11.814 21:58:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.072 21:58:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:12.330 [ 00:16:12.330 { 00:16:12.330 "name": "BaseBdev1", 00:16:12.330 "aliases": [ 00:16:12.330 "0ed5bb60-123d-11ef-8c90-4585f0cfab08" 00:16:12.330 ], 00:16:12.330 "product_name": "Malloc disk", 00:16:12.330 "block_size": 512, 00:16:12.330 "num_blocks": 65536, 00:16:12.330 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:12.330 "assigned_rate_limits": { 00:16:12.330 "rw_ios_per_sec": 0, 00:16:12.330 "rw_mbytes_per_sec": 0, 00:16:12.330 "r_mbytes_per_sec": 0, 00:16:12.330 "w_mbytes_per_sec": 0 00:16:12.330 }, 00:16:12.330 "claimed": true, 00:16:12.330 "claim_type": "exclusive_write", 00:16:12.330 "zoned": false, 00:16:12.330 "supported_io_types": { 00:16:12.330 "read": true, 00:16:12.330 "write": true, 00:16:12.330 "unmap": true, 00:16:12.330 "write_zeroes": true, 00:16:12.330 "flush": true, 00:16:12.330 "reset": true, 00:16:12.330 "compare": false, 00:16:12.330 "compare_and_write": false, 00:16:12.330 "abort": true, 00:16:12.330 "nvme_admin": false, 00:16:12.330 "nvme_io": false 00:16:12.330 }, 00:16:12.330 "memory_domains": [ 00:16:12.330 { 00:16:12.330 "dma_device_id": "system", 00:16:12.330 "dma_device_type": 1 00:16:12.330 }, 00:16:12.330 { 00:16:12.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.330 "dma_device_type": 2 00:16:12.330 } 00:16:12.330 ], 00:16:12.330 "driver_specific": {} 00:16:12.330 } 00:16:12.330 ] 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # return 0 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.330 21:58:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.588 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.588 "name": "Existed_Raid", 00:16:12.588 "uuid": "0eaa6449-123d-11ef-8c90-4585f0cfab08", 00:16:12.588 "strip_size_kb": 0, 00:16:12.588 "state": "configuring", 00:16:12.588 "raid_level": "raid1", 00:16:12.588 "superblock": true, 00:16:12.588 "num_base_bdevs": 4, 00:16:12.588 "num_base_bdevs_discovered": 1, 00:16:12.588 "num_base_bdevs_operational": 4, 00:16:12.588 "base_bdevs_list": [ 00:16:12.588 { 00:16:12.588 "name": "BaseBdev1", 00:16:12.588 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:12.588 "is_configured": true, 00:16:12.588 "data_offset": 2048, 00:16:12.588 "data_size": 63488 00:16:12.588 }, 00:16:12.588 { 00:16:12.588 "name": "BaseBdev2", 00:16:12.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.588 "is_configured": false, 00:16:12.588 "data_offset": 0, 00:16:12.588 "data_size": 0 00:16:12.588 }, 00:16:12.588 { 00:16:12.588 "name": "BaseBdev3", 00:16:12.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.588 "is_configured": false, 00:16:12.588 "data_offset": 0, 00:16:12.588 "data_size": 0 00:16:12.588 }, 00:16:12.588 { 00:16:12.588 "name": "BaseBdev4", 00:16:12.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.588 "is_configured": false, 00:16:12.588 "data_offset": 0, 00:16:12.588 "data_size": 0 00:16:12.589 } 00:16:12.589 ] 00:16:12.589 }' 00:16:12.589 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.589 21:58:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.847 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:13.105 [2024-05-14 21:58:13.668846] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.105 [2024-05-14 21:58:13.668884] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b238300 name Existed_Raid, state configuring 00:16:13.105 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:13.363 [2024-05-14 21:58:13.916883] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.363 [2024-05-14 21:58:13.917701] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.363 [2024-05-14 21:58:13.917748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.363 [2024-05-14 21:58:13.917754] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:13.363 [2024-05-14 21:58:13.917763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.363 [2024-05-14 21:58:13.917767] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:13.363 [2024-05-14 21:58:13.917774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.363 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.364 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.364 21:58:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.622 21:58:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.622 "name": "Existed_Raid", 00:16:13.622 "uuid": "0fccefc2-123d-11ef-8c90-4585f0cfab08", 00:16:13.622 "strip_size_kb": 0, 00:16:13.622 "state": "configuring", 00:16:13.622 "raid_level": "raid1", 00:16:13.622 "superblock": true, 00:16:13.622 "num_base_bdevs": 4, 00:16:13.622 "num_base_bdevs_discovered": 1, 00:16:13.622 "num_base_bdevs_operational": 4, 00:16:13.622 "base_bdevs_list": [ 00:16:13.622 { 00:16:13.623 "name": 
"BaseBdev1", 00:16:13.623 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:13.623 "is_configured": true, 00:16:13.623 "data_offset": 2048, 00:16:13.623 "data_size": 63488 00:16:13.623 }, 00:16:13.623 { 00:16:13.623 "name": "BaseBdev2", 00:16:13.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.623 "is_configured": false, 00:16:13.623 "data_offset": 0, 00:16:13.623 "data_size": 0 00:16:13.623 }, 00:16:13.623 { 00:16:13.623 "name": "BaseBdev3", 00:16:13.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.623 "is_configured": false, 00:16:13.623 "data_offset": 0, 00:16:13.623 "data_size": 0 00:16:13.623 }, 00:16:13.623 { 00:16:13.623 "name": "BaseBdev4", 00:16:13.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.623 "is_configured": false, 00:16:13.623 "data_offset": 0, 00:16:13.623 "data_size": 0 00:16:13.623 } 00:16:13.623 ] 00:16:13.623 }' 00:16:13.623 21:58:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.623 21:58:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.202 21:58:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:14.202 [2024-05-14 21:58:14.781021] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.202 BaseBdev2 00:16:14.460 21:58:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:16:14.460 21:58:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:14.460 21:58:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:14.460 21:58:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:14.460 21:58:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:14.460 21:58:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:14.460 21:58:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.718 21:58:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:14.976 [ 00:16:14.976 { 00:16:14.976 "name": "BaseBdev2", 00:16:14.976 "aliases": [ 00:16:14.976 "1050c625-123d-11ef-8c90-4585f0cfab08" 00:16:14.976 ], 00:16:14.976 "product_name": "Malloc disk", 00:16:14.976 "block_size": 512, 00:16:14.976 "num_blocks": 65536, 00:16:14.976 "uuid": "1050c625-123d-11ef-8c90-4585f0cfab08", 00:16:14.976 "assigned_rate_limits": { 00:16:14.976 "rw_ios_per_sec": 0, 00:16:14.976 "rw_mbytes_per_sec": 0, 00:16:14.976 "r_mbytes_per_sec": 0, 00:16:14.976 "w_mbytes_per_sec": 0 00:16:14.976 }, 00:16:14.976 "claimed": true, 00:16:14.976 "claim_type": "exclusive_write", 00:16:14.976 "zoned": false, 00:16:14.976 "supported_io_types": { 00:16:14.976 "read": true, 00:16:14.976 "write": true, 00:16:14.976 "unmap": true, 00:16:14.976 "write_zeroes": true, 00:16:14.976 "flush": true, 00:16:14.976 "reset": true, 00:16:14.976 "compare": false, 00:16:14.976 "compare_and_write": false, 00:16:14.976 "abort": true, 00:16:14.976 "nvme_admin": false, 00:16:14.976 "nvme_io": false 
00:16:14.976 }, 00:16:14.976 "memory_domains": [ 00:16:14.976 { 00:16:14.976 "dma_device_id": "system", 00:16:14.976 "dma_device_type": 1 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.976 "dma_device_type": 2 00:16:14.976 } 00:16:14.976 ], 00:16:14.976 "driver_specific": {} 00:16:14.976 } 00:16:14.976 ] 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.976 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.234 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.234 "name": "Existed_Raid", 00:16:15.234 "uuid": "0fccefc2-123d-11ef-8c90-4585f0cfab08", 00:16:15.234 "strip_size_kb": 0, 00:16:15.234 "state": "configuring", 00:16:15.234 "raid_level": "raid1", 00:16:15.234 "superblock": true, 00:16:15.234 "num_base_bdevs": 4, 00:16:15.234 "num_base_bdevs_discovered": 2, 00:16:15.234 "num_base_bdevs_operational": 4, 00:16:15.234 "base_bdevs_list": [ 00:16:15.234 { 00:16:15.234 "name": "BaseBdev1", 00:16:15.234 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:15.234 "is_configured": true, 00:16:15.234 "data_offset": 2048, 00:16:15.234 "data_size": 63488 00:16:15.234 }, 00:16:15.234 { 00:16:15.234 "name": "BaseBdev2", 00:16:15.234 "uuid": "1050c625-123d-11ef-8c90-4585f0cfab08", 00:16:15.234 "is_configured": true, 00:16:15.234 "data_offset": 2048, 00:16:15.234 "data_size": 63488 00:16:15.234 }, 00:16:15.234 { 00:16:15.234 "name": "BaseBdev3", 00:16:15.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.234 "is_configured": false, 00:16:15.234 "data_offset": 0, 00:16:15.234 "data_size": 0 00:16:15.234 }, 00:16:15.234 { 00:16:15.234 "name": "BaseBdev4", 00:16:15.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.234 "is_configured": false, 00:16:15.234 "data_offset": 0, 
00:16:15.234 "data_size": 0 00:16:15.234 } 00:16:15.234 ] 00:16:15.234 }' 00:16:15.234 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.234 21:58:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.504 21:58:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:15.762 [2024-05-14 21:58:16.101029] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.762 BaseBdev3 00:16:15.762 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:16:15.762 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:15.762 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:15.762 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:15.762 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:15.762 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:15.762 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.020 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:16.278 [ 00:16:16.278 { 00:16:16.278 "name": "BaseBdev3", 00:16:16.278 "aliases": [ 00:16:16.278 "111a317b-123d-11ef-8c90-4585f0cfab08" 00:16:16.278 ], 00:16:16.278 "product_name": "Malloc disk", 00:16:16.278 "block_size": 512, 00:16:16.278 "num_blocks": 65536, 00:16:16.278 "uuid": "111a317b-123d-11ef-8c90-4585f0cfab08", 00:16:16.278 "assigned_rate_limits": { 00:16:16.278 "rw_ios_per_sec": 0, 00:16:16.278 "rw_mbytes_per_sec": 0, 00:16:16.278 "r_mbytes_per_sec": 0, 00:16:16.278 "w_mbytes_per_sec": 0 00:16:16.278 }, 00:16:16.278 "claimed": true, 00:16:16.278 "claim_type": "exclusive_write", 00:16:16.278 "zoned": false, 00:16:16.278 "supported_io_types": { 00:16:16.278 "read": true, 00:16:16.278 "write": true, 00:16:16.278 "unmap": true, 00:16:16.278 "write_zeroes": true, 00:16:16.278 "flush": true, 00:16:16.278 "reset": true, 00:16:16.278 "compare": false, 00:16:16.278 "compare_and_write": false, 00:16:16.278 "abort": true, 00:16:16.278 "nvme_admin": false, 00:16:16.278 "nvme_io": false 00:16:16.278 }, 00:16:16.278 "memory_domains": [ 00:16:16.278 { 00:16:16.278 "dma_device_id": "system", 00:16:16.278 "dma_device_type": 1 00:16:16.278 }, 00:16:16.278 { 00:16:16.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.278 "dma_device_type": 2 00:16:16.278 } 00:16:16.278 ], 00:16:16.278 "driver_specific": {} 00:16:16.278 } 00:16:16.278 ] 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.278 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.536 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.536 "name": "Existed_Raid", 00:16:16.536 "uuid": "0fccefc2-123d-11ef-8c90-4585f0cfab08", 00:16:16.536 "strip_size_kb": 0, 00:16:16.536 "state": "configuring", 00:16:16.536 "raid_level": "raid1", 00:16:16.536 "superblock": true, 00:16:16.536 "num_base_bdevs": 4, 00:16:16.536 "num_base_bdevs_discovered": 3, 00:16:16.536 "num_base_bdevs_operational": 4, 00:16:16.536 "base_bdevs_list": [ 00:16:16.536 { 00:16:16.536 "name": "BaseBdev1", 00:16:16.536 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:16.536 "is_configured": true, 00:16:16.536 "data_offset": 2048, 00:16:16.536 "data_size": 63488 00:16:16.536 }, 00:16:16.536 { 00:16:16.536 "name": "BaseBdev2", 00:16:16.536 "uuid": "1050c625-123d-11ef-8c90-4585f0cfab08", 00:16:16.536 "is_configured": true, 00:16:16.536 "data_offset": 2048, 00:16:16.536 "data_size": 63488 00:16:16.536 }, 00:16:16.536 { 00:16:16.536 "name": "BaseBdev3", 00:16:16.536 "uuid": "111a317b-123d-11ef-8c90-4585f0cfab08", 00:16:16.536 "is_configured": true, 00:16:16.536 "data_offset": 2048, 00:16:16.536 "data_size": 63488 00:16:16.536 }, 00:16:16.536 { 00:16:16.536 "name": "BaseBdev4", 00:16:16.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.536 "is_configured": false, 00:16:16.536 "data_offset": 0, 00:16:16.536 "data_size": 0 00:16:16.536 } 00:16:16.536 ] 00:16:16.536 }' 00:16:16.536 21:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.536 21:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.794 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:17.052 [2024-05-14 21:58:17.477062] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:17.052 [2024-05-14 21:58:17.477144] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b238300 00:16:17.052 [2024-05-14 21:58:17.477151] bdev_raid.c:1697:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:16:17.052 [2024-05-14 21:58:17.477173] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b296ec0 00:16:17.052 [2024-05-14 21:58:17.477246] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b238300 00:16:17.052 [2024-05-14 21:58:17.477252] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b238300 00:16:17.052 [2024-05-14 21:58:17.477284] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.052 BaseBdev4 00:16:17.052 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:16:17.052 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:17.052 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:17.052 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:17.052 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:17.052 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:17.052 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.310 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:17.567 [ 00:16:17.567 { 00:16:17.567 "name": "BaseBdev4", 00:16:17.567 "aliases": [ 00:16:17.567 "11ec28ca-123d-11ef-8c90-4585f0cfab08" 00:16:17.567 ], 00:16:17.567 "product_name": "Malloc disk", 00:16:17.567 "block_size": 512, 00:16:17.567 "num_blocks": 65536, 00:16:17.567 "uuid": "11ec28ca-123d-11ef-8c90-4585f0cfab08", 00:16:17.567 "assigned_rate_limits": { 00:16:17.567 "rw_ios_per_sec": 0, 00:16:17.567 "rw_mbytes_per_sec": 0, 00:16:17.567 "r_mbytes_per_sec": 0, 00:16:17.567 "w_mbytes_per_sec": 0 00:16:17.567 }, 00:16:17.567 "claimed": true, 00:16:17.567 "claim_type": "exclusive_write", 00:16:17.567 "zoned": false, 00:16:17.567 "supported_io_types": { 00:16:17.567 "read": true, 00:16:17.567 "write": true, 00:16:17.567 "unmap": true, 00:16:17.567 "write_zeroes": true, 00:16:17.567 "flush": true, 00:16:17.567 "reset": true, 00:16:17.567 "compare": false, 00:16:17.567 "compare_and_write": false, 00:16:17.567 "abort": true, 00:16:17.567 "nvme_admin": false, 00:16:17.567 "nvme_io": false 00:16:17.567 }, 00:16:17.567 "memory_domains": [ 00:16:17.567 { 00:16:17.567 "dma_device_id": "system", 00:16:17.567 "dma_device_type": 1 00:16:17.567 }, 00:16:17.567 { 00:16:17.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.567 "dma_device_type": 2 00:16:17.567 } 00:16:17.567 ], 00:16:17.567 "driver_specific": {} 00:16:17.567 } 00:16:17.567 ] 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.567 21:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.826 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.826 "name": "Existed_Raid", 00:16:17.826 "uuid": "0fccefc2-123d-11ef-8c90-4585f0cfab08", 00:16:17.826 "strip_size_kb": 0, 00:16:17.826 "state": "online", 00:16:17.826 "raid_level": "raid1", 00:16:17.826 "superblock": true, 00:16:17.826 "num_base_bdevs": 4, 00:16:17.826 "num_base_bdevs_discovered": 4, 00:16:17.826 "num_base_bdevs_operational": 4, 00:16:17.826 "base_bdevs_list": [ 00:16:17.826 { 00:16:17.826 "name": "BaseBdev1", 00:16:17.826 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:17.826 "is_configured": true, 00:16:17.826 "data_offset": 2048, 00:16:17.826 "data_size": 63488 00:16:17.826 }, 00:16:17.826 { 00:16:17.826 "name": "BaseBdev2", 00:16:17.826 "uuid": "1050c625-123d-11ef-8c90-4585f0cfab08", 00:16:17.826 "is_configured": true, 00:16:17.826 "data_offset": 2048, 00:16:17.826 "data_size": 63488 00:16:17.826 }, 00:16:17.826 { 00:16:17.826 "name": "BaseBdev3", 00:16:17.826 "uuid": "111a317b-123d-11ef-8c90-4585f0cfab08", 00:16:17.826 "is_configured": true, 00:16:17.826 "data_offset": 2048, 00:16:17.826 "data_size": 63488 00:16:17.826 }, 00:16:17.826 { 00:16:17.826 "name": "BaseBdev4", 00:16:17.826 "uuid": "11ec28ca-123d-11ef-8c90-4585f0cfab08", 00:16:17.826 "is_configured": true, 00:16:17.826 "data_offset": 2048, 00:16:17.826 "data_size": 63488 00:16:17.826 } 00:16:17.826 ] 00:16:17.826 }' 00:16:17.826 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.826 21:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.084 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:16:18.084 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:18.084 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:18.084 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:18.084 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:18.084 
21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:16:18.084 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:18.084 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:18.342 [2024-05-14 21:58:18.796999] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.342 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:18.342 "name": "Existed_Raid", 00:16:18.342 "aliases": [ 00:16:18.342 "0fccefc2-123d-11ef-8c90-4585f0cfab08" 00:16:18.342 ], 00:16:18.342 "product_name": "Raid Volume", 00:16:18.342 "block_size": 512, 00:16:18.342 "num_blocks": 63488, 00:16:18.342 "uuid": "0fccefc2-123d-11ef-8c90-4585f0cfab08", 00:16:18.342 "assigned_rate_limits": { 00:16:18.342 "rw_ios_per_sec": 0, 00:16:18.342 "rw_mbytes_per_sec": 0, 00:16:18.342 "r_mbytes_per_sec": 0, 00:16:18.342 "w_mbytes_per_sec": 0 00:16:18.342 }, 00:16:18.342 "claimed": false, 00:16:18.342 "zoned": false, 00:16:18.342 "supported_io_types": { 00:16:18.342 "read": true, 00:16:18.342 "write": true, 00:16:18.342 "unmap": false, 00:16:18.342 "write_zeroes": true, 00:16:18.342 "flush": false, 00:16:18.342 "reset": true, 00:16:18.342 "compare": false, 00:16:18.342 "compare_and_write": false, 00:16:18.342 "abort": false, 00:16:18.342 "nvme_admin": false, 00:16:18.342 "nvme_io": false 00:16:18.342 }, 00:16:18.342 "memory_domains": [ 00:16:18.342 { 00:16:18.342 "dma_device_id": "system", 00:16:18.342 "dma_device_type": 1 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.342 "dma_device_type": 2 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "dma_device_id": "system", 00:16:18.342 "dma_device_type": 1 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.342 "dma_device_type": 2 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "dma_device_id": "system", 00:16:18.342 "dma_device_type": 1 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.342 "dma_device_type": 2 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "dma_device_id": "system", 00:16:18.342 "dma_device_type": 1 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.342 "dma_device_type": 2 00:16:18.342 } 00:16:18.342 ], 00:16:18.342 "driver_specific": { 00:16:18.342 "raid": { 00:16:18.342 "uuid": "0fccefc2-123d-11ef-8c90-4585f0cfab08", 00:16:18.342 "strip_size_kb": 0, 00:16:18.342 "state": "online", 00:16:18.342 "raid_level": "raid1", 00:16:18.342 "superblock": true, 00:16:18.342 "num_base_bdevs": 4, 00:16:18.342 "num_base_bdevs_discovered": 4, 00:16:18.342 "num_base_bdevs_operational": 4, 00:16:18.342 "base_bdevs_list": [ 00:16:18.342 { 00:16:18.342 "name": "BaseBdev1", 00:16:18.342 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:18.342 "is_configured": true, 00:16:18.342 "data_offset": 2048, 00:16:18.342 "data_size": 63488 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "name": "BaseBdev2", 00:16:18.342 "uuid": "1050c625-123d-11ef-8c90-4585f0cfab08", 00:16:18.342 "is_configured": true, 00:16:18.342 "data_offset": 2048, 00:16:18.342 "data_size": 63488 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "name": "BaseBdev3", 00:16:18.342 "uuid": "111a317b-123d-11ef-8c90-4585f0cfab08", 00:16:18.342 "is_configured": true, 00:16:18.342 "data_offset": 
2048, 00:16:18.342 "data_size": 63488 00:16:18.342 }, 00:16:18.342 { 00:16:18.342 "name": "BaseBdev4", 00:16:18.342 "uuid": "11ec28ca-123d-11ef-8c90-4585f0cfab08", 00:16:18.342 "is_configured": true, 00:16:18.342 "data_offset": 2048, 00:16:18.342 "data_size": 63488 00:16:18.342 } 00:16:18.342 ] 00:16:18.342 } 00:16:18.342 } 00:16:18.342 }' 00:16:18.342 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.342 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:16:18.342 BaseBdev2 00:16:18.342 BaseBdev3 00:16:18.342 BaseBdev4' 00:16:18.342 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:18.342 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:18.342 21:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:18.600 "name": "BaseBdev1", 00:16:18.600 "aliases": [ 00:16:18.600 "0ed5bb60-123d-11ef-8c90-4585f0cfab08" 00:16:18.600 ], 00:16:18.600 "product_name": "Malloc disk", 00:16:18.600 "block_size": 512, 00:16:18.600 "num_blocks": 65536, 00:16:18.600 "uuid": "0ed5bb60-123d-11ef-8c90-4585f0cfab08", 00:16:18.600 "assigned_rate_limits": { 00:16:18.600 "rw_ios_per_sec": 0, 00:16:18.600 "rw_mbytes_per_sec": 0, 00:16:18.600 "r_mbytes_per_sec": 0, 00:16:18.600 "w_mbytes_per_sec": 0 00:16:18.600 }, 00:16:18.600 "claimed": true, 00:16:18.600 "claim_type": "exclusive_write", 00:16:18.600 "zoned": false, 00:16:18.600 "supported_io_types": { 00:16:18.600 "read": true, 00:16:18.600 "write": true, 00:16:18.600 "unmap": true, 00:16:18.600 "write_zeroes": true, 00:16:18.600 "flush": true, 00:16:18.600 "reset": true, 00:16:18.600 "compare": false, 00:16:18.600 "compare_and_write": false, 00:16:18.600 "abort": true, 00:16:18.600 "nvme_admin": false, 00:16:18.600 "nvme_io": false 00:16:18.600 }, 00:16:18.600 "memory_domains": [ 00:16:18.600 { 00:16:18.600 "dma_device_id": "system", 00:16:18.600 "dma_device_type": 1 00:16:18.600 }, 00:16:18.600 { 00:16:18.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.600 "dma_device_type": 2 00:16:18.600 } 00:16:18.600 ], 00:16:18.600 "driver_specific": {} 00:16:18.600 }' 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:18.600 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:19.166 "name": "BaseBdev2", 00:16:19.166 "aliases": [ 00:16:19.166 "1050c625-123d-11ef-8c90-4585f0cfab08" 00:16:19.166 ], 00:16:19.166 "product_name": "Malloc disk", 00:16:19.166 "block_size": 512, 00:16:19.166 "num_blocks": 65536, 00:16:19.166 "uuid": "1050c625-123d-11ef-8c90-4585f0cfab08", 00:16:19.166 "assigned_rate_limits": { 00:16:19.166 "rw_ios_per_sec": 0, 00:16:19.166 "rw_mbytes_per_sec": 0, 00:16:19.166 "r_mbytes_per_sec": 0, 00:16:19.166 "w_mbytes_per_sec": 0 00:16:19.166 }, 00:16:19.166 "claimed": true, 00:16:19.166 "claim_type": "exclusive_write", 00:16:19.166 "zoned": false, 00:16:19.166 "supported_io_types": { 00:16:19.166 "read": true, 00:16:19.166 "write": true, 00:16:19.166 "unmap": true, 00:16:19.166 "write_zeroes": true, 00:16:19.166 "flush": true, 00:16:19.166 "reset": true, 00:16:19.166 "compare": false, 00:16:19.166 "compare_and_write": false, 00:16:19.166 "abort": true, 00:16:19.166 "nvme_admin": false, 00:16:19.166 "nvme_io": false 00:16:19.166 }, 00:16:19.166 "memory_domains": [ 00:16:19.166 { 00:16:19.166 "dma_device_id": "system", 00:16:19.166 "dma_device_type": 1 00:16:19.166 }, 00:16:19.166 { 00:16:19.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.166 "dma_device_type": 2 00:16:19.166 } 00:16:19.166 ], 00:16:19.166 "driver_specific": {} 00:16:19.166 }' 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:19.166 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:19.425 "name": "BaseBdev3", 00:16:19.425 "aliases": [ 00:16:19.425 "111a317b-123d-11ef-8c90-4585f0cfab08" 00:16:19.425 ], 00:16:19.425 "product_name": "Malloc disk", 00:16:19.425 "block_size": 512, 00:16:19.425 "num_blocks": 65536, 00:16:19.425 "uuid": "111a317b-123d-11ef-8c90-4585f0cfab08", 00:16:19.425 "assigned_rate_limits": { 00:16:19.425 "rw_ios_per_sec": 0, 00:16:19.425 "rw_mbytes_per_sec": 0, 00:16:19.425 "r_mbytes_per_sec": 0, 00:16:19.425 "w_mbytes_per_sec": 0 00:16:19.425 }, 00:16:19.425 "claimed": true, 00:16:19.425 "claim_type": "exclusive_write", 00:16:19.425 "zoned": false, 00:16:19.425 "supported_io_types": { 00:16:19.425 "read": true, 00:16:19.425 "write": true, 00:16:19.425 "unmap": true, 00:16:19.425 "write_zeroes": true, 00:16:19.425 "flush": true, 00:16:19.425 "reset": true, 00:16:19.425 "compare": false, 00:16:19.425 "compare_and_write": false, 00:16:19.425 "abort": true, 00:16:19.425 "nvme_admin": false, 00:16:19.425 "nvme_io": false 00:16:19.425 }, 00:16:19.425 "memory_domains": [ 00:16:19.425 { 00:16:19.425 "dma_device_id": "system", 00:16:19.425 "dma_device_type": 1 00:16:19.425 }, 00:16:19.425 { 00:16:19.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.425 "dma_device_type": 2 00:16:19.425 } 00:16:19.425 ], 00:16:19.425 "driver_specific": {} 00:16:19.425 }' 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:19.425 21:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:19.683 "name": "BaseBdev4", 00:16:19.683 "aliases": [ 00:16:19.683 "11ec28ca-123d-11ef-8c90-4585f0cfab08" 
00:16:19.683 ], 00:16:19.683 "product_name": "Malloc disk", 00:16:19.683 "block_size": 512, 00:16:19.683 "num_blocks": 65536, 00:16:19.683 "uuid": "11ec28ca-123d-11ef-8c90-4585f0cfab08", 00:16:19.683 "assigned_rate_limits": { 00:16:19.683 "rw_ios_per_sec": 0, 00:16:19.683 "rw_mbytes_per_sec": 0, 00:16:19.683 "r_mbytes_per_sec": 0, 00:16:19.683 "w_mbytes_per_sec": 0 00:16:19.683 }, 00:16:19.683 "claimed": true, 00:16:19.683 "claim_type": "exclusive_write", 00:16:19.683 "zoned": false, 00:16:19.683 "supported_io_types": { 00:16:19.683 "read": true, 00:16:19.683 "write": true, 00:16:19.683 "unmap": true, 00:16:19.683 "write_zeroes": true, 00:16:19.683 "flush": true, 00:16:19.683 "reset": true, 00:16:19.683 "compare": false, 00:16:19.683 "compare_and_write": false, 00:16:19.683 "abort": true, 00:16:19.683 "nvme_admin": false, 00:16:19.683 "nvme_io": false 00:16:19.683 }, 00:16:19.683 "memory_domains": [ 00:16:19.683 { 00:16:19.683 "dma_device_id": "system", 00:16:19.683 "dma_device_type": 1 00:16:19.683 }, 00:16:19.683 { 00:16:19.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.683 "dma_device_type": 2 00:16:19.683 } 00:16:19.683 ], 00:16:19.683 "driver_specific": {} 00:16:19.683 }' 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:19.683 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:19.941 [2024-05-14 21:58:20.524984] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.199 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.457 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.457 "name": "Existed_Raid", 00:16:20.457 "uuid": "0fccefc2-123d-11ef-8c90-4585f0cfab08", 00:16:20.457 "strip_size_kb": 0, 00:16:20.457 "state": "online", 00:16:20.457 "raid_level": "raid1", 00:16:20.457 "superblock": true, 00:16:20.457 "num_base_bdevs": 4, 00:16:20.457 "num_base_bdevs_discovered": 3, 00:16:20.457 "num_base_bdevs_operational": 3, 00:16:20.457 "base_bdevs_list": [ 00:16:20.457 { 00:16:20.457 "name": null, 00:16:20.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.457 "is_configured": false, 00:16:20.457 "data_offset": 2048, 00:16:20.457 "data_size": 63488 00:16:20.457 }, 00:16:20.457 { 00:16:20.457 "name": "BaseBdev2", 00:16:20.457 "uuid": "1050c625-123d-11ef-8c90-4585f0cfab08", 00:16:20.457 "is_configured": true, 00:16:20.457 "data_offset": 2048, 00:16:20.457 "data_size": 63488 00:16:20.457 }, 00:16:20.457 { 00:16:20.457 "name": "BaseBdev3", 00:16:20.457 "uuid": "111a317b-123d-11ef-8c90-4585f0cfab08", 00:16:20.457 "is_configured": true, 00:16:20.457 "data_offset": 2048, 00:16:20.457 "data_size": 63488 00:16:20.457 }, 00:16:20.457 { 00:16:20.457 "name": "BaseBdev4", 00:16:20.457 "uuid": "11ec28ca-123d-11ef-8c90-4585f0cfab08", 00:16:20.457 "is_configured": true, 00:16:20.457 "data_offset": 2048, 00:16:20.457 "data_size": 63488 00:16:20.457 } 00:16:20.457 ] 00:16:20.457 }' 00:16:20.457 21:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.457 21:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.715 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:20.715 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.715 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.715 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:20.973 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:20.973 
21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.973 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:21.231 [2024-05-14 21:58:21.683827] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.231 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.231 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.231 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.231 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:21.489 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:21.489 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:21.489 21:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:21.747 [2024-05-14 21:58:22.162477] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.747 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.747 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.747 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.747 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:22.005 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:22.005 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.005 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:22.262 [2024-05-14 21:58:22.728666] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:22.262 [2024-05-14 21:58:22.728706] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.262 [2024-05-14 21:58:22.734644] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.262 [2024-05-14 21:58:22.734691] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.262 [2024-05-14 21:58:22.734697] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b238300 name Existed_Raid, state offline 00:16:22.262 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.262 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.262 21:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.262 21:58:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:16:22.520 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:16:22.520 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:16:22.520 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:16:22.520 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:16:22.520 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:22.520 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.778 BaseBdev2 00:16:22.778 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:16:22.778 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:22.778 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:22.778 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:22.778 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:22.778 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:22.778 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.036 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.294 [ 00:16:23.294 { 00:16:23.294 "name": "BaseBdev2", 00:16:23.294 "aliases": [ 00:16:23.294 "15603666-123d-11ef-8c90-4585f0cfab08" 00:16:23.294 ], 00:16:23.294 "product_name": "Malloc disk", 00:16:23.294 "block_size": 512, 00:16:23.294 "num_blocks": 65536, 00:16:23.294 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:23.294 "assigned_rate_limits": { 00:16:23.294 "rw_ios_per_sec": 0, 00:16:23.294 "rw_mbytes_per_sec": 0, 00:16:23.294 "r_mbytes_per_sec": 0, 00:16:23.294 "w_mbytes_per_sec": 0 00:16:23.294 }, 00:16:23.294 "claimed": false, 00:16:23.294 "zoned": false, 00:16:23.294 "supported_io_types": { 00:16:23.294 "read": true, 00:16:23.294 "write": true, 00:16:23.294 "unmap": true, 00:16:23.294 "write_zeroes": true, 00:16:23.294 "flush": true, 00:16:23.294 "reset": true, 00:16:23.294 "compare": false, 00:16:23.294 "compare_and_write": false, 00:16:23.294 "abort": true, 00:16:23.294 "nvme_admin": false, 00:16:23.294 "nvme_io": false 00:16:23.294 }, 00:16:23.294 "memory_domains": [ 00:16:23.294 { 00:16:23.294 "dma_device_id": "system", 00:16:23.294 "dma_device_type": 1 00:16:23.294 }, 00:16:23.294 { 00:16:23.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.294 "dma_device_type": 2 00:16:23.294 } 00:16:23.294 ], 00:16:23.294 "driver_specific": {} 00:16:23.294 } 00:16:23.294 ] 00:16:23.294 21:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:23.294 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:23.294 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 
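The loop traced above rebuilds the malloc base bdevs after the previous raid set was torn down: each pass creates a 32 MiB, 512-byte-block malloc bdev and then waits until it has been examined and is visible. A minimal sketch of that create-and-wait pattern, reusing only the rpc.py calls and the socket path that appear in this trace (the loop variable and the explicit name list are illustrative, not the script's own):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev2 BaseBdev3 BaseBdev4; do
        # same geometry as in the trace: 32 MiB malloc bdev, 512-byte blocks
        $rpc bdev_malloc_create 32 512 -b "$name"
        # let examine callbacks finish, then poll for the bdev with a 2000 ms timeout
        $rpc bdev_wait_for_examine
        $rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null
    done

Once the base bdevs exist again, the script assembles the raid with bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid, as the entries that follow show.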
00:16:23.294 21:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:23.552 BaseBdev3 00:16:23.552 21:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:16:23.552 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:23.552 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:23.552 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:23.552 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:23.552 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:23.552 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.810 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:24.069 [ 00:16:24.069 { 00:16:24.069 "name": "BaseBdev3", 00:16:24.069 "aliases": [ 00:16:24.069 "15d8752b-123d-11ef-8c90-4585f0cfab08" 00:16:24.069 ], 00:16:24.069 "product_name": "Malloc disk", 00:16:24.069 "block_size": 512, 00:16:24.069 "num_blocks": 65536, 00:16:24.069 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:24.069 "assigned_rate_limits": { 00:16:24.069 "rw_ios_per_sec": 0, 00:16:24.069 "rw_mbytes_per_sec": 0, 00:16:24.069 "r_mbytes_per_sec": 0, 00:16:24.069 "w_mbytes_per_sec": 0 00:16:24.069 }, 00:16:24.069 "claimed": false, 00:16:24.069 "zoned": false, 00:16:24.069 "supported_io_types": { 00:16:24.069 "read": true, 00:16:24.069 "write": true, 00:16:24.069 "unmap": true, 00:16:24.069 "write_zeroes": true, 00:16:24.069 "flush": true, 00:16:24.069 "reset": true, 00:16:24.069 "compare": false, 00:16:24.069 "compare_and_write": false, 00:16:24.069 "abort": true, 00:16:24.069 "nvme_admin": false, 00:16:24.069 "nvme_io": false 00:16:24.069 }, 00:16:24.069 "memory_domains": [ 00:16:24.069 { 00:16:24.069 "dma_device_id": "system", 00:16:24.069 "dma_device_type": 1 00:16:24.069 }, 00:16:24.069 { 00:16:24.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.069 "dma_device_type": 2 00:16:24.069 } 00:16:24.069 ], 00:16:24.069 "driver_specific": {} 00:16:24.069 } 00:16:24.069 ] 00:16:24.069 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:24.069 21:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:24.069 21:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:24.069 21:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:24.328 BaseBdev4 00:16:24.328 21:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:16:24.328 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:16:24.328 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:24.328 
21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:24.328 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:24.328 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:24.328 21:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.586 21:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:24.845 [ 00:16:24.845 { 00:16:24.845 "name": "BaseBdev4", 00:16:24.845 "aliases": [ 00:16:24.845 "1651ecbb-123d-11ef-8c90-4585f0cfab08" 00:16:24.845 ], 00:16:24.845 "product_name": "Malloc disk", 00:16:24.845 "block_size": 512, 00:16:24.845 "num_blocks": 65536, 00:16:24.845 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:24.845 "assigned_rate_limits": { 00:16:24.845 "rw_ios_per_sec": 0, 00:16:24.845 "rw_mbytes_per_sec": 0, 00:16:24.845 "r_mbytes_per_sec": 0, 00:16:24.845 "w_mbytes_per_sec": 0 00:16:24.845 }, 00:16:24.845 "claimed": false, 00:16:24.845 "zoned": false, 00:16:24.845 "supported_io_types": { 00:16:24.845 "read": true, 00:16:24.845 "write": true, 00:16:24.845 "unmap": true, 00:16:24.845 "write_zeroes": true, 00:16:24.845 "flush": true, 00:16:24.845 "reset": true, 00:16:24.845 "compare": false, 00:16:24.845 "compare_and_write": false, 00:16:24.845 "abort": true, 00:16:24.845 "nvme_admin": false, 00:16:24.845 "nvme_io": false 00:16:24.845 }, 00:16:24.845 "memory_domains": [ 00:16:24.845 { 00:16:24.845 "dma_device_id": "system", 00:16:24.845 "dma_device_type": 1 00:16:24.845 }, 00:16:24.845 { 00:16:24.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.845 "dma_device_type": 2 00:16:24.845 } 00:16:24.845 ], 00:16:24.845 "driver_specific": {} 00:16:24.845 } 00:16:24.845 ] 00:16:24.845 21:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:24.845 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:24.845 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:24.845 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:25.103 [2024-05-14 21:58:25.626765] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.103 [2024-05-14 21:58:25.626818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.103 [2024-05-14 21:58:25.626828] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.103 [2024-05-14 21:58:25.627416] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.103 [2024-05-14 21:58:25.627438] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.103 21:58:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.103 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.361 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.361 "name": "Existed_Raid", 00:16:25.361 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:25.361 "strip_size_kb": 0, 00:16:25.361 "state": "configuring", 00:16:25.361 "raid_level": "raid1", 00:16:25.361 "superblock": true, 00:16:25.361 "num_base_bdevs": 4, 00:16:25.361 "num_base_bdevs_discovered": 3, 00:16:25.362 "num_base_bdevs_operational": 4, 00:16:25.362 "base_bdevs_list": [ 00:16:25.362 { 00:16:25.362 "name": "BaseBdev1", 00:16:25.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.362 "is_configured": false, 00:16:25.362 "data_offset": 0, 00:16:25.362 "data_size": 0 00:16:25.362 }, 00:16:25.362 { 00:16:25.362 "name": "BaseBdev2", 00:16:25.362 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:25.362 "is_configured": true, 00:16:25.362 "data_offset": 2048, 00:16:25.362 "data_size": 63488 00:16:25.362 }, 00:16:25.362 { 00:16:25.362 "name": "BaseBdev3", 00:16:25.362 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:25.362 "is_configured": true, 00:16:25.362 "data_offset": 2048, 00:16:25.362 "data_size": 63488 00:16:25.362 }, 00:16:25.362 { 00:16:25.362 "name": "BaseBdev4", 00:16:25.362 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:25.362 "is_configured": true, 00:16:25.362 "data_offset": 2048, 00:16:25.362 "data_size": 63488 00:16:25.362 } 00:16:25.362 ] 00:16:25.362 }' 00:16:25.362 21:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.362 21:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.619 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:25.878 [2024-05-14 21:58:26.374775] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.878 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.136 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.136 "name": "Existed_Raid", 00:16:26.136 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:26.136 "strip_size_kb": 0, 00:16:26.136 "state": "configuring", 00:16:26.136 "raid_level": "raid1", 00:16:26.136 "superblock": true, 00:16:26.136 "num_base_bdevs": 4, 00:16:26.136 "num_base_bdevs_discovered": 2, 00:16:26.136 "num_base_bdevs_operational": 4, 00:16:26.136 "base_bdevs_list": [ 00:16:26.136 { 00:16:26.136 "name": "BaseBdev1", 00:16:26.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.136 "is_configured": false, 00:16:26.136 "data_offset": 0, 00:16:26.136 "data_size": 0 00:16:26.136 }, 00:16:26.136 { 00:16:26.136 "name": null, 00:16:26.136 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:26.136 "is_configured": false, 00:16:26.136 "data_offset": 2048, 00:16:26.136 "data_size": 63488 00:16:26.136 }, 00:16:26.136 { 00:16:26.136 "name": "BaseBdev3", 00:16:26.136 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:26.136 "is_configured": true, 00:16:26.136 "data_offset": 2048, 00:16:26.136 "data_size": 63488 00:16:26.136 }, 00:16:26.136 { 00:16:26.136 "name": "BaseBdev4", 00:16:26.136 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:26.136 "is_configured": true, 00:16:26.136 "data_offset": 2048, 00:16:26.136 "data_size": 63488 00:16:26.136 } 00:16:26.136 ] 00:16:26.136 }' 00:16:26.136 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.136 21:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.394 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.394 21:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.652 21:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:16:26.652 21:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.910 [2024-05-14 21:58:27.434938] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
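At this point Existed_Raid is back in the configuring state with an empty slot, so the freshly created BaseBdev1 is claimed as soon as it is examined, which is what the DEBUG line above records. The per-slot bookkeeping can be queried the same way the script does it; a minimal sketch using only the rpc.py invocation and jq filters already visible in this trace:

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # slot 0 of the first raid bdev should report configured again after the claim
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'
    # a slot emptied earlier can also be refilled explicitly while the raid is configuring,
    # mirroring the bdev_raid_add_base_bdev step that appears further down in the trace
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2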
00:16:26.910 BaseBdev1 00:16:26.910 21:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:16:26.910 21:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:26.910 21:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:26.910 21:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:26.910 21:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:26.910 21:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:26.910 21:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.168 21:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:27.426 [ 00:16:27.426 { 00:16:27.426 "name": "BaseBdev1", 00:16:27.426 "aliases": [ 00:16:27.426 "17db9c86-123d-11ef-8c90-4585f0cfab08" 00:16:27.426 ], 00:16:27.426 "product_name": "Malloc disk", 00:16:27.426 "block_size": 512, 00:16:27.426 "num_blocks": 65536, 00:16:27.426 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:27.426 "assigned_rate_limits": { 00:16:27.426 "rw_ios_per_sec": 0, 00:16:27.426 "rw_mbytes_per_sec": 0, 00:16:27.426 "r_mbytes_per_sec": 0, 00:16:27.426 "w_mbytes_per_sec": 0 00:16:27.426 }, 00:16:27.426 "claimed": true, 00:16:27.426 "claim_type": "exclusive_write", 00:16:27.426 "zoned": false, 00:16:27.426 "supported_io_types": { 00:16:27.426 "read": true, 00:16:27.426 "write": true, 00:16:27.426 "unmap": true, 00:16:27.426 "write_zeroes": true, 00:16:27.426 "flush": true, 00:16:27.426 "reset": true, 00:16:27.426 "compare": false, 00:16:27.426 "compare_and_write": false, 00:16:27.426 "abort": true, 00:16:27.426 "nvme_admin": false, 00:16:27.426 "nvme_io": false 00:16:27.426 }, 00:16:27.426 "memory_domains": [ 00:16:27.426 { 00:16:27.426 "dma_device_id": "system", 00:16:27.426 "dma_device_type": 1 00:16:27.426 }, 00:16:27.426 { 00:16:27.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.426 "dma_device_type": 2 00:16:27.426 } 00:16:27.426 ], 00:16:27.426 "driver_specific": {} 00:16:27.426 } 00:16:27.426 ] 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
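verify_raid_bdev_state, whose locals are being set up here, checks the raid against an expected state by pulling the bdev_raid_get_bdevs output and comparing a few fields. A minimal sketch of that style of check, using only the RPC call and jq filter shown in the trace (the explicit field comparisons illustrate the idea and are not a copy of the helper):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # expected at this step: state=configuring, level=raid1, 4 operational base bdevs
    [ "$(jq -r .state <<< "$info")" = configuring ] || echo "unexpected state" >&2
    [ "$(jq -r .raid_level <<< "$info")" = raid1 ] || echo "unexpected raid level" >&2
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 4 ] || echo "unexpected operational count" >&2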
00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.685 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.943 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.943 "name": "Existed_Raid", 00:16:27.943 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:27.943 "strip_size_kb": 0, 00:16:27.943 "state": "configuring", 00:16:27.943 "raid_level": "raid1", 00:16:27.944 "superblock": true, 00:16:27.944 "num_base_bdevs": 4, 00:16:27.944 "num_base_bdevs_discovered": 3, 00:16:27.944 "num_base_bdevs_operational": 4, 00:16:27.944 "base_bdevs_list": [ 00:16:27.944 { 00:16:27.944 "name": "BaseBdev1", 00:16:27.944 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:27.944 "is_configured": true, 00:16:27.944 "data_offset": 2048, 00:16:27.944 "data_size": 63488 00:16:27.944 }, 00:16:27.944 { 00:16:27.944 "name": null, 00:16:27.944 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:27.944 "is_configured": false, 00:16:27.944 "data_offset": 2048, 00:16:27.944 "data_size": 63488 00:16:27.944 }, 00:16:27.944 { 00:16:27.944 "name": "BaseBdev3", 00:16:27.944 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:27.944 "is_configured": true, 00:16:27.944 "data_offset": 2048, 00:16:27.944 "data_size": 63488 00:16:27.944 }, 00:16:27.944 { 00:16:27.944 "name": "BaseBdev4", 00:16:27.944 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:27.944 "is_configured": true, 00:16:27.944 "data_offset": 2048, 00:16:27.944 "data_size": 63488 00:16:27.944 } 00:16:27.944 ] 00:16:27.944 }' 00:16:27.944 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.944 21:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.202 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.202 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:28.461 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:28.461 21:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:28.719 [2024-05-14 21:58:29.147101] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.719 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.977 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.977 "name": "Existed_Raid", 00:16:28.977 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:28.977 "strip_size_kb": 0, 00:16:28.977 "state": "configuring", 00:16:28.977 "raid_level": "raid1", 00:16:28.977 "superblock": true, 00:16:28.977 "num_base_bdevs": 4, 00:16:28.977 "num_base_bdevs_discovered": 2, 00:16:28.977 "num_base_bdevs_operational": 4, 00:16:28.977 "base_bdevs_list": [ 00:16:28.977 { 00:16:28.977 "name": "BaseBdev1", 00:16:28.977 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:28.977 "is_configured": true, 00:16:28.977 "data_offset": 2048, 00:16:28.977 "data_size": 63488 00:16:28.977 }, 00:16:28.977 { 00:16:28.977 "name": null, 00:16:28.977 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:28.977 "is_configured": false, 00:16:28.977 "data_offset": 2048, 00:16:28.977 "data_size": 63488 00:16:28.977 }, 00:16:28.977 { 00:16:28.977 "name": null, 00:16:28.977 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:28.977 "is_configured": false, 00:16:28.977 "data_offset": 2048, 00:16:28.977 "data_size": 63488 00:16:28.977 }, 00:16:28.977 { 00:16:28.977 "name": "BaseBdev4", 00:16:28.977 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:28.977 "is_configured": true, 00:16:28.977 "data_offset": 2048, 00:16:28.977 "data_size": 63488 00:16:28.977 } 00:16:28.977 ] 00:16:28.977 }' 00:16:28.977 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.977 21:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.235 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.235 21:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:29.493 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:16:29.493 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:29.751 [2024-05-14 21:58:30.263231] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.751 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.008 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.008 "name": "Existed_Raid", 00:16:30.008 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:30.008 "strip_size_kb": 0, 00:16:30.008 "state": "configuring", 00:16:30.008 "raid_level": "raid1", 00:16:30.008 "superblock": true, 00:16:30.008 "num_base_bdevs": 4, 00:16:30.008 "num_base_bdevs_discovered": 3, 00:16:30.008 "num_base_bdevs_operational": 4, 00:16:30.008 "base_bdevs_list": [ 00:16:30.008 { 00:16:30.008 "name": "BaseBdev1", 00:16:30.008 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:30.008 "is_configured": true, 00:16:30.008 "data_offset": 2048, 00:16:30.008 "data_size": 63488 00:16:30.008 }, 00:16:30.008 { 00:16:30.008 "name": null, 00:16:30.008 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:30.008 "is_configured": false, 00:16:30.008 "data_offset": 2048, 00:16:30.008 "data_size": 63488 00:16:30.008 }, 00:16:30.008 { 00:16:30.008 "name": "BaseBdev3", 00:16:30.008 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:30.008 "is_configured": true, 00:16:30.008 "data_offset": 2048, 00:16:30.008 "data_size": 63488 00:16:30.008 }, 00:16:30.008 { 00:16:30.008 "name": "BaseBdev4", 00:16:30.008 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:30.008 "is_configured": true, 00:16:30.008 "data_offset": 2048, 00:16:30.008 "data_size": 63488 00:16:30.008 } 00:16:30.008 ] 00:16:30.008 }' 00:16:30.008 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.008 21:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.573 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.573 21:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.573 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:16:30.573 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:30.832 [2024-05-14 21:58:31.419329] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.091 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.091 "name": "Existed_Raid", 00:16:31.091 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:31.091 "strip_size_kb": 0, 00:16:31.091 "state": "configuring", 00:16:31.091 "raid_level": "raid1", 00:16:31.091 "superblock": true, 00:16:31.091 "num_base_bdevs": 4, 00:16:31.091 "num_base_bdevs_discovered": 2, 00:16:31.091 "num_base_bdevs_operational": 4, 00:16:31.092 "base_bdevs_list": [ 00:16:31.092 { 00:16:31.092 "name": null, 00:16:31.092 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:31.092 "is_configured": false, 00:16:31.092 "data_offset": 2048, 00:16:31.092 "data_size": 63488 00:16:31.092 }, 00:16:31.092 { 00:16:31.092 "name": null, 00:16:31.092 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:31.092 "is_configured": false, 00:16:31.092 "data_offset": 2048, 00:16:31.092 "data_size": 63488 00:16:31.092 }, 00:16:31.092 { 00:16:31.092 "name": "BaseBdev3", 00:16:31.092 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:31.092 "is_configured": true, 00:16:31.092 "data_offset": 2048, 00:16:31.092 "data_size": 63488 00:16:31.092 }, 00:16:31.092 { 00:16:31.092 "name": "BaseBdev4", 00:16:31.092 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:31.092 "is_configured": true, 00:16:31.092 "data_offset": 2048, 00:16:31.092 "data_size": 63488 00:16:31.092 } 00:16:31.092 ] 00:16:31.092 }' 00:16:31.092 21:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.092 21:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.658 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.658 21:58:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:31.916 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:16:31.916 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:32.174 [2024-05-14 21:58:32.577968] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.174 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.433 21:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.433 "name": "Existed_Raid", 00:16:32.433 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:32.433 "strip_size_kb": 0, 00:16:32.433 "state": "configuring", 00:16:32.433 "raid_level": "raid1", 00:16:32.433 "superblock": true, 00:16:32.433 "num_base_bdevs": 4, 00:16:32.433 "num_base_bdevs_discovered": 3, 00:16:32.433 "num_base_bdevs_operational": 4, 00:16:32.433 "base_bdevs_list": [ 00:16:32.433 { 00:16:32.433 "name": null, 00:16:32.433 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:32.433 "is_configured": false, 00:16:32.433 "data_offset": 2048, 00:16:32.433 "data_size": 63488 00:16:32.433 }, 00:16:32.433 { 00:16:32.433 "name": "BaseBdev2", 00:16:32.433 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:32.433 "is_configured": true, 00:16:32.433 "data_offset": 2048, 00:16:32.433 "data_size": 63488 00:16:32.433 }, 00:16:32.433 { 00:16:32.433 "name": "BaseBdev3", 00:16:32.433 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:32.433 "is_configured": true, 00:16:32.433 "data_offset": 2048, 00:16:32.433 "data_size": 63488 00:16:32.433 }, 00:16:32.433 { 00:16:32.433 "name": "BaseBdev4", 00:16:32.433 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:32.433 "is_configured": true, 00:16:32.433 "data_offset": 2048, 00:16:32.433 "data_size": 63488 00:16:32.433 } 00:16:32.433 ] 00:16:32.433 }' 00:16:32.433 21:58:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.433 21:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.691 21:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.691 21:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.953 21:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:16:32.953 21:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.953 21:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:33.212 21:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 17db9c86-123d-11ef-8c90-4585f0cfab08 00:16:33.469 [2024-05-14 21:58:34.030261] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:33.469 [2024-05-14 21:58:34.030343] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b238300 00:16:33.469 [2024-05-14 21:58:34.030349] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:33.469 [2024-05-14 21:58:34.030371] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b296e20 00:16:33.469 [2024-05-14 21:58:34.030423] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b238300 00:16:33.469 [2024-05-14 21:58:34.030428] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b238300 00:16:33.469 [2024-05-14 21:58:34.030451] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.469 NewBaseBdev 00:16:33.469 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:16:33.469 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:33.469 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:33.469 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:33.469 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:33.469 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:33.469 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:34.036 [ 00:16:34.036 { 00:16:34.036 "name": "NewBaseBdev", 00:16:34.036 "aliases": [ 00:16:34.036 "17db9c86-123d-11ef-8c90-4585f0cfab08" 00:16:34.036 ], 00:16:34.036 "product_name": "Malloc disk", 00:16:34.036 "block_size": 512, 00:16:34.036 "num_blocks": 65536, 00:16:34.036 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:34.036 "assigned_rate_limits": { 00:16:34.036 
"rw_ios_per_sec": 0, 00:16:34.036 "rw_mbytes_per_sec": 0, 00:16:34.036 "r_mbytes_per_sec": 0, 00:16:34.036 "w_mbytes_per_sec": 0 00:16:34.036 }, 00:16:34.036 "claimed": true, 00:16:34.036 "claim_type": "exclusive_write", 00:16:34.036 "zoned": false, 00:16:34.036 "supported_io_types": { 00:16:34.036 "read": true, 00:16:34.036 "write": true, 00:16:34.036 "unmap": true, 00:16:34.036 "write_zeroes": true, 00:16:34.036 "flush": true, 00:16:34.036 "reset": true, 00:16:34.036 "compare": false, 00:16:34.036 "compare_and_write": false, 00:16:34.036 "abort": true, 00:16:34.036 "nvme_admin": false, 00:16:34.036 "nvme_io": false 00:16:34.036 }, 00:16:34.036 "memory_domains": [ 00:16:34.036 { 00:16:34.036 "dma_device_id": "system", 00:16:34.036 "dma_device_type": 1 00:16:34.036 }, 00:16:34.036 { 00:16:34.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.036 "dma_device_type": 2 00:16:34.036 } 00:16:34.036 ], 00:16:34.036 "driver_specific": {} 00:16:34.036 } 00:16:34.036 ] 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.036 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.297 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.297 "name": "Existed_Raid", 00:16:34.297 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:34.297 "strip_size_kb": 0, 00:16:34.297 "state": "online", 00:16:34.297 "raid_level": "raid1", 00:16:34.297 "superblock": true, 00:16:34.297 "num_base_bdevs": 4, 00:16:34.297 "num_base_bdevs_discovered": 4, 00:16:34.297 "num_base_bdevs_operational": 4, 00:16:34.297 "base_bdevs_list": [ 00:16:34.297 { 00:16:34.297 "name": "NewBaseBdev", 00:16:34.297 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:34.297 "is_configured": true, 00:16:34.297 "data_offset": 2048, 00:16:34.297 "data_size": 63488 00:16:34.297 }, 00:16:34.297 { 00:16:34.297 "name": "BaseBdev2", 00:16:34.297 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:34.297 "is_configured": true, 00:16:34.297 "data_offset": 2048, 00:16:34.297 "data_size": 63488 00:16:34.297 }, 
00:16:34.297 { 00:16:34.297 "name": "BaseBdev3", 00:16:34.297 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:34.297 "is_configured": true, 00:16:34.297 "data_offset": 2048, 00:16:34.297 "data_size": 63488 00:16:34.297 }, 00:16:34.297 { 00:16:34.297 "name": "BaseBdev4", 00:16:34.297 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:34.297 "is_configured": true, 00:16:34.297 "data_offset": 2048, 00:16:34.297 "data_size": 63488 00:16:34.297 } 00:16:34.297 ] 00:16:34.297 }' 00:16:34.297 21:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.297 21:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:34.864 [2024-05-14 21:58:35.398321] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:34.864 "name": "Existed_Raid", 00:16:34.864 "aliases": [ 00:16:34.864 "16c7b949-123d-11ef-8c90-4585f0cfab08" 00:16:34.864 ], 00:16:34.864 "product_name": "Raid Volume", 00:16:34.864 "block_size": 512, 00:16:34.864 "num_blocks": 63488, 00:16:34.864 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:34.864 "assigned_rate_limits": { 00:16:34.864 "rw_ios_per_sec": 0, 00:16:34.864 "rw_mbytes_per_sec": 0, 00:16:34.864 "r_mbytes_per_sec": 0, 00:16:34.864 "w_mbytes_per_sec": 0 00:16:34.864 }, 00:16:34.864 "claimed": false, 00:16:34.864 "zoned": false, 00:16:34.864 "supported_io_types": { 00:16:34.864 "read": true, 00:16:34.864 "write": true, 00:16:34.864 "unmap": false, 00:16:34.864 "write_zeroes": true, 00:16:34.864 "flush": false, 00:16:34.864 "reset": true, 00:16:34.864 "compare": false, 00:16:34.864 "compare_and_write": false, 00:16:34.864 "abort": false, 00:16:34.864 "nvme_admin": false, 00:16:34.864 "nvme_io": false 00:16:34.864 }, 00:16:34.864 "memory_domains": [ 00:16:34.864 { 00:16:34.864 "dma_device_id": "system", 00:16:34.864 "dma_device_type": 1 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.864 "dma_device_type": 2 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "dma_device_id": "system", 00:16:34.864 "dma_device_type": 1 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.864 "dma_device_type": 2 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "dma_device_id": "system", 00:16:34.864 "dma_device_type": 1 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.864 "dma_device_type": 2 
00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "dma_device_id": "system", 00:16:34.864 "dma_device_type": 1 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.864 "dma_device_type": 2 00:16:34.864 } 00:16:34.864 ], 00:16:34.864 "driver_specific": { 00:16:34.864 "raid": { 00:16:34.864 "uuid": "16c7b949-123d-11ef-8c90-4585f0cfab08", 00:16:34.864 "strip_size_kb": 0, 00:16:34.864 "state": "online", 00:16:34.864 "raid_level": "raid1", 00:16:34.864 "superblock": true, 00:16:34.864 "num_base_bdevs": 4, 00:16:34.864 "num_base_bdevs_discovered": 4, 00:16:34.864 "num_base_bdevs_operational": 4, 00:16:34.864 "base_bdevs_list": [ 00:16:34.864 { 00:16:34.864 "name": "NewBaseBdev", 00:16:34.864 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:34.864 "is_configured": true, 00:16:34.864 "data_offset": 2048, 00:16:34.864 "data_size": 63488 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "name": "BaseBdev2", 00:16:34.864 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:34.864 "is_configured": true, 00:16:34.864 "data_offset": 2048, 00:16:34.864 "data_size": 63488 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "name": "BaseBdev3", 00:16:34.864 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:34.864 "is_configured": true, 00:16:34.864 "data_offset": 2048, 00:16:34.864 "data_size": 63488 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "name": "BaseBdev4", 00:16:34.864 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:34.864 "is_configured": true, 00:16:34.864 "data_offset": 2048, 00:16:34.864 "data_size": 63488 00:16:34.864 } 00:16:34.864 ] 00:16:34.864 } 00:16:34.864 } 00:16:34.864 }' 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:16:34.864 BaseBdev2 00:16:34.864 BaseBdev3 00:16:34.864 BaseBdev4' 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:34.864 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:35.431 "name": "NewBaseBdev", 00:16:35.431 "aliases": [ 00:16:35.431 "17db9c86-123d-11ef-8c90-4585f0cfab08" 00:16:35.431 ], 00:16:35.431 "product_name": "Malloc disk", 00:16:35.431 "block_size": 512, 00:16:35.431 "num_blocks": 65536, 00:16:35.431 "uuid": "17db9c86-123d-11ef-8c90-4585f0cfab08", 00:16:35.431 "assigned_rate_limits": { 00:16:35.431 "rw_ios_per_sec": 0, 00:16:35.431 "rw_mbytes_per_sec": 0, 00:16:35.431 "r_mbytes_per_sec": 0, 00:16:35.431 "w_mbytes_per_sec": 0 00:16:35.431 }, 00:16:35.431 "claimed": true, 00:16:35.431 "claim_type": "exclusive_write", 00:16:35.431 "zoned": false, 00:16:35.431 "supported_io_types": { 00:16:35.431 "read": true, 00:16:35.431 "write": true, 00:16:35.431 "unmap": true, 00:16:35.431 "write_zeroes": true, 00:16:35.431 "flush": true, 00:16:35.431 "reset": true, 00:16:35.431 "compare": false, 00:16:35.431 "compare_and_write": false, 00:16:35.431 "abort": true, 00:16:35.431 "nvme_admin": false, 00:16:35.431 "nvme_io": false 00:16:35.431 }, 00:16:35.431 "memory_domains": [ 
00:16:35.431 { 00:16:35.431 "dma_device_id": "system", 00:16:35.431 "dma_device_type": 1 00:16:35.431 }, 00:16:35.431 { 00:16:35.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.431 "dma_device_type": 2 00:16:35.431 } 00:16:35.431 ], 00:16:35.431 "driver_specific": {} 00:16:35.431 }' 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:35.431 21:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:35.690 "name": "BaseBdev2", 00:16:35.690 "aliases": [ 00:16:35.690 "15603666-123d-11ef-8c90-4585f0cfab08" 00:16:35.690 ], 00:16:35.690 "product_name": "Malloc disk", 00:16:35.690 "block_size": 512, 00:16:35.690 "num_blocks": 65536, 00:16:35.690 "uuid": "15603666-123d-11ef-8c90-4585f0cfab08", 00:16:35.690 "assigned_rate_limits": { 00:16:35.690 "rw_ios_per_sec": 0, 00:16:35.690 "rw_mbytes_per_sec": 0, 00:16:35.690 "r_mbytes_per_sec": 0, 00:16:35.690 "w_mbytes_per_sec": 0 00:16:35.690 }, 00:16:35.690 "claimed": true, 00:16:35.690 "claim_type": "exclusive_write", 00:16:35.690 "zoned": false, 00:16:35.690 "supported_io_types": { 00:16:35.690 "read": true, 00:16:35.690 "write": true, 00:16:35.690 "unmap": true, 00:16:35.690 "write_zeroes": true, 00:16:35.690 "flush": true, 00:16:35.690 "reset": true, 00:16:35.690 "compare": false, 00:16:35.690 "compare_and_write": false, 00:16:35.690 "abort": true, 00:16:35.690 "nvme_admin": false, 00:16:35.690 "nvme_io": false 00:16:35.690 }, 00:16:35.690 "memory_domains": [ 00:16:35.690 { 00:16:35.690 "dma_device_id": "system", 00:16:35.690 "dma_device_type": 1 00:16:35.690 }, 00:16:35.690 { 00:16:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.690 "dma_device_type": 2 00:16:35.690 } 00:16:35.690 ], 00:16:35.690 "driver_specific": {} 00:16:35.690 }' 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.690 21:58:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:35.690 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:35.949 "name": "BaseBdev3", 00:16:35.949 "aliases": [ 00:16:35.949 "15d8752b-123d-11ef-8c90-4585f0cfab08" 00:16:35.949 ], 00:16:35.949 "product_name": "Malloc disk", 00:16:35.949 "block_size": 512, 00:16:35.949 "num_blocks": 65536, 00:16:35.949 "uuid": "15d8752b-123d-11ef-8c90-4585f0cfab08", 00:16:35.949 "assigned_rate_limits": { 00:16:35.949 "rw_ios_per_sec": 0, 00:16:35.949 "rw_mbytes_per_sec": 0, 00:16:35.949 "r_mbytes_per_sec": 0, 00:16:35.949 "w_mbytes_per_sec": 0 00:16:35.949 }, 00:16:35.949 "claimed": true, 00:16:35.949 "claim_type": "exclusive_write", 00:16:35.949 "zoned": false, 00:16:35.949 "supported_io_types": { 00:16:35.949 "read": true, 00:16:35.949 "write": true, 00:16:35.949 "unmap": true, 00:16:35.949 "write_zeroes": true, 00:16:35.949 "flush": true, 00:16:35.949 "reset": true, 00:16:35.949 "compare": false, 00:16:35.949 "compare_and_write": false, 00:16:35.949 "abort": true, 00:16:35.949 "nvme_admin": false, 00:16:35.949 "nvme_io": false 00:16:35.949 }, 00:16:35.949 "memory_domains": [ 00:16:35.949 { 00:16:35.949 "dma_device_id": "system", 00:16:35.949 "dma_device_type": 1 00:16:35.949 }, 00:16:35.949 { 00:16:35.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.949 "dma_device_type": 2 00:16:35.949 } 00:16:35.949 ], 00:16:35.949 "driver_specific": {} 00:16:35.949 }' 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:35.949 21:58:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:35.949 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:36.515 "name": "BaseBdev4", 00:16:36.515 "aliases": [ 00:16:36.515 "1651ecbb-123d-11ef-8c90-4585f0cfab08" 00:16:36.515 ], 00:16:36.515 "product_name": "Malloc disk", 00:16:36.515 "block_size": 512, 00:16:36.515 "num_blocks": 65536, 00:16:36.515 "uuid": "1651ecbb-123d-11ef-8c90-4585f0cfab08", 00:16:36.515 "assigned_rate_limits": { 00:16:36.515 "rw_ios_per_sec": 0, 00:16:36.515 "rw_mbytes_per_sec": 0, 00:16:36.515 "r_mbytes_per_sec": 0, 00:16:36.515 "w_mbytes_per_sec": 0 00:16:36.515 }, 00:16:36.515 "claimed": true, 00:16:36.515 "claim_type": "exclusive_write", 00:16:36.515 "zoned": false, 00:16:36.515 "supported_io_types": { 00:16:36.515 "read": true, 00:16:36.515 "write": true, 00:16:36.515 "unmap": true, 00:16:36.515 "write_zeroes": true, 00:16:36.515 "flush": true, 00:16:36.515 "reset": true, 00:16:36.515 "compare": false, 00:16:36.515 "compare_and_write": false, 00:16:36.515 "abort": true, 00:16:36.515 "nvme_admin": false, 00:16:36.515 "nvme_io": false 00:16:36.515 }, 00:16:36.515 "memory_domains": [ 00:16:36.515 { 00:16:36.515 "dma_device_id": "system", 00:16:36.515 "dma_device_type": 1 00:16:36.515 }, 00:16:36.515 { 00:16:36.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.515 "dma_device_type": 2 00:16:36.515 } 00:16:36.515 ], 00:16:36.515 "driver_specific": {} 00:16:36.515 }' 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.515 21:58:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:36.515 21:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:36.777 [2024-05-14 21:58:37.146430] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.777 [2024-05-14 21:58:37.146458] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.777 [2024-05-14 21:58:37.146482] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.777 [2024-05-14 21:58:37.146552] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.777 [2024-05-14 21:58:37.146557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b238300 name Existed_Raid, state offline 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 62071 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 62071 ']' 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 62071 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 62071 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:16:36.777 killing process with pid 62071 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62071' 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 62071 00:16:36.777 [2024-05-14 21:58:37.174778] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.777 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 62071 00:16:36.777 [2024-05-14 21:58:37.198794] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.042 21:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:16:37.042 00:16:37.042 real 0m27.857s 00:16:37.042 user 0m51.029s 00:16:37.042 sys 0m3.822s 00:16:37.042 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:37.042 ************************************ 00:16:37.042 END TEST raid_state_function_test_sb 00:16:37.042 ************************************ 00:16:37.042 21:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.042 21:58:37 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:37.042 21:58:37 bdev_raid -- common/autotest_common.sh@1097 -- # 
'[' 4 -le 1 ']' 00:16:37.042 21:58:37 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:37.042 21:58:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.042 ************************************ 00:16:37.042 START TEST raid_superblock_test 00:16:37.042 ************************************ 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62889 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62889 /var/tmp/spdk-raid.sock 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 62889 ']' 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:37.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:37.042 21:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.042 [2024-05-14 21:58:37.447715] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:16:37.042 [2024-05-14 21:58:37.447897] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:37.609 EAL: TSC is not safe to use in SMP mode 00:16:37.609 EAL: TSC is not invariant 00:16:37.609 [2024-05-14 21:58:38.026430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.609 [2024-05-14 21:58:38.121588] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:37.609 [2024-05-14 21:58:38.123941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.609 [2024-05-14 21:58:38.124748] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.609 [2024-05-14 21:58:38.124765] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.175 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:38.432 malloc1 00:16:38.432 21:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.690 [2024-05-14 21:58:39.126224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.690 [2024-05-14 21:58:39.126300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.690 [2024-05-14 21:58:39.126916] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1780 00:16:38.690 [2024-05-14 21:58:39.126952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.690 [2024-05-14 21:58:39.127830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.690 [2024-05-14 21:58:39.127862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.690 pt1 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.690 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:38.948 malloc2 00:16:38.948 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:39.206 [2024-05-14 21:58:39.710212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:39.206 [2024-05-14 21:58:39.710279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.206 [2024-05-14 21:58:39.710312] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1c80 00:16:39.206 [2024-05-14 21:58:39.710322] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.206 [2024-05-14 21:58:39.710982] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.206 [2024-05-14 21:58:39.711013] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:39.206 pt2 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.206 21:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:39.465 malloc3 00:16:39.465 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:39.723 [2024-05-14 21:58:40.246223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:39.723 [2024-05-14 21:58:40.246285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.723 [2024-05-14 21:58:40.246316] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2180 00:16:39.723 [2024-05-14 21:58:40.246326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.723 [2024-05-14 21:58:40.246986] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.723 [2024-05-14 21:58:40.247017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:39.723 pt3 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.723 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:39.984 malloc4 00:16:39.984 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:40.241 [2024-05-14 21:58:40.774264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:40.241 [2024-05-14 21:58:40.774329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.241 [2024-05-14 21:58:40.774359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2680 00:16:40.241 [2024-05-14 21:58:40.774369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.241 [2024-05-14 21:58:40.775022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.241 [2024-05-14 21:58:40.775054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:40.241 pt4 00:16:40.241 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:40.241 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:40.241 21:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:40.498 [2024-05-14 21:58:41.066369] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:40.498 [2024-05-14 21:58:41.067261] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.498 [2024-05-14 21:58:41.067290] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:40.498 [2024-05-14 21:58:41.067308] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:40.498 [2024-05-14 21:58:41.067392] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d9f6300 00:16:40.498 [2024-05-14 21:58:41.067400] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:40.498 [2024-05-14 21:58:41.067458] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82da54e20 00:16:40.498 [2024-05-14 21:58:41.067590] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d9f6300 00:16:40.498 [2024-05-14 21:58:41.067596] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d9f6300 00:16:40.498 [2024-05-14 21:58:41.067633] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.756 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.074 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.074 "name": "raid_bdev1", 00:16:41.074 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:41.074 "strip_size_kb": 0, 00:16:41.074 "state": "online", 00:16:41.074 "raid_level": "raid1", 00:16:41.074 "superblock": true, 00:16:41.074 "num_base_bdevs": 4, 00:16:41.074 "num_base_bdevs_discovered": 4, 00:16:41.074 "num_base_bdevs_operational": 4, 00:16:41.074 "base_bdevs_list": [ 00:16:41.074 { 00:16:41.074 "name": "pt1", 00:16:41.074 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:41.074 "is_configured": true, 00:16:41.074 "data_offset": 2048, 00:16:41.074 "data_size": 63488 00:16:41.074 }, 00:16:41.074 { 00:16:41.074 "name": "pt2", 00:16:41.074 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:41.074 "is_configured": true, 00:16:41.074 "data_offset": 2048, 00:16:41.074 "data_size": 63488 00:16:41.074 }, 00:16:41.074 { 00:16:41.074 "name": "pt3", 00:16:41.074 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:41.074 "is_configured": true, 00:16:41.074 "data_offset": 2048, 00:16:41.074 "data_size": 63488 00:16:41.074 }, 00:16:41.074 { 00:16:41.074 "name": "pt4", 00:16:41.074 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:41.074 "is_configured": true, 00:16:41.074 "data_offset": 2048, 00:16:41.074 "data_size": 63488 00:16:41.074 } 00:16:41.074 ] 00:16:41.074 }' 00:16:41.074 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.074 21:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:41.332 21:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:41.590 [2024-05-14 21:58:41.990395] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.590 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:41.590 "name": "raid_bdev1", 00:16:41.590 "aliases": [ 00:16:41.590 "1ffb9e41-123d-11ef-8c90-4585f0cfab08" 00:16:41.590 ], 00:16:41.590 "product_name": "Raid Volume", 00:16:41.590 "block_size": 512, 00:16:41.590 "num_blocks": 63488, 00:16:41.590 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:41.590 "assigned_rate_limits": { 00:16:41.590 "rw_ios_per_sec": 0, 00:16:41.590 "rw_mbytes_per_sec": 0, 00:16:41.590 "r_mbytes_per_sec": 0, 00:16:41.590 "w_mbytes_per_sec": 0 00:16:41.590 }, 00:16:41.590 "claimed": false, 00:16:41.590 "zoned": false, 00:16:41.590 "supported_io_types": { 00:16:41.590 "read": true, 00:16:41.590 "write": true, 00:16:41.590 "unmap": false, 00:16:41.590 "write_zeroes": true, 00:16:41.590 "flush": false, 00:16:41.590 "reset": true, 00:16:41.590 "compare": false, 00:16:41.590 "compare_and_write": false, 00:16:41.590 "abort": false, 00:16:41.590 "nvme_admin": false, 00:16:41.590 "nvme_io": false 00:16:41.590 }, 00:16:41.590 "memory_domains": [ 00:16:41.590 { 00:16:41.590 "dma_device_id": "system", 00:16:41.590 "dma_device_type": 1 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.590 "dma_device_type": 2 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "dma_device_id": "system", 00:16:41.590 "dma_device_type": 1 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.590 "dma_device_type": 2 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "dma_device_id": "system", 00:16:41.590 "dma_device_type": 1 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.590 "dma_device_type": 2 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "dma_device_id": "system", 00:16:41.590 "dma_device_type": 1 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.590 "dma_device_type": 2 00:16:41.590 } 00:16:41.590 ], 00:16:41.590 "driver_specific": { 00:16:41.590 "raid": { 00:16:41.590 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:41.590 "strip_size_kb": 0, 00:16:41.590 "state": "online", 00:16:41.590 "raid_level": "raid1", 00:16:41.590 "superblock": true, 00:16:41.590 "num_base_bdevs": 4, 00:16:41.590 "num_base_bdevs_discovered": 4, 00:16:41.590 "num_base_bdevs_operational": 4, 00:16:41.590 "base_bdevs_list": [ 00:16:41.590 { 00:16:41.590 "name": "pt1", 00:16:41.590 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:41.590 "is_configured": true, 00:16:41.590 "data_offset": 2048, 00:16:41.590 "data_size": 63488 
00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "name": "pt2", 00:16:41.590 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:41.590 "is_configured": true, 00:16:41.590 "data_offset": 2048, 00:16:41.590 "data_size": 63488 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "name": "pt3", 00:16:41.590 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:41.590 "is_configured": true, 00:16:41.590 "data_offset": 2048, 00:16:41.590 "data_size": 63488 00:16:41.590 }, 00:16:41.590 { 00:16:41.590 "name": "pt4", 00:16:41.590 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:41.590 "is_configured": true, 00:16:41.590 "data_offset": 2048, 00:16:41.590 "data_size": 63488 00:16:41.590 } 00:16:41.590 ] 00:16:41.590 } 00:16:41.590 } 00:16:41.590 }' 00:16:41.590 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.590 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:41.590 pt2 00:16:41.590 pt3 00:16:41.590 pt4' 00:16:41.590 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:41.590 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:41.590 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:41.849 "name": "pt1", 00:16:41.849 "aliases": [ 00:16:41.849 "1cace419-462a-c452-a112-b6c66d85fb84" 00:16:41.849 ], 00:16:41.849 "product_name": "passthru", 00:16:41.849 "block_size": 512, 00:16:41.849 "num_blocks": 65536, 00:16:41.849 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:41.849 "assigned_rate_limits": { 00:16:41.849 "rw_ios_per_sec": 0, 00:16:41.849 "rw_mbytes_per_sec": 0, 00:16:41.849 "r_mbytes_per_sec": 0, 00:16:41.849 "w_mbytes_per_sec": 0 00:16:41.849 }, 00:16:41.849 "claimed": true, 00:16:41.849 "claim_type": "exclusive_write", 00:16:41.849 "zoned": false, 00:16:41.849 "supported_io_types": { 00:16:41.849 "read": true, 00:16:41.849 "write": true, 00:16:41.849 "unmap": true, 00:16:41.849 "write_zeroes": true, 00:16:41.849 "flush": true, 00:16:41.849 "reset": true, 00:16:41.849 "compare": false, 00:16:41.849 "compare_and_write": false, 00:16:41.849 "abort": true, 00:16:41.849 "nvme_admin": false, 00:16:41.849 "nvme_io": false 00:16:41.849 }, 00:16:41.849 "memory_domains": [ 00:16:41.849 { 00:16:41.849 "dma_device_id": "system", 00:16:41.849 "dma_device_type": 1 00:16:41.849 }, 00:16:41.849 { 00:16:41.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.849 "dma_device_type": 2 00:16:41.849 } 00:16:41.849 ], 00:16:41.849 "driver_specific": { 00:16:41.849 "passthru": { 00:16:41.849 "name": "pt1", 00:16:41.849 "base_bdev_name": "malloc1" 00:16:41.849 } 00:16:41.849 } 00:16:41.849 }' 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:41.849 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:42.106 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:42.106 "name": "pt2", 00:16:42.106 "aliases": [ 00:16:42.106 "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5" 00:16:42.106 ], 00:16:42.106 "product_name": "passthru", 00:16:42.106 "block_size": 512, 00:16:42.106 "num_blocks": 65536, 00:16:42.106 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:42.106 "assigned_rate_limits": { 00:16:42.106 "rw_ios_per_sec": 0, 00:16:42.106 "rw_mbytes_per_sec": 0, 00:16:42.106 "r_mbytes_per_sec": 0, 00:16:42.106 "w_mbytes_per_sec": 0 00:16:42.106 }, 00:16:42.106 "claimed": true, 00:16:42.106 "claim_type": "exclusive_write", 00:16:42.106 "zoned": false, 00:16:42.106 "supported_io_types": { 00:16:42.106 "read": true, 00:16:42.106 "write": true, 00:16:42.106 "unmap": true, 00:16:42.106 "write_zeroes": true, 00:16:42.106 "flush": true, 00:16:42.106 "reset": true, 00:16:42.106 "compare": false, 00:16:42.106 "compare_and_write": false, 00:16:42.106 "abort": true, 00:16:42.106 "nvme_admin": false, 00:16:42.106 "nvme_io": false 00:16:42.106 }, 00:16:42.106 "memory_domains": [ 00:16:42.106 { 00:16:42.106 "dma_device_id": "system", 00:16:42.106 "dma_device_type": 1 00:16:42.106 }, 00:16:42.106 { 00:16:42.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.106 "dma_device_type": 2 00:16:42.106 } 00:16:42.106 ], 00:16:42.106 "driver_specific": { 00:16:42.106 "passthru": { 00:16:42.106 "name": "pt2", 00:16:42.106 "base_bdev_name": "malloc2" 00:16:42.106 } 00:16:42.107 } 00:16:42.107 }' 00:16:42.107 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.107 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.107 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:42.107 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.107 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.364 21:58:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:42.364 21:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:42.621 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:42.621 "name": "pt3", 00:16:42.621 "aliases": [ 00:16:42.621 "64c78825-21d2-605d-8952-556b899942b3" 00:16:42.621 ], 00:16:42.621 "product_name": "passthru", 00:16:42.621 "block_size": 512, 00:16:42.621 "num_blocks": 65536, 00:16:42.621 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:42.621 "assigned_rate_limits": { 00:16:42.621 "rw_ios_per_sec": 0, 00:16:42.621 "rw_mbytes_per_sec": 0, 00:16:42.621 "r_mbytes_per_sec": 0, 00:16:42.621 "w_mbytes_per_sec": 0 00:16:42.621 }, 00:16:42.621 "claimed": true, 00:16:42.621 "claim_type": "exclusive_write", 00:16:42.621 "zoned": false, 00:16:42.621 "supported_io_types": { 00:16:42.621 "read": true, 00:16:42.621 "write": true, 00:16:42.621 "unmap": true, 00:16:42.622 "write_zeroes": true, 00:16:42.622 "flush": true, 00:16:42.622 "reset": true, 00:16:42.622 "compare": false, 00:16:42.622 "compare_and_write": false, 00:16:42.622 "abort": true, 00:16:42.622 "nvme_admin": false, 00:16:42.622 "nvme_io": false 00:16:42.622 }, 00:16:42.622 "memory_domains": [ 00:16:42.622 { 00:16:42.622 "dma_device_id": "system", 00:16:42.622 "dma_device_type": 1 00:16:42.622 }, 00:16:42.622 { 00:16:42.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.622 "dma_device_type": 2 00:16:42.622 } 00:16:42.622 ], 00:16:42.622 "driver_specific": { 00:16:42.622 "passthru": { 00:16:42.622 "name": "pt3", 00:16:42.622 "base_bdev_name": "malloc3" 00:16:42.622 } 00:16:42.622 } 00:16:42.622 }' 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b pt4 00:16:42.622 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:42.880 "name": "pt4", 00:16:42.880 "aliases": [ 00:16:42.880 "9ee97e75-5b12-db5c-ab03-6542d76b81c2" 00:16:42.880 ], 00:16:42.880 "product_name": "passthru", 00:16:42.880 "block_size": 512, 00:16:42.880 "num_blocks": 65536, 00:16:42.880 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:42.880 "assigned_rate_limits": { 00:16:42.880 "rw_ios_per_sec": 0, 00:16:42.880 "rw_mbytes_per_sec": 0, 00:16:42.880 "r_mbytes_per_sec": 0, 00:16:42.880 "w_mbytes_per_sec": 0 00:16:42.880 }, 00:16:42.880 "claimed": true, 00:16:42.880 "claim_type": "exclusive_write", 00:16:42.880 "zoned": false, 00:16:42.880 "supported_io_types": { 00:16:42.880 "read": true, 00:16:42.880 "write": true, 00:16:42.880 "unmap": true, 00:16:42.880 "write_zeroes": true, 00:16:42.880 "flush": true, 00:16:42.880 "reset": true, 00:16:42.880 "compare": false, 00:16:42.880 "compare_and_write": false, 00:16:42.880 "abort": true, 00:16:42.880 "nvme_admin": false, 00:16:42.880 "nvme_io": false 00:16:42.880 }, 00:16:42.880 "memory_domains": [ 00:16:42.880 { 00:16:42.880 "dma_device_id": "system", 00:16:42.880 "dma_device_type": 1 00:16:42.880 }, 00:16:42.880 { 00:16:42.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.880 "dma_device_type": 2 00:16:42.880 } 00:16:42.880 ], 00:16:42.880 "driver_specific": { 00:16:42.880 "passthru": { 00:16:42.880 "name": "pt4", 00:16:42.880 "base_bdev_name": "malloc4" 00:16:42.880 } 00:16:42.880 } 00:16:42.880 }' 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:42.880 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:43.138 [2024-05-14 21:58:43.694603] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.138 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1ffb9e41-123d-11ef-8c90-4585f0cfab08 00:16:43.138 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1ffb9e41-123d-11ef-8c90-4585f0cfab08 ']' 00:16:43.138 21:58:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:43.396 [2024-05-14 21:58:43.978599] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.396 [2024-05-14 21:58:43.978626] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.396 [2024-05-14 21:58:43.978651] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.396 [2024-05-14 21:58:43.978671] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.396 [2024-05-14 21:58:43.978676] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d9f6300 name raid_bdev1, state offline 00:16:43.653 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:43.653 21:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.911 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:43.911 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:43.911 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:43.912 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:44.170 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:44.170 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:44.428 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:44.428 21:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:44.685 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:44.685 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:44.943 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:44.943 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:45.201 21:58:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.201 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.202 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.202 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.202 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:45.202 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:45.460 [2024-05-14 21:58:45.862811] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:45.460 [2024-05-14 21:58:45.863439] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:45.460 [2024-05-14 21:58:45.863453] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:45.460 [2024-05-14 21:58:45.863462] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:45.460 [2024-05-14 21:58:45.863478] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:45.460 [2024-05-14 21:58:45.863522] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:45.460 [2024-05-14 21:58:45.863536] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:45.460 [2024-05-14 21:58:45.863547] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:45.460 [2024-05-14 21:58:45.863557] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.460 [2024-05-14 21:58:45.863562] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d9f6300 name raid_bdev1, state configuring 00:16:45.460 request: 00:16:45.461 { 00:16:45.461 "name": "raid_bdev1", 00:16:45.461 "raid_level": "raid1", 00:16:45.461 "base_bdevs": [ 00:16:45.461 "malloc1", 00:16:45.461 "malloc2", 00:16:45.461 "malloc3", 00:16:45.461 "malloc4" 00:16:45.461 ], 00:16:45.461 "superblock": false, 00:16:45.461 "method": "bdev_raid_create", 00:16:45.461 "req_id": 1 00:16:45.461 } 00:16:45.461 Got JSON-RPC error response 00:16:45.461 response: 00:16:45.461 { 00:16:45.461 "code": -17, 00:16:45.461 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:45.461 } 00:16:45.461 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:45.461 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.461 21:58:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.461 21:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.461 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.461 21:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:45.718 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:45.718 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:45.718 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:45.975 [2024-05-14 21:58:46.402850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:45.975 [2024-05-14 21:58:46.402966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.975 [2024-05-14 21:58:46.403036] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2680 00:16:45.975 [2024-05-14 21:58:46.403062] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.975 [2024-05-14 21:58:46.403710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.975 [2024-05-14 21:58:46.403743] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:45.975 [2024-05-14 21:58:46.403772] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:45.975 [2024-05-14 21:58:46.403786] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:45.975 pt1 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.975 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.234 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.234 "name": "raid_bdev1", 00:16:46.234 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:46.234 "strip_size_kb": 0, 00:16:46.234 "state": 
"configuring", 00:16:46.234 "raid_level": "raid1", 00:16:46.234 "superblock": true, 00:16:46.234 "num_base_bdevs": 4, 00:16:46.234 "num_base_bdevs_discovered": 1, 00:16:46.234 "num_base_bdevs_operational": 4, 00:16:46.234 "base_bdevs_list": [ 00:16:46.234 { 00:16:46.234 "name": "pt1", 00:16:46.234 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:46.234 "is_configured": true, 00:16:46.234 "data_offset": 2048, 00:16:46.234 "data_size": 63488 00:16:46.234 }, 00:16:46.234 { 00:16:46.234 "name": null, 00:16:46.234 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:46.234 "is_configured": false, 00:16:46.234 "data_offset": 2048, 00:16:46.234 "data_size": 63488 00:16:46.234 }, 00:16:46.234 { 00:16:46.234 "name": null, 00:16:46.234 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:46.234 "is_configured": false, 00:16:46.234 "data_offset": 2048, 00:16:46.234 "data_size": 63488 00:16:46.234 }, 00:16:46.234 { 00:16:46.234 "name": null, 00:16:46.234 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:46.234 "is_configured": false, 00:16:46.234 "data_offset": 2048, 00:16:46.234 "data_size": 63488 00:16:46.234 } 00:16:46.234 ] 00:16:46.234 }' 00:16:46.234 21:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.234 21:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.492 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:46.492 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:46.750 [2024-05-14 21:58:47.310912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:46.750 [2024-05-14 21:58:47.310977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.750 [2024-05-14 21:58:47.311008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1c80 00:16:46.750 [2024-05-14 21:58:47.311018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.750 [2024-05-14 21:58:47.311137] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.750 [2024-05-14 21:58:47.311151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:46.750 [2024-05-14 21:58:47.311179] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:46.750 [2024-05-14 21:58:47.311188] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:46.750 pt2 00:16:46.750 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:47.008 [2024-05-14 21:58:47.546957] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.008 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.266 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.266 "name": "raid_bdev1", 00:16:47.266 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:47.266 "strip_size_kb": 0, 00:16:47.266 "state": "configuring", 00:16:47.266 "raid_level": "raid1", 00:16:47.266 "superblock": true, 00:16:47.266 "num_base_bdevs": 4, 00:16:47.266 "num_base_bdevs_discovered": 1, 00:16:47.266 "num_base_bdevs_operational": 4, 00:16:47.266 "base_bdevs_list": [ 00:16:47.266 { 00:16:47.266 "name": "pt1", 00:16:47.266 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:47.266 "is_configured": true, 00:16:47.266 "data_offset": 2048, 00:16:47.266 "data_size": 63488 00:16:47.266 }, 00:16:47.266 { 00:16:47.266 "name": null, 00:16:47.266 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:47.266 "is_configured": false, 00:16:47.266 "data_offset": 2048, 00:16:47.266 "data_size": 63488 00:16:47.266 }, 00:16:47.266 { 00:16:47.266 "name": null, 00:16:47.266 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:47.266 "is_configured": false, 00:16:47.266 "data_offset": 2048, 00:16:47.266 "data_size": 63488 00:16:47.266 }, 00:16:47.266 { 00:16:47.266 "name": null, 00:16:47.266 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:47.266 "is_configured": false, 00:16:47.266 "data_offset": 2048, 00:16:47.266 "data_size": 63488 00:16:47.266 } 00:16:47.266 ] 00:16:47.266 }' 00:16:47.266 21:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.266 21:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.833 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:47.833 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:47.833 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.833 [2024-05-14 21:58:48.411009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.833 [2024-05-14 21:58:48.411104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.833 [2024-05-14 21:58:48.411153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1c80 00:16:47.833 [2024-05-14 21:58:48.411175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.833 [2024-05-14 21:58:48.411309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.833 [2024-05-14 21:58:48.411336] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.833 [2024-05-14 21:58:48.411364] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:47.833 [2024-05-14 21:58:48.411374] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.833 pt2 00:16:48.091 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:48.091 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:48.091 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:48.349 [2024-05-14 21:58:48.682998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:48.349 [2024-05-14 21:58:48.683087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.349 [2024-05-14 21:58:48.683134] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1780 00:16:48.349 [2024-05-14 21:58:48.683162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.349 [2024-05-14 21:58:48.683300] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.349 [2024-05-14 21:58:48.683313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:48.349 [2024-05-14 21:58:48.683340] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:48.349 [2024-05-14 21:58:48.683349] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:48.349 pt3 00:16:48.349 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:48.349 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:48.349 21:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:48.607 [2024-05-14 21:58:48.987008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:48.607 [2024-05-14 21:58:48.987131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.607 [2024-05-14 21:58:48.987176] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2900 00:16:48.607 [2024-05-14 21:58:48.987185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.607 [2024-05-14 21:58:48.987329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.607 [2024-05-14 21:58:48.987360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:48.607 [2024-05-14 21:58:48.987385] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:16:48.607 [2024-05-14 21:58:48.987395] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:48.607 [2024-05-14 21:58:48.987430] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d9f6300 00:16:48.607 [2024-05-14 21:58:48.987445] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:48.607 [2024-05-14 21:58:48.987469] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82da54e20 00:16:48.608 
[2024-05-14 21:58:48.987529] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d9f6300 00:16:48.608 [2024-05-14 21:58:48.987535] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d9f6300 00:16:48.608 [2024-05-14 21:58:48.987559] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.608 pt4 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.608 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.866 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.866 "name": "raid_bdev1", 00:16:48.866 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:48.866 "strip_size_kb": 0, 00:16:48.866 "state": "online", 00:16:48.866 "raid_level": "raid1", 00:16:48.866 "superblock": true, 00:16:48.866 "num_base_bdevs": 4, 00:16:48.866 "num_base_bdevs_discovered": 4, 00:16:48.866 "num_base_bdevs_operational": 4, 00:16:48.866 "base_bdevs_list": [ 00:16:48.866 { 00:16:48.866 "name": "pt1", 00:16:48.866 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:48.866 "is_configured": true, 00:16:48.866 "data_offset": 2048, 00:16:48.866 "data_size": 63488 00:16:48.866 }, 00:16:48.866 { 00:16:48.866 "name": "pt2", 00:16:48.866 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:48.866 "is_configured": true, 00:16:48.866 "data_offset": 2048, 00:16:48.866 "data_size": 63488 00:16:48.866 }, 00:16:48.866 { 00:16:48.866 "name": "pt3", 00:16:48.866 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:48.866 "is_configured": true, 00:16:48.866 "data_offset": 2048, 00:16:48.866 "data_size": 63488 00:16:48.866 }, 00:16:48.866 { 00:16:48.866 "name": "pt4", 00:16:48.866 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:48.866 "is_configured": true, 00:16:48.866 "data_offset": 2048, 00:16:48.866 "data_size": 63488 00:16:48.866 } 00:16:48.866 ] 00:16:48.866 }' 00:16:48.866 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.866 21:58:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:49.124 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:49.382 [2024-05-14 21:58:49.791144] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.382 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:49.382 "name": "raid_bdev1", 00:16:49.382 "aliases": [ 00:16:49.382 "1ffb9e41-123d-11ef-8c90-4585f0cfab08" 00:16:49.382 ], 00:16:49.382 "product_name": "Raid Volume", 00:16:49.382 "block_size": 512, 00:16:49.382 "num_blocks": 63488, 00:16:49.382 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:49.382 "assigned_rate_limits": { 00:16:49.382 "rw_ios_per_sec": 0, 00:16:49.382 "rw_mbytes_per_sec": 0, 00:16:49.382 "r_mbytes_per_sec": 0, 00:16:49.382 "w_mbytes_per_sec": 0 00:16:49.382 }, 00:16:49.382 "claimed": false, 00:16:49.382 "zoned": false, 00:16:49.382 "supported_io_types": { 00:16:49.382 "read": true, 00:16:49.382 "write": true, 00:16:49.382 "unmap": false, 00:16:49.382 "write_zeroes": true, 00:16:49.382 "flush": false, 00:16:49.382 "reset": true, 00:16:49.382 "compare": false, 00:16:49.382 "compare_and_write": false, 00:16:49.382 "abort": false, 00:16:49.382 "nvme_admin": false, 00:16:49.382 "nvme_io": false 00:16:49.382 }, 00:16:49.382 "memory_domains": [ 00:16:49.382 { 00:16:49.382 "dma_device_id": "system", 00:16:49.382 "dma_device_type": 1 00:16:49.382 }, 00:16:49.382 { 00:16:49.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.382 "dma_device_type": 2 00:16:49.382 }, 00:16:49.382 { 00:16:49.382 "dma_device_id": "system", 00:16:49.383 "dma_device_type": 1 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.383 "dma_device_type": 2 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "dma_device_id": "system", 00:16:49.383 "dma_device_type": 1 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.383 "dma_device_type": 2 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "dma_device_id": "system", 00:16:49.383 "dma_device_type": 1 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.383 "dma_device_type": 2 00:16:49.383 } 00:16:49.383 ], 00:16:49.383 "driver_specific": { 00:16:49.383 "raid": { 00:16:49.383 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:49.383 "strip_size_kb": 0, 00:16:49.383 "state": "online", 00:16:49.383 "raid_level": "raid1", 00:16:49.383 "superblock": true, 00:16:49.383 "num_base_bdevs": 4, 00:16:49.383 "num_base_bdevs_discovered": 4, 00:16:49.383 "num_base_bdevs_operational": 4, 00:16:49.383 "base_bdevs_list": [ 00:16:49.383 { 
00:16:49.383 "name": "pt1", 00:16:49.383 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:49.383 "is_configured": true, 00:16:49.383 "data_offset": 2048, 00:16:49.383 "data_size": 63488 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "name": "pt2", 00:16:49.383 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:49.383 "is_configured": true, 00:16:49.383 "data_offset": 2048, 00:16:49.383 "data_size": 63488 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "name": "pt3", 00:16:49.383 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:49.383 "is_configured": true, 00:16:49.383 "data_offset": 2048, 00:16:49.383 "data_size": 63488 00:16:49.383 }, 00:16:49.383 { 00:16:49.383 "name": "pt4", 00:16:49.383 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:49.383 "is_configured": true, 00:16:49.383 "data_offset": 2048, 00:16:49.383 "data_size": 63488 00:16:49.383 } 00:16:49.383 ] 00:16:49.383 } 00:16:49.383 } 00:16:49.383 }' 00:16:49.383 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.383 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:49.383 pt2 00:16:49.383 pt3 00:16:49.383 pt4' 00:16:49.383 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:49.383 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:49.383 21:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:49.649 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:49.650 "name": "pt1", 00:16:49.650 "aliases": [ 00:16:49.650 "1cace419-462a-c452-a112-b6c66d85fb84" 00:16:49.650 ], 00:16:49.650 "product_name": "passthru", 00:16:49.650 "block_size": 512, 00:16:49.650 "num_blocks": 65536, 00:16:49.650 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:49.650 "assigned_rate_limits": { 00:16:49.650 "rw_ios_per_sec": 0, 00:16:49.650 "rw_mbytes_per_sec": 0, 00:16:49.650 "r_mbytes_per_sec": 0, 00:16:49.650 "w_mbytes_per_sec": 0 00:16:49.650 }, 00:16:49.650 "claimed": true, 00:16:49.650 "claim_type": "exclusive_write", 00:16:49.650 "zoned": false, 00:16:49.650 "supported_io_types": { 00:16:49.650 "read": true, 00:16:49.650 "write": true, 00:16:49.650 "unmap": true, 00:16:49.650 "write_zeroes": true, 00:16:49.650 "flush": true, 00:16:49.650 "reset": true, 00:16:49.650 "compare": false, 00:16:49.650 "compare_and_write": false, 00:16:49.650 "abort": true, 00:16:49.650 "nvme_admin": false, 00:16:49.650 "nvme_io": false 00:16:49.650 }, 00:16:49.650 "memory_domains": [ 00:16:49.650 { 00:16:49.650 "dma_device_id": "system", 00:16:49.650 "dma_device_type": 1 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.650 "dma_device_type": 2 00:16:49.650 } 00:16:49.650 ], 00:16:49.650 "driver_specific": { 00:16:49.650 "passthru": { 00:16:49.650 "name": "pt1", 00:16:49.650 "base_bdev_name": "malloc1" 00:16:49.650 } 00:16:49.650 } 00:16:49.650 }' 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:49.650 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:49.912 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:49.912 "name": "pt2", 00:16:49.912 "aliases": [ 00:16:49.912 "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5" 00:16:49.912 ], 00:16:49.912 "product_name": "passthru", 00:16:49.912 "block_size": 512, 00:16:49.912 "num_blocks": 65536, 00:16:49.912 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:49.912 "assigned_rate_limits": { 00:16:49.912 "rw_ios_per_sec": 0, 00:16:49.912 "rw_mbytes_per_sec": 0, 00:16:49.912 "r_mbytes_per_sec": 0, 00:16:49.912 "w_mbytes_per_sec": 0 00:16:49.912 }, 00:16:49.912 "claimed": true, 00:16:49.912 "claim_type": "exclusive_write", 00:16:49.912 "zoned": false, 00:16:49.912 "supported_io_types": { 00:16:49.912 "read": true, 00:16:49.912 "write": true, 00:16:49.912 "unmap": true, 00:16:49.912 "write_zeroes": true, 00:16:49.912 "flush": true, 00:16:49.912 "reset": true, 00:16:49.912 "compare": false, 00:16:49.912 "compare_and_write": false, 00:16:49.912 "abort": true, 00:16:49.912 "nvme_admin": false, 00:16:49.912 "nvme_io": false 00:16:49.912 }, 00:16:49.912 "memory_domains": [ 00:16:49.912 { 00:16:49.912 "dma_device_id": "system", 00:16:49.912 "dma_device_type": 1 00:16:49.912 }, 00:16:49.912 { 00:16:49.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.912 "dma_device_type": 2 00:16:49.912 } 00:16:49.912 ], 00:16:49.912 "driver_specific": { 00:16:49.912 "passthru": { 00:16:49.912 "name": "pt2", 00:16:49.912 "base_bdev_name": "malloc2" 00:16:49.912 } 00:16:49.912 } 00:16:49.912 }' 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.171 21:58:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:50.171 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:50.430 "name": "pt3", 00:16:50.430 "aliases": [ 00:16:50.430 "64c78825-21d2-605d-8952-556b899942b3" 00:16:50.430 ], 00:16:50.430 "product_name": "passthru", 00:16:50.430 "block_size": 512, 00:16:50.430 "num_blocks": 65536, 00:16:50.430 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:50.430 "assigned_rate_limits": { 00:16:50.430 "rw_ios_per_sec": 0, 00:16:50.430 "rw_mbytes_per_sec": 0, 00:16:50.430 "r_mbytes_per_sec": 0, 00:16:50.430 "w_mbytes_per_sec": 0 00:16:50.430 }, 00:16:50.430 "claimed": true, 00:16:50.430 "claim_type": "exclusive_write", 00:16:50.430 "zoned": false, 00:16:50.430 "supported_io_types": { 00:16:50.430 "read": true, 00:16:50.430 "write": true, 00:16:50.430 "unmap": true, 00:16:50.430 "write_zeroes": true, 00:16:50.430 "flush": true, 00:16:50.430 "reset": true, 00:16:50.430 "compare": false, 00:16:50.430 "compare_and_write": false, 00:16:50.430 "abort": true, 00:16:50.430 "nvme_admin": false, 00:16:50.430 "nvme_io": false 00:16:50.430 }, 00:16:50.430 "memory_domains": [ 00:16:50.430 { 00:16:50.430 "dma_device_id": "system", 00:16:50.430 "dma_device_type": 1 00:16:50.430 }, 00:16:50.430 { 00:16:50.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.430 "dma_device_type": 2 00:16:50.430 } 00:16:50.430 ], 00:16:50.430 "driver_specific": { 00:16:50.430 "passthru": { 00:16:50.430 "name": "pt3", 00:16:50.430 "base_bdev_name": "malloc3" 00:16:50.430 } 00:16:50.430 } 00:16:50.430 }' 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:50.430 21:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:50.688 "name": "pt4", 00:16:50.688 "aliases": [ 00:16:50.688 "9ee97e75-5b12-db5c-ab03-6542d76b81c2" 00:16:50.688 ], 00:16:50.688 "product_name": "passthru", 00:16:50.688 "block_size": 512, 00:16:50.688 "num_blocks": 65536, 00:16:50.688 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:50.688 "assigned_rate_limits": { 00:16:50.688 "rw_ios_per_sec": 0, 00:16:50.688 "rw_mbytes_per_sec": 0, 00:16:50.688 "r_mbytes_per_sec": 0, 00:16:50.688 "w_mbytes_per_sec": 0 00:16:50.688 }, 00:16:50.688 "claimed": true, 00:16:50.688 "claim_type": "exclusive_write", 00:16:50.688 "zoned": false, 00:16:50.688 "supported_io_types": { 00:16:50.688 "read": true, 00:16:50.688 "write": true, 00:16:50.688 "unmap": true, 00:16:50.688 "write_zeroes": true, 00:16:50.688 "flush": true, 00:16:50.688 "reset": true, 00:16:50.688 "compare": false, 00:16:50.688 "compare_and_write": false, 00:16:50.688 "abort": true, 00:16:50.688 "nvme_admin": false, 00:16:50.688 "nvme_io": false 00:16:50.688 }, 00:16:50.688 "memory_domains": [ 00:16:50.688 { 00:16:50.688 "dma_device_id": "system", 00:16:50.688 "dma_device_type": 1 00:16:50.688 }, 00:16:50.688 { 00:16:50.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.688 "dma_device_type": 2 00:16:50.688 } 00:16:50.688 ], 00:16:50.688 "driver_specific": { 00:16:50.688 "passthru": { 00:16:50.688 "name": "pt4", 00:16:50.688 "base_bdev_name": "malloc4" 00:16:50.688 } 00:16:50.688 } 00:16:50.688 }' 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:50.688 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:50.946 [2024-05-14 21:58:51.483379] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.946 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
1ffb9e41-123d-11ef-8c90-4585f0cfab08 '!=' 1ffb9e41-123d-11ef-8c90-4585f0cfab08 ']' 00:16:50.946 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:50.946 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:50.946 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:16:50.946 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:51.204 [2024-05-14 21:58:51.759329] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:51.204 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:51.204 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:51.204 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:51.204 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:51.204 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:51.204 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.205 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.205 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.205 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.205 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.205 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.205 21:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.771 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.771 "name": "raid_bdev1", 00:16:51.771 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:51.771 "strip_size_kb": 0, 00:16:51.771 "state": "online", 00:16:51.771 "raid_level": "raid1", 00:16:51.771 "superblock": true, 00:16:51.771 "num_base_bdevs": 4, 00:16:51.771 "num_base_bdevs_discovered": 3, 00:16:51.771 "num_base_bdevs_operational": 3, 00:16:51.771 "base_bdevs_list": [ 00:16:51.771 { 00:16:51.771 "name": null, 00:16:51.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.771 "is_configured": false, 00:16:51.771 "data_offset": 2048, 00:16:51.771 "data_size": 63488 00:16:51.771 }, 00:16:51.771 { 00:16:51.771 "name": "pt2", 00:16:51.771 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:51.771 "is_configured": true, 00:16:51.771 "data_offset": 2048, 00:16:51.771 "data_size": 63488 00:16:51.771 }, 00:16:51.771 { 00:16:51.771 "name": "pt3", 00:16:51.771 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:51.771 "is_configured": true, 00:16:51.771 "data_offset": 2048, 00:16:51.771 "data_size": 63488 00:16:51.771 }, 00:16:51.771 { 00:16:51.771 "name": "pt4", 00:16:51.771 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:51.771 "is_configured": true, 00:16:51.771 "data_offset": 2048, 00:16:51.771 "data_size": 63488 00:16:51.771 } 00:16:51.771 ] 00:16:51.771 }' 00:16:51.771 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 
-- # xtrace_disable 00:16:51.771 21:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.030 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:52.288 [2024-05-14 21:58:52.691429] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.288 [2024-05-14 21:58:52.691462] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.288 [2024-05-14 21:58:52.691529] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.288 [2024-05-14 21:58:52.691547] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.288 [2024-05-14 21:58:52.691552] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d9f6300 name raid_bdev1, state offline 00:16:52.288 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.288 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:52.547 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:52.547 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:52.547 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:52.547 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:52.547 21:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:52.805 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:52.805 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:52.805 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:53.063 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:53.063 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.063 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:53.322 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:53.322 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.322 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:53.322 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:53.322 21:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.580 [2024-05-14 21:58:54.007511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.580 [2024-05-14 21:58:54.007610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.580 [2024-05-14 21:58:54.007657] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2900 00:16:53.580 [2024-05-14 21:58:54.007694] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.580 [2024-05-14 21:58:54.008371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.580 [2024-05-14 21:58:54.008403] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.580 [2024-05-14 21:58:54.008432] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:53.580 [2024-05-14 21:58:54.008445] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.580 pt2 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.580 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.838 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.838 "name": "raid_bdev1", 00:16:53.838 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:53.838 "strip_size_kb": 0, 00:16:53.838 "state": "configuring", 00:16:53.839 "raid_level": "raid1", 00:16:53.839 "superblock": true, 00:16:53.839 "num_base_bdevs": 4, 00:16:53.839 "num_base_bdevs_discovered": 1, 00:16:53.839 "num_base_bdevs_operational": 3, 00:16:53.839 "base_bdevs_list": [ 00:16:53.839 { 00:16:53.839 "name": null, 00:16:53.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.839 "is_configured": false, 00:16:53.839 "data_offset": 2048, 00:16:53.839 "data_size": 63488 00:16:53.839 }, 00:16:53.839 { 00:16:53.839 "name": "pt2", 00:16:53.839 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:53.839 "is_configured": true, 00:16:53.839 "data_offset": 2048, 00:16:53.839 "data_size": 63488 00:16:53.839 }, 00:16:53.839 { 00:16:53.839 "name": null, 00:16:53.839 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:53.839 "is_configured": false, 00:16:53.839 "data_offset": 2048, 00:16:53.839 "data_size": 63488 00:16:53.839 }, 00:16:53.839 { 00:16:53.839 "name": null, 00:16:53.839 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:53.839 "is_configured": false, 00:16:53.839 "data_offset": 2048, 00:16:53.839 "data_size": 63488 00:16:53.839 } 00:16:53.839 ] 00:16:53.839 }' 00:16:53.839 21:58:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.839 21:58:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.097 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:54.097 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:54.097 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:54.377 [2024-05-14 21:58:54.955568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:54.377 [2024-05-14 21:58:54.955659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.377 [2024-05-14 21:58:54.955708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1c80 00:16:54.377 [2024-05-14 21:58:54.955717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.377 [2024-05-14 21:58:54.955853] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.377 [2024-05-14 21:58:54.955877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:54.377 [2024-05-14 21:58:54.955907] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:54.377 [2024-05-14 21:58:54.955916] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:54.377 pt3 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.635 21:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.635 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.635 "name": "raid_bdev1", 00:16:54.635 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:54.635 "strip_size_kb": 0, 00:16:54.635 "state": "configuring", 00:16:54.635 "raid_level": "raid1", 00:16:54.635 "superblock": true, 00:16:54.635 "num_base_bdevs": 4, 00:16:54.635 "num_base_bdevs_discovered": 2, 00:16:54.635 "num_base_bdevs_operational": 3, 00:16:54.635 "base_bdevs_list": [ 00:16:54.635 { 00:16:54.635 
"name": null, 00:16:54.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.635 "is_configured": false, 00:16:54.635 "data_offset": 2048, 00:16:54.635 "data_size": 63488 00:16:54.635 }, 00:16:54.635 { 00:16:54.635 "name": "pt2", 00:16:54.635 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:54.635 "is_configured": true, 00:16:54.635 "data_offset": 2048, 00:16:54.635 "data_size": 63488 00:16:54.635 }, 00:16:54.635 { 00:16:54.635 "name": "pt3", 00:16:54.635 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:54.635 "is_configured": true, 00:16:54.635 "data_offset": 2048, 00:16:54.635 "data_size": 63488 00:16:54.635 }, 00:16:54.635 { 00:16:54.635 "name": null, 00:16:54.635 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:54.635 "is_configured": false, 00:16:54.635 "data_offset": 2048, 00:16:54.635 "data_size": 63488 00:16:54.635 } 00:16:54.635 ] 00:16:54.635 }' 00:16:54.635 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.635 21:58:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.199 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:55.199 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:55.199 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:55.199 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:55.457 [2024-05-14 21:58:55.815589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:55.457 [2024-05-14 21:58:55.815653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.457 [2024-05-14 21:58:55.815685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2180 00:16:55.457 [2024-05-14 21:58:55.815695] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.457 [2024-05-14 21:58:55.815822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.457 [2024-05-14 21:58:55.815836] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:55.457 [2024-05-14 21:58:55.815873] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:16:55.457 [2024-05-14 21:58:55.815883] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:55.457 [2024-05-14 21:58:55.815924] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d9f6300 00:16:55.457 [2024-05-14 21:58:55.815930] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:55.457 [2024-05-14 21:58:55.815952] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82da54e20 00:16:55.457 [2024-05-14 21:58:55.816023] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d9f6300 00:16:55.457 [2024-05-14 21:58:55.816030] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d9f6300 00:16:55.457 [2024-05-14 21:58:55.816052] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.457 pt4 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.457 21:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.714 21:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.715 "name": "raid_bdev1", 00:16:55.715 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:55.715 "strip_size_kb": 0, 00:16:55.715 "state": "online", 00:16:55.715 "raid_level": "raid1", 00:16:55.715 "superblock": true, 00:16:55.715 "num_base_bdevs": 4, 00:16:55.715 "num_base_bdevs_discovered": 3, 00:16:55.715 "num_base_bdevs_operational": 3, 00:16:55.715 "base_bdevs_list": [ 00:16:55.715 { 00:16:55.715 "name": null, 00:16:55.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.715 "is_configured": false, 00:16:55.715 "data_offset": 2048, 00:16:55.715 "data_size": 63488 00:16:55.715 }, 00:16:55.715 { 00:16:55.715 "name": "pt2", 00:16:55.715 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:55.715 "is_configured": true, 00:16:55.715 "data_offset": 2048, 00:16:55.715 "data_size": 63488 00:16:55.715 }, 00:16:55.715 { 00:16:55.715 "name": "pt3", 00:16:55.715 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:55.715 "is_configured": true, 00:16:55.715 "data_offset": 2048, 00:16:55.715 "data_size": 63488 00:16:55.715 }, 00:16:55.715 { 00:16:55.715 "name": "pt4", 00:16:55.715 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:55.715 "is_configured": true, 00:16:55.715 "data_offset": 2048, 00:16:55.715 "data_size": 63488 00:16:55.715 } 00:16:55.715 ] 00:16:55.715 }' 00:16:55.715 21:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.715 21:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.972 21:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 4 -gt 2 ']' 00:16:55.972 21:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:56.230 [2024-05-14 21:58:56.719606] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.230 [2024-05-14 21:58:56.719634] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.230 [2024-05-14 21:58:56.719658] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.230 [2024-05-14 21:58:56.719676] bdev_raid.c: 
425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.230 [2024-05-14 21:58:56.719681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d9f6300 name raid_bdev1, state offline 00:16:56.230 21:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:16:56.230 21:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.543 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:16:56.543 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:16:56.543 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:56.819 [2024-05-14 21:58:57.275614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.819 [2024-05-14 21:58:57.275679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.819 [2024-05-14 21:58:57.275709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1780 00:16:56.819 [2024-05-14 21:58:57.275719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.819 [2024-05-14 21:58:57.276398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.819 [2024-05-14 21:58:57.276427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.819 [2024-05-14 21:58:57.276457] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:56.819 [2024-05-14 21:58:57.276470] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:56.819 pt1 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.819 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.077 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.077 "name": "raid_bdev1", 00:16:57.077 "uuid": 
"1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:57.077 "strip_size_kb": 0, 00:16:57.077 "state": "configuring", 00:16:57.077 "raid_level": "raid1", 00:16:57.077 "superblock": true, 00:16:57.077 "num_base_bdevs": 4, 00:16:57.077 "num_base_bdevs_discovered": 1, 00:16:57.077 "num_base_bdevs_operational": 4, 00:16:57.077 "base_bdevs_list": [ 00:16:57.077 { 00:16:57.077 "name": "pt1", 00:16:57.077 "uuid": "1cace419-462a-c452-a112-b6c66d85fb84", 00:16:57.077 "is_configured": true, 00:16:57.077 "data_offset": 2048, 00:16:57.077 "data_size": 63488 00:16:57.077 }, 00:16:57.077 { 00:16:57.077 "name": null, 00:16:57.077 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:57.077 "is_configured": false, 00:16:57.077 "data_offset": 2048, 00:16:57.077 "data_size": 63488 00:16:57.077 }, 00:16:57.077 { 00:16:57.077 "name": null, 00:16:57.077 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:57.077 "is_configured": false, 00:16:57.077 "data_offset": 2048, 00:16:57.077 "data_size": 63488 00:16:57.077 }, 00:16:57.077 { 00:16:57.077 "name": null, 00:16:57.077 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:57.077 "is_configured": false, 00:16:57.077 "data_offset": 2048, 00:16:57.077 "data_size": 63488 00:16:57.077 } 00:16:57.077 ] 00:16:57.077 }' 00:16:57.077 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.077 21:58:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.335 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:16:57.335 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:16:57.335 21:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:57.592 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:16:57.592 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:16:57.593 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:57.851 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:16:57.851 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:16:57.851 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:58.108 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:16:58.108 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:16:58.108 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=3 00:16:58.109 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:58.366 [2024-05-14 21:58:58.935738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:58.366 [2024-05-14 21:58:58.935805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.366 [2024-05-14 21:58:58.935835] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2180 00:16:58.366 [2024-05-14 21:58:58.935845] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.366 [2024-05-14 21:58:58.935992] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.366 [2024-05-14 21:58:58.936008] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:58.366 [2024-05-14 21:58:58.936045] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:16:58.366 [2024-05-14 21:58:58.936052] bdev_raid.c:3398:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:58.366 [2024-05-14 21:58:58.936055] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.366 [2024-05-14 21:58:58.936062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d9f6300 name raid_bdev1, state configuring 00:16:58.366 [2024-05-14 21:58:58.936076] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:58.366 pt4 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.366 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.625 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.625 21:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.884 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.884 "name": "raid_bdev1", 00:16:58.884 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:58.884 "strip_size_kb": 0, 00:16:58.884 "state": "configuring", 00:16:58.884 "raid_level": "raid1", 00:16:58.884 "superblock": true, 00:16:58.884 "num_base_bdevs": 4, 00:16:58.884 "num_base_bdevs_discovered": 1, 00:16:58.884 "num_base_bdevs_operational": 3, 00:16:58.884 "base_bdevs_list": [ 00:16:58.884 { 00:16:58.884 "name": null, 00:16:58.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.884 "is_configured": false, 00:16:58.884 "data_offset": 2048, 00:16:58.884 "data_size": 63488 00:16:58.884 }, 00:16:58.884 { 00:16:58.884 "name": null, 00:16:58.884 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:58.884 "is_configured": false, 00:16:58.884 "data_offset": 2048, 00:16:58.884 "data_size": 63488 00:16:58.884 }, 00:16:58.884 { 00:16:58.884 "name": null, 00:16:58.884 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:58.884 "is_configured": false, 00:16:58.884 
"data_offset": 2048, 00:16:58.884 "data_size": 63488 00:16:58.884 }, 00:16:58.884 { 00:16:58.884 "name": "pt4", 00:16:58.884 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:58.884 "is_configured": true, 00:16:58.884 "data_offset": 2048, 00:16:58.884 "data_size": 63488 00:16:58.884 } 00:16:58.884 ] 00:16:58.884 }' 00:16:58.884 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.884 21:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.142 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:16:59.142 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:16:59.142 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.401 [2024-05-14 21:58:59.823798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.401 [2024-05-14 21:58:59.823866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.401 [2024-05-14 21:58:59.823896] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f1c80 00:16:59.401 [2024-05-14 21:58:59.823906] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.401 [2024-05-14 21:58:59.824033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.401 [2024-05-14 21:58:59.824047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.401 [2024-05-14 21:58:59.824074] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:59.401 [2024-05-14 21:58:59.824084] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.401 pt2 00:16:59.401 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:16:59.401 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:16:59.401 21:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.659 [2024-05-14 21:59:00.071818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:59.659 [2024-05-14 21:59:00.071892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.659 [2024-05-14 21:59:00.071920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9f2900 00:16:59.659 [2024-05-14 21:59:00.071929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.659 [2024-05-14 21:59:00.072099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.659 [2024-05-14 21:59:00.072130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.659 [2024-05-14 21:59:00.072156] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:59.659 [2024-05-14 21:59:00.072165] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.659 [2024-05-14 21:59:00.072198] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d9f6300 00:16:59.659 [2024-05-14 21:59:00.072204] 
bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.659 [2024-05-14 21:59:00.072226] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82da54e20 00:16:59.659 [2024-05-14 21:59:00.072274] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d9f6300 00:16:59.659 [2024-05-14 21:59:00.072280] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d9f6300 00:16:59.659 [2024-05-14 21:59:00.072303] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.659 pt3 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.659 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.918 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.918 "name": "raid_bdev1", 00:16:59.918 "uuid": "1ffb9e41-123d-11ef-8c90-4585f0cfab08", 00:16:59.918 "strip_size_kb": 0, 00:16:59.918 "state": "online", 00:16:59.918 "raid_level": "raid1", 00:16:59.918 "superblock": true, 00:16:59.918 "num_base_bdevs": 4, 00:16:59.918 "num_base_bdevs_discovered": 3, 00:16:59.918 "num_base_bdevs_operational": 3, 00:16:59.918 "base_bdevs_list": [ 00:16:59.918 { 00:16:59.918 "name": null, 00:16:59.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.918 "is_configured": false, 00:16:59.918 "data_offset": 2048, 00:16:59.918 "data_size": 63488 00:16:59.918 }, 00:16:59.918 { 00:16:59.918 "name": "pt2", 00:16:59.918 "uuid": "f5c9e6d7-a70a-3a5a-b489-ce966fcf8ee5", 00:16:59.918 "is_configured": true, 00:16:59.918 "data_offset": 2048, 00:16:59.918 "data_size": 63488 00:16:59.918 }, 00:16:59.918 { 00:16:59.918 "name": "pt3", 00:16:59.918 "uuid": "64c78825-21d2-605d-8952-556b899942b3", 00:16:59.918 "is_configured": true, 00:16:59.918 "data_offset": 2048, 00:16:59.918 "data_size": 63488 00:16:59.918 }, 00:16:59.918 { 00:16:59.918 "name": "pt4", 00:16:59.918 "uuid": "9ee97e75-5b12-db5c-ab03-6542d76b81c2", 00:16:59.918 "is_configured": true, 00:16:59.918 "data_offset": 
2048, 00:16:59.918 "data_size": 63488 00:16:59.918 } 00:16:59.918 ] 00:16:59.918 }' 00:16:59.918 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.918 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.176 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:00.176 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:17:00.434 [2024-05-14 21:59:00.919888] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 1ffb9e41-123d-11ef-8c90-4585f0cfab08 '!=' 1ffb9e41-123d-11ef-8c90-4585f0cfab08 ']' 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 62889 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 62889 ']' 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 62889 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 62889 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:17:00.434 killing process with pid 62889 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62889' 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 62889 00:17:00.434 [2024-05-14 21:59:00.952425] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.434 [2024-05-14 21:59:00.952448] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.434 [2024-05-14 21:59:00.952466] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.434 [2024-05-14 21:59:00.952471] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d9f6300 name raid_bdev1, state offline 00:17:00.434 21:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 62889 00:17:00.434 [2024-05-14 21:59:00.976806] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.693 21:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:17:00.693 00:17:00.693 real 0m23.726s 00:17:00.693 user 0m43.470s 00:17:00.693 sys 0m3.120s 00:17:00.693 21:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:00.693 21:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.693 ************************************ 00:17:00.693 END TEST raid_superblock_test 00:17:00.693 ************************************ 00:17:00.693 21:59:01 bdev_raid -- bdev/bdev_raid.sh@821 -- # '[' '' = true ']' 00:17:00.693 21:59:01 bdev_raid -- bdev/bdev_raid.sh@830 -- # '[' n == y ']' 00:17:00.693 21:59:01 bdev_raid 
-- bdev/bdev_raid.sh@842 -- # base_blocklen=4096 00:17:00.693 21:59:01 bdev_raid -- bdev/bdev_raid.sh@844 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:00.693 21:59:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:00.693 21:59:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:00.693 21:59:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.693 ************************************ 00:17:00.693 START TEST raid_state_function_test_sb_4k 00:17:00.693 ************************************ 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # raid_pid=63559 00:17:00.693 Process raid pid: 63559 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 63559' 00:17:00.693 21:59:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@247 -- # waitforlisten 63559 /var/tmp/spdk-raid.sock 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 63559 ']' 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:00.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:00.693 21:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.693 [2024-05-14 21:59:01.231068] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:00.693 [2024-05-14 21:59:01.231364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:01.259 EAL: TSC is not safe to use in SMP mode 00:17:01.259 EAL: TSC is not invariant 00:17:01.259 [2024-05-14 21:59:01.826617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.518 [2024-05-14 21:59:01.922762] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
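(Sketch, not part of the captured output: the tests in this log drive a standalone bdev_svc app over the /var/tmp/spdk-raid.sock RPC socket using only rpc.py calls that appear verbatim above. Assuming such an app is already running, the raid1 assembly being exercised can be reproduced by hand roughly as below; the base-bdev names and the 32 MiB / 4096-byte malloc sizing simply mirror this run and are otherwise arbitrary.)
rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Base devices: malloc bdevs wrapped in passthru bdevs, so the raid module can claim
# and release them (the tests delete/recreate the pt* bdevs) without touching the mallocs.
$rpc -s $sock bdev_malloc_create 32 4096 -b malloc1
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_malloc_create 32 4096 -b malloc2
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# Assemble a raid1 bdev with an on-disk superblock (-s) from the two passthru bdevs.
$rpc -s $sock bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1
# Inspect the raid state (the same jq filter the test uses), then tear it down.
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
$rpc -s $sock bdev_raid_delete raid_bdev1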
00:17:01.518 [2024-05-14 21:59:01.925110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.518 [2024-05-14 21:59:01.925946] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.518 [2024-05-14 21:59:01.925963] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.776 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.776 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:17:01.776 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:02.035 [2024-05-14 21:59:02.561974] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.035 [2024-05-14 21:59:02.562034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.035 [2024-05-14 21:59:02.562041] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.035 [2024-05-14 21:59:02.562050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.035 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.294 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.294 "name": "Existed_Raid", 00:17:02.294 "uuid": "2ccb96d3-123d-11ef-8c90-4585f0cfab08", 00:17:02.294 "strip_size_kb": 0, 00:17:02.294 "state": "configuring", 00:17:02.294 "raid_level": "raid1", 00:17:02.294 "superblock": true, 00:17:02.294 "num_base_bdevs": 2, 00:17:02.294 "num_base_bdevs_discovered": 0, 00:17:02.294 "num_base_bdevs_operational": 2, 00:17:02.294 "base_bdevs_list": [ 00:17:02.294 { 00:17:02.294 "name": "BaseBdev1", 00:17:02.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.294 "is_configured": false, 00:17:02.294 "data_offset": 0, 
00:17:02.294 "data_size": 0 00:17:02.294 }, 00:17:02.294 { 00:17:02.294 "name": "BaseBdev2", 00:17:02.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.294 "is_configured": false, 00:17:02.294 "data_offset": 0, 00:17:02.294 "data_size": 0 00:17:02.294 } 00:17:02.294 ] 00:17:02.294 }' 00:17:02.294 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.294 21:59:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.861 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:02.861 [2024-05-14 21:59:03.413960] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.861 [2024-05-14 21:59:03.413986] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a3e3300 name Existed_Raid, state configuring 00:17:02.861 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:03.119 [2024-05-14 21:59:03.661995] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.119 [2024-05-14 21:59:03.662127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.119 [2024-05-14 21:59:03.662133] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.119 [2024-05-14 21:59:03.662159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.119 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:17:03.377 [2024-05-14 21:59:03.899153] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.377 BaseBdev1 00:17:03.377 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:17:03.377 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:03.377 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:03.377 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:17:03.377 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:03.377 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:03.377 21:59:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.636 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:03.895 [ 00:17:03.895 { 00:17:03.895 "name": "BaseBdev1", 00:17:03.895 "aliases": [ 00:17:03.895 "2d9776be-123d-11ef-8c90-4585f0cfab08" 00:17:03.895 ], 00:17:03.895 "product_name": "Malloc disk", 00:17:03.895 "block_size": 4096, 00:17:03.895 "num_blocks": 8192, 00:17:03.895 "uuid": "2d9776be-123d-11ef-8c90-4585f0cfab08", 00:17:03.895 
"assigned_rate_limits": { 00:17:03.895 "rw_ios_per_sec": 0, 00:17:03.895 "rw_mbytes_per_sec": 0, 00:17:03.895 "r_mbytes_per_sec": 0, 00:17:03.895 "w_mbytes_per_sec": 0 00:17:03.895 }, 00:17:03.895 "claimed": true, 00:17:03.895 "claim_type": "exclusive_write", 00:17:03.895 "zoned": false, 00:17:03.895 "supported_io_types": { 00:17:03.895 "read": true, 00:17:03.895 "write": true, 00:17:03.895 "unmap": true, 00:17:03.895 "write_zeroes": true, 00:17:03.895 "flush": true, 00:17:03.895 "reset": true, 00:17:03.895 "compare": false, 00:17:03.895 "compare_and_write": false, 00:17:03.895 "abort": true, 00:17:03.895 "nvme_admin": false, 00:17:03.895 "nvme_io": false 00:17:03.895 }, 00:17:03.895 "memory_domains": [ 00:17:03.895 { 00:17:03.895 "dma_device_id": "system", 00:17:03.895 "dma_device_type": 1 00:17:03.895 }, 00:17:03.895 { 00:17:03.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.895 "dma_device_type": 2 00:17:03.895 } 00:17:03.895 ], 00:17:03.895 "driver_specific": {} 00:17:03.895 } 00:17:03.895 ] 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.895 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.153 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.153 "name": "Existed_Raid", 00:17:04.153 "uuid": "2d737079-123d-11ef-8c90-4585f0cfab08", 00:17:04.153 "strip_size_kb": 0, 00:17:04.153 "state": "configuring", 00:17:04.153 "raid_level": "raid1", 00:17:04.153 "superblock": true, 00:17:04.153 "num_base_bdevs": 2, 00:17:04.153 "num_base_bdevs_discovered": 1, 00:17:04.153 "num_base_bdevs_operational": 2, 00:17:04.153 "base_bdevs_list": [ 00:17:04.153 { 00:17:04.153 "name": "BaseBdev1", 00:17:04.153 "uuid": "2d9776be-123d-11ef-8c90-4585f0cfab08", 00:17:04.153 "is_configured": true, 00:17:04.153 "data_offset": 256, 00:17:04.153 "data_size": 7936 00:17:04.153 }, 00:17:04.153 { 00:17:04.153 "name": "BaseBdev2", 00:17:04.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.153 "is_configured": false, 
00:17:04.153 "data_offset": 0, 00:17:04.153 "data_size": 0 00:17:04.153 } 00:17:04.153 ] 00:17:04.153 }' 00:17:04.153 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.153 21:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.719 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:04.979 [2024-05-14 21:59:05.326181] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.979 [2024-05-14 21:59:05.326218] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a3e3300 name Existed_Raid, state configuring 00:17:04.979 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:05.237 [2024-05-14 21:59:05.610214] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.237 [2024-05-14 21:59:05.611032] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.237 [2024-05-14 21:59:05.611078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.237 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.238 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.238 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.496 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.496 "name": "Existed_Raid", 00:17:05.496 "uuid": "2e9cb68a-123d-11ef-8c90-4585f0cfab08", 00:17:05.496 "strip_size_kb": 0, 00:17:05.496 "state": "configuring", 00:17:05.496 "raid_level": "raid1", 00:17:05.496 "superblock": true, 00:17:05.496 
"num_base_bdevs": 2, 00:17:05.496 "num_base_bdevs_discovered": 1, 00:17:05.496 "num_base_bdevs_operational": 2, 00:17:05.496 "base_bdevs_list": [ 00:17:05.496 { 00:17:05.496 "name": "BaseBdev1", 00:17:05.496 "uuid": "2d9776be-123d-11ef-8c90-4585f0cfab08", 00:17:05.496 "is_configured": true, 00:17:05.496 "data_offset": 256, 00:17:05.496 "data_size": 7936 00:17:05.496 }, 00:17:05.496 { 00:17:05.496 "name": "BaseBdev2", 00:17:05.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.496 "is_configured": false, 00:17:05.496 "data_offset": 0, 00:17:05.496 "data_size": 0 00:17:05.496 } 00:17:05.496 ] 00:17:05.496 }' 00:17:05.496 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.496 21:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.754 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:17:06.011 [2024-05-14 21:59:06.570386] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.011 [2024-05-14 21:59:06.570487] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a3e3300 00:17:06.011 [2024-05-14 21:59:06.570505] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:06.011 [2024-05-14 21:59:06.570532] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a441ec0 00:17:06.011 [2024-05-14 21:59:06.570585] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a3e3300 00:17:06.011 [2024-05-14 21:59:06.570591] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a3e3300 00:17:06.011 [2024-05-14 21:59:06.570616] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.011 BaseBdev2 00:17:06.011 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:17:06.011 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:06.011 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:06.011 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:17:06.011 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:06.011 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:06.011 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.576 21:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:06.834 [ 00:17:06.834 { 00:17:06.834 "name": "BaseBdev2", 00:17:06.834 "aliases": [ 00:17:06.834 "2f2f344e-123d-11ef-8c90-4585f0cfab08" 00:17:06.834 ], 00:17:06.834 "product_name": "Malloc disk", 00:17:06.834 "block_size": 4096, 00:17:06.834 "num_blocks": 8192, 00:17:06.834 "uuid": "2f2f344e-123d-11ef-8c90-4585f0cfab08", 00:17:06.834 "assigned_rate_limits": { 00:17:06.834 "rw_ios_per_sec": 0, 00:17:06.834 "rw_mbytes_per_sec": 0, 00:17:06.834 "r_mbytes_per_sec": 0, 
00:17:06.834 "w_mbytes_per_sec": 0 00:17:06.834 }, 00:17:06.834 "claimed": true, 00:17:06.834 "claim_type": "exclusive_write", 00:17:06.834 "zoned": false, 00:17:06.834 "supported_io_types": { 00:17:06.834 "read": true, 00:17:06.834 "write": true, 00:17:06.834 "unmap": true, 00:17:06.834 "write_zeroes": true, 00:17:06.834 "flush": true, 00:17:06.834 "reset": true, 00:17:06.834 "compare": false, 00:17:06.834 "compare_and_write": false, 00:17:06.834 "abort": true, 00:17:06.834 "nvme_admin": false, 00:17:06.834 "nvme_io": false 00:17:06.834 }, 00:17:06.834 "memory_domains": [ 00:17:06.834 { 00:17:06.834 "dma_device_id": "system", 00:17:06.834 "dma_device_type": 1 00:17:06.834 }, 00:17:06.834 { 00:17:06.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.834 "dma_device_type": 2 00:17:06.834 } 00:17:06.835 ], 00:17:06.835 "driver_specific": {} 00:17:06.835 } 00:17:06.835 ] 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.835 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.093 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.093 "name": "Existed_Raid", 00:17:07.093 "uuid": "2e9cb68a-123d-11ef-8c90-4585f0cfab08", 00:17:07.093 "strip_size_kb": 0, 00:17:07.093 "state": "online", 00:17:07.093 "raid_level": "raid1", 00:17:07.093 "superblock": true, 00:17:07.093 "num_base_bdevs": 2, 00:17:07.093 "num_base_bdevs_discovered": 2, 00:17:07.093 "num_base_bdevs_operational": 2, 00:17:07.093 "base_bdevs_list": [ 00:17:07.093 { 00:17:07.093 "name": "BaseBdev1", 00:17:07.093 "uuid": "2d9776be-123d-11ef-8c90-4585f0cfab08", 00:17:07.093 "is_configured": true, 00:17:07.093 "data_offset": 256, 00:17:07.093 "data_size": 7936 00:17:07.093 }, 00:17:07.093 { 00:17:07.093 "name": "BaseBdev2", 00:17:07.093 "uuid": 
"2f2f344e-123d-11ef-8c90-4585f0cfab08", 00:17:07.093 "is_configured": true, 00:17:07.093 "data_offset": 256, 00:17:07.093 "data_size": 7936 00:17:07.093 } 00:17:07.093 ] 00:17:07.093 }' 00:17:07.093 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.093 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.351 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:17:07.352 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:07.352 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:07.352 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:07.352 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:07.352 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # local name 00:17:07.352 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:07.352 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:07.610 [2024-05-14 21:59:07.958365] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.610 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:07.610 "name": "Existed_Raid", 00:17:07.610 "aliases": [ 00:17:07.610 "2e9cb68a-123d-11ef-8c90-4585f0cfab08" 00:17:07.610 ], 00:17:07.610 "product_name": "Raid Volume", 00:17:07.610 "block_size": 4096, 00:17:07.610 "num_blocks": 7936, 00:17:07.610 "uuid": "2e9cb68a-123d-11ef-8c90-4585f0cfab08", 00:17:07.610 "assigned_rate_limits": { 00:17:07.610 "rw_ios_per_sec": 0, 00:17:07.610 "rw_mbytes_per_sec": 0, 00:17:07.610 "r_mbytes_per_sec": 0, 00:17:07.610 "w_mbytes_per_sec": 0 00:17:07.610 }, 00:17:07.610 "claimed": false, 00:17:07.610 "zoned": false, 00:17:07.610 "supported_io_types": { 00:17:07.610 "read": true, 00:17:07.610 "write": true, 00:17:07.610 "unmap": false, 00:17:07.610 "write_zeroes": true, 00:17:07.610 "flush": false, 00:17:07.610 "reset": true, 00:17:07.610 "compare": false, 00:17:07.610 "compare_and_write": false, 00:17:07.610 "abort": false, 00:17:07.610 "nvme_admin": false, 00:17:07.610 "nvme_io": false 00:17:07.610 }, 00:17:07.610 "memory_domains": [ 00:17:07.610 { 00:17:07.610 "dma_device_id": "system", 00:17:07.610 "dma_device_type": 1 00:17:07.610 }, 00:17:07.610 { 00:17:07.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.610 "dma_device_type": 2 00:17:07.610 }, 00:17:07.610 { 00:17:07.610 "dma_device_id": "system", 00:17:07.610 "dma_device_type": 1 00:17:07.610 }, 00:17:07.610 { 00:17:07.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.610 "dma_device_type": 2 00:17:07.610 } 00:17:07.610 ], 00:17:07.610 "driver_specific": { 00:17:07.610 "raid": { 00:17:07.610 "uuid": "2e9cb68a-123d-11ef-8c90-4585f0cfab08", 00:17:07.610 "strip_size_kb": 0, 00:17:07.610 "state": "online", 00:17:07.610 "raid_level": "raid1", 00:17:07.610 "superblock": true, 00:17:07.610 "num_base_bdevs": 2, 00:17:07.610 "num_base_bdevs_discovered": 2, 00:17:07.610 "num_base_bdevs_operational": 2, 00:17:07.610 "base_bdevs_list": [ 00:17:07.610 { 00:17:07.610 "name": "BaseBdev1", 
00:17:07.610 "uuid": "2d9776be-123d-11ef-8c90-4585f0cfab08", 00:17:07.610 "is_configured": true, 00:17:07.610 "data_offset": 256, 00:17:07.610 "data_size": 7936 00:17:07.610 }, 00:17:07.610 { 00:17:07.610 "name": "BaseBdev2", 00:17:07.610 "uuid": "2f2f344e-123d-11ef-8c90-4585f0cfab08", 00:17:07.610 "is_configured": true, 00:17:07.610 "data_offset": 256, 00:17:07.610 "data_size": 7936 00:17:07.610 } 00:17:07.610 ] 00:17:07.611 } 00:17:07.611 } 00:17:07.611 }' 00:17:07.611 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.611 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:17:07.611 BaseBdev2' 00:17:07.611 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:07.611 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:07.611 21:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:07.869 "name": "BaseBdev1", 00:17:07.869 "aliases": [ 00:17:07.869 "2d9776be-123d-11ef-8c90-4585f0cfab08" 00:17:07.869 ], 00:17:07.869 "product_name": "Malloc disk", 00:17:07.869 "block_size": 4096, 00:17:07.869 "num_blocks": 8192, 00:17:07.869 "uuid": "2d9776be-123d-11ef-8c90-4585f0cfab08", 00:17:07.869 "assigned_rate_limits": { 00:17:07.869 "rw_ios_per_sec": 0, 00:17:07.869 "rw_mbytes_per_sec": 0, 00:17:07.869 "r_mbytes_per_sec": 0, 00:17:07.869 "w_mbytes_per_sec": 0 00:17:07.869 }, 00:17:07.869 "claimed": true, 00:17:07.869 "claim_type": "exclusive_write", 00:17:07.869 "zoned": false, 00:17:07.869 "supported_io_types": { 00:17:07.869 "read": true, 00:17:07.869 "write": true, 00:17:07.869 "unmap": true, 00:17:07.869 "write_zeroes": true, 00:17:07.869 "flush": true, 00:17:07.869 "reset": true, 00:17:07.869 "compare": false, 00:17:07.869 "compare_and_write": false, 00:17:07.869 "abort": true, 00:17:07.869 "nvme_admin": false, 00:17:07.869 "nvme_io": false 00:17:07.869 }, 00:17:07.869 "memory_domains": [ 00:17:07.869 { 00:17:07.869 "dma_device_id": "system", 00:17:07.869 "dma_device_type": 1 00:17:07.869 }, 00:17:07.869 { 00:17:07.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.869 "dma_device_type": 2 00:17:07.869 } 00:17:07.869 ], 00:17:07.869 "driver_specific": {} 00:17:07.869 }' 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:07.869 21:59:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:07.869 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:08.129 "name": "BaseBdev2", 00:17:08.129 "aliases": [ 00:17:08.129 "2f2f344e-123d-11ef-8c90-4585f0cfab08" 00:17:08.129 ], 00:17:08.129 "product_name": "Malloc disk", 00:17:08.129 "block_size": 4096, 00:17:08.129 "num_blocks": 8192, 00:17:08.129 "uuid": "2f2f344e-123d-11ef-8c90-4585f0cfab08", 00:17:08.129 "assigned_rate_limits": { 00:17:08.129 "rw_ios_per_sec": 0, 00:17:08.129 "rw_mbytes_per_sec": 0, 00:17:08.129 "r_mbytes_per_sec": 0, 00:17:08.129 "w_mbytes_per_sec": 0 00:17:08.129 }, 00:17:08.129 "claimed": true, 00:17:08.129 "claim_type": "exclusive_write", 00:17:08.129 "zoned": false, 00:17:08.129 "supported_io_types": { 00:17:08.129 "read": true, 00:17:08.129 "write": true, 00:17:08.129 "unmap": true, 00:17:08.129 "write_zeroes": true, 00:17:08.129 "flush": true, 00:17:08.129 "reset": true, 00:17:08.129 "compare": false, 00:17:08.129 "compare_and_write": false, 00:17:08.129 "abort": true, 00:17:08.129 "nvme_admin": false, 00:17:08.129 "nvme_io": false 00:17:08.129 }, 00:17:08.129 "memory_domains": [ 00:17:08.129 { 00:17:08.129 "dma_device_id": "system", 00:17:08.129 "dma_device_type": 1 00:17:08.129 }, 00:17:08.129 { 00:17:08.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.129 "dma_device_type": 2 00:17:08.129 } 00:17:08.129 ], 00:17:08.129 "driver_specific": {} 00:17:08.129 }' 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:17:08.129 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:08.387 [2024-05-14 21:59:08.898409] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # local expected_state 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.387 21:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.645 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.645 "name": "Existed_Raid", 00:17:08.645 "uuid": "2e9cb68a-123d-11ef-8c90-4585f0cfab08", 00:17:08.645 "strip_size_kb": 0, 00:17:08.645 "state": "online", 00:17:08.645 "raid_level": "raid1", 00:17:08.645 "superblock": true, 00:17:08.645 "num_base_bdevs": 2, 00:17:08.645 "num_base_bdevs_discovered": 1, 00:17:08.645 "num_base_bdevs_operational": 1, 00:17:08.645 "base_bdevs_list": [ 00:17:08.645 { 00:17:08.645 "name": null, 00:17:08.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.645 "is_configured": false, 00:17:08.645 "data_offset": 256, 00:17:08.645 "data_size": 7936 00:17:08.645 }, 00:17:08.645 { 00:17:08.645 "name": "BaseBdev2", 00:17:08.645 "uuid": "2f2f344e-123d-11ef-8c90-4585f0cfab08", 00:17:08.645 "is_configured": true, 00:17:08.645 "data_offset": 256, 00:17:08.645 "data_size": 7936 00:17:08.645 } 00:17:08.645 ] 00:17:08.645 }' 00:17:08.645 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.645 21:59:09 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:17:09.210 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:09.210 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.210 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.210 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:09.210 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:09.210 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.210 21:59:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:09.468 [2024-05-14 21:59:09.996712] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.468 [2024-05-14 21:59:09.996769] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.468 [2024-05-14 21:59:10.003002] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.468 [2024-05-14 21:59:10.003103] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.468 [2024-05-14 21:59:10.003110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a3e3300 name Existed_Raid, state offline 00:17:09.468 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:09.468 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.468 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.468 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:09.725 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:09.725 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:09.725 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:17:09.725 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@342 -- # killprocess 63559 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 63559 ']' 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 63559 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps -c -o command 63559 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # tail -1 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:17:09.726 killing process with pid 63559 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63559' 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 63559 00:17:09.726 [2024-05-14 21:59:10.308833] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.726 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 63559 00:17:09.726 [2024-05-14 21:59:10.308869] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.983 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@344 -- # return 0 00:17:09.983 00:17:09.983 real 0m9.284s 00:17:09.983 user 0m16.160s 00:17:09.983 sys 0m1.628s 00:17:09.983 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:09.983 ************************************ 00:17:09.983 END TEST raid_state_function_test_sb_4k 00:17:09.983 ************************************ 00:17:09.983 21:59:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.983 21:59:10 bdev_raid -- bdev/bdev_raid.sh@845 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:09.983 21:59:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:09.983 21:59:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:09.983 21:59:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.983 ************************************ 00:17:09.983 START TEST raid_superblock_test_4k 00:17:09.983 ************************************ 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:09.983 21:59:10 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=63833 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 63833 /var/tmp/spdk-raid.sock 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 63833 ']' 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:09.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:09.983 21:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.983 [2024-05-14 21:59:10.553897] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:09.983 [2024-05-14 21:59:10.554137] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:10.548 EAL: TSC is not safe to use in SMP mode 00:17:10.548 EAL: TSC is not invariant 00:17:10.548 [2024-05-14 21:59:11.103239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.806 [2024-05-14 21:59:11.194221] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:10.806 [2024-05-14 21:59:11.196676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.806 [2024-05-14 21:59:11.197478] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.806 [2024-05-14 21:59:11.197494] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:17:11.372 malloc1 00:17:11.372 21:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:11.630 [2024-05-14 21:59:12.186442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:11.630 [2024-05-14 21:59:12.186540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.630 [2024-05-14 21:59:12.187192] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b748780 00:17:11.630 [2024-05-14 21:59:12.187231] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.630 [2024-05-14 21:59:12.188157] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.630 [2024-05-14 21:59:12.188188] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:11.630 pt1 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:11.630 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:17:11.888 malloc2 00:17:12.145 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.145 [2024-05-14 21:59:12.698453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.145 [2024-05-14 21:59:12.698519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.145 [2024-05-14 21:59:12.698549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b748c80 00:17:12.145 [2024-05-14 21:59:12.698558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.145 [2024-05-14 21:59:12.699304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.145 [2024-05-14 21:59:12.699336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.145 pt2 00:17:12.145 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.145 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.145 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:12.403 [2024-05-14 21:59:12.970483] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.403 [2024-05-14 21:59:12.971080] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.403 [2024-05-14 21:59:12.971146] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b74d300 00:17:12.403 [2024-05-14 21:59:12.971154] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:12.403 [2024-05-14 21:59:12.971194] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b7abe20 00:17:12.403 [2024-05-14 21:59:12.971271] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b74d300 00:17:12.403 [2024-05-14 21:59:12.971277] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b74d300 00:17:12.403 [2024-05-14 21:59:12.971305] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.403 21:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.969 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.969 "name": "raid_bdev1", 00:17:12.969 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:12.969 "strip_size_kb": 0, 00:17:12.969 "state": "online", 00:17:12.969 "raid_level": "raid1", 00:17:12.969 "superblock": true, 00:17:12.969 "num_base_bdevs": 2, 00:17:12.969 "num_base_bdevs_discovered": 2, 00:17:12.969 "num_base_bdevs_operational": 2, 00:17:12.969 "base_bdevs_list": [ 00:17:12.969 { 00:17:12.969 "name": "pt1", 00:17:12.969 "uuid": "75cb2190-6551-a752-a6eb-d78e38262615", 00:17:12.969 "is_configured": true, 00:17:12.969 "data_offset": 256, 00:17:12.969 "data_size": 7936 00:17:12.969 }, 00:17:12.969 { 00:17:12.969 "name": "pt2", 00:17:12.969 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:12.969 "is_configured": true, 00:17:12.969 "data_offset": 256, 00:17:12.969 "data_size": 7936 00:17:12.969 } 00:17:12.969 ] 00:17:12.969 }' 00:17:12.969 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.969 21:59:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:13.226 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:13.483 [2024-05-14 21:59:13.882532] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.483 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:13.483 "name": "raid_bdev1", 00:17:13.483 "aliases": [ 00:17:13.483 "32ffcd21-123d-11ef-8c90-4585f0cfab08" 00:17:13.483 ], 00:17:13.483 "product_name": "Raid Volume", 00:17:13.483 "block_size": 4096, 00:17:13.483 "num_blocks": 7936, 00:17:13.483 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:13.483 "assigned_rate_limits": { 00:17:13.483 "rw_ios_per_sec": 0, 00:17:13.483 "rw_mbytes_per_sec": 0, 00:17:13.483 "r_mbytes_per_sec": 0, 00:17:13.483 "w_mbytes_per_sec": 0 00:17:13.483 }, 00:17:13.483 "claimed": false, 00:17:13.483 "zoned": false, 00:17:13.483 "supported_io_types": { 00:17:13.483 "read": true, 00:17:13.483 "write": true, 00:17:13.483 
"unmap": false, 00:17:13.483 "write_zeroes": true, 00:17:13.483 "flush": false, 00:17:13.483 "reset": true, 00:17:13.483 "compare": false, 00:17:13.483 "compare_and_write": false, 00:17:13.483 "abort": false, 00:17:13.483 "nvme_admin": false, 00:17:13.483 "nvme_io": false 00:17:13.483 }, 00:17:13.483 "memory_domains": [ 00:17:13.483 { 00:17:13.483 "dma_device_id": "system", 00:17:13.483 "dma_device_type": 1 00:17:13.483 }, 00:17:13.483 { 00:17:13.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.483 "dma_device_type": 2 00:17:13.483 }, 00:17:13.483 { 00:17:13.483 "dma_device_id": "system", 00:17:13.483 "dma_device_type": 1 00:17:13.483 }, 00:17:13.483 { 00:17:13.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.483 "dma_device_type": 2 00:17:13.483 } 00:17:13.483 ], 00:17:13.483 "driver_specific": { 00:17:13.483 "raid": { 00:17:13.483 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:13.483 "strip_size_kb": 0, 00:17:13.483 "state": "online", 00:17:13.483 "raid_level": "raid1", 00:17:13.483 "superblock": true, 00:17:13.483 "num_base_bdevs": 2, 00:17:13.483 "num_base_bdevs_discovered": 2, 00:17:13.483 "num_base_bdevs_operational": 2, 00:17:13.483 "base_bdevs_list": [ 00:17:13.483 { 00:17:13.483 "name": "pt1", 00:17:13.483 "uuid": "75cb2190-6551-a752-a6eb-d78e38262615", 00:17:13.483 "is_configured": true, 00:17:13.483 "data_offset": 256, 00:17:13.483 "data_size": 7936 00:17:13.483 }, 00:17:13.483 { 00:17:13.483 "name": "pt2", 00:17:13.483 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:13.483 "is_configured": true, 00:17:13.483 "data_offset": 256, 00:17:13.483 "data_size": 7936 00:17:13.483 } 00:17:13.483 ] 00:17:13.483 } 00:17:13.483 } 00:17:13.483 }' 00:17:13.483 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:13.483 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:13.483 pt2' 00:17:13.483 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:13.483 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:13.483 21:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:13.742 "name": "pt1", 00:17:13.742 "aliases": [ 00:17:13.742 "75cb2190-6551-a752-a6eb-d78e38262615" 00:17:13.742 ], 00:17:13.742 "product_name": "passthru", 00:17:13.742 "block_size": 4096, 00:17:13.742 "num_blocks": 8192, 00:17:13.742 "uuid": "75cb2190-6551-a752-a6eb-d78e38262615", 00:17:13.742 "assigned_rate_limits": { 00:17:13.742 "rw_ios_per_sec": 0, 00:17:13.742 "rw_mbytes_per_sec": 0, 00:17:13.742 "r_mbytes_per_sec": 0, 00:17:13.742 "w_mbytes_per_sec": 0 00:17:13.742 }, 00:17:13.742 "claimed": true, 00:17:13.742 "claim_type": "exclusive_write", 00:17:13.742 "zoned": false, 00:17:13.742 "supported_io_types": { 00:17:13.742 "read": true, 00:17:13.742 "write": true, 00:17:13.742 "unmap": true, 00:17:13.742 "write_zeroes": true, 00:17:13.742 "flush": true, 00:17:13.742 "reset": true, 00:17:13.742 "compare": false, 00:17:13.742 "compare_and_write": false, 00:17:13.742 "abort": true, 00:17:13.742 "nvme_admin": false, 00:17:13.742 "nvme_io": false 00:17:13.742 }, 00:17:13.742 "memory_domains": [ 00:17:13.742 { 00:17:13.742 "dma_device_id": 
"system", 00:17:13.742 "dma_device_type": 1 00:17:13.742 }, 00:17:13.742 { 00:17:13.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.742 "dma_device_type": 2 00:17:13.742 } 00:17:13.742 ], 00:17:13.742 "driver_specific": { 00:17:13.742 "passthru": { 00:17:13.742 "name": "pt1", 00:17:13.742 "base_bdev_name": "malloc1" 00:17:13.742 } 00:17:13.742 } 00:17:13.742 }' 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:13.742 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:14.000 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:14.000 "name": "pt2", 00:17:14.000 "aliases": [ 00:17:14.000 "a8e16d37-fcde-1b54-b0f5-9897fd637489" 00:17:14.000 ], 00:17:14.000 "product_name": "passthru", 00:17:14.000 "block_size": 4096, 00:17:14.000 "num_blocks": 8192, 00:17:14.000 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:14.000 "assigned_rate_limits": { 00:17:14.000 "rw_ios_per_sec": 0, 00:17:14.000 "rw_mbytes_per_sec": 0, 00:17:14.000 "r_mbytes_per_sec": 0, 00:17:14.000 "w_mbytes_per_sec": 0 00:17:14.000 }, 00:17:14.000 "claimed": true, 00:17:14.000 "claim_type": "exclusive_write", 00:17:14.000 "zoned": false, 00:17:14.000 "supported_io_types": { 00:17:14.000 "read": true, 00:17:14.000 "write": true, 00:17:14.000 "unmap": true, 00:17:14.000 "write_zeroes": true, 00:17:14.000 "flush": true, 00:17:14.000 "reset": true, 00:17:14.000 "compare": false, 00:17:14.000 "compare_and_write": false, 00:17:14.000 "abort": true, 00:17:14.000 "nvme_admin": false, 00:17:14.000 "nvme_io": false 00:17:14.000 }, 00:17:14.000 "memory_domains": [ 00:17:14.000 { 00:17:14.000 "dma_device_id": "system", 00:17:14.000 "dma_device_type": 1 00:17:14.000 }, 00:17:14.000 { 00:17:14.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.000 "dma_device_type": 2 00:17:14.000 } 00:17:14.000 ], 00:17:14.000 "driver_specific": { 00:17:14.000 "passthru": { 00:17:14.000 "name": "pt2", 00:17:14.000 "base_bdev_name": "malloc2" 00:17:14.000 } 00:17:14.000 } 00:17:14.000 }' 00:17:14.000 21:59:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:14.257 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:14.515 [2024-05-14 21:59:14.914571] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.515 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32ffcd21-123d-11ef-8c90-4585f0cfab08 00:17:14.515 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 32ffcd21-123d-11ef-8c90-4585f0cfab08 ']' 00:17:14.515 21:59:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:14.773 [2024-05-14 21:59:15.210518] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.773 [2024-05-14 21:59:15.210549] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.773 [2024-05-14 21:59:15.210574] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.773 [2024-05-14 21:59:15.210589] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.773 [2024-05-14 21:59:15.210594] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b74d300 name raid_bdev1, state offline 00:17:14.773 21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.773 21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:15.030 21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:15.031 21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:15.031 21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.031 21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:15.288 
21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.288 21:59:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:15.547 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:15.547 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:16.115 [2024-05-14 21:59:16.670562] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:16.115 [2024-05-14 21:59:16.671190] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:16.115 [2024-05-14 21:59:16.671219] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:16.115 [2024-05-14 21:59:16.671263] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:16.115 [2024-05-14 21:59:16.671311] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.115 [2024-05-14 21:59:16.671317] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b74d300 name raid_bdev1, state configuring 00:17:16.115 request: 00:17:16.115 { 00:17:16.115 "name": 
"raid_bdev1", 00:17:16.115 "raid_level": "raid1", 00:17:16.115 "base_bdevs": [ 00:17:16.115 "malloc1", 00:17:16.115 "malloc2" 00:17:16.115 ], 00:17:16.115 "superblock": false, 00:17:16.115 "method": "bdev_raid_create", 00:17:16.115 "req_id": 1 00:17:16.115 } 00:17:16.115 Got JSON-RPC error response 00:17:16.115 response: 00:17:16.115 { 00:17:16.115 "code": -17, 00:17:16.115 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:16.115 } 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.115 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:16.372 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:16.372 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:16.372 21:59:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:16.630 [2024-05-14 21:59:17.194587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:16.630 [2024-05-14 21:59:17.194685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.630 [2024-05-14 21:59:17.194717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b748c80 00:17:16.630 [2024-05-14 21:59:17.194727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.630 [2024-05-14 21:59:17.195423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.630 [2024-05-14 21:59:17.195455] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:16.630 [2024-05-14 21:59:17.195483] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:16.630 [2024-05-14 21:59:17.195496] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.630 pt1 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.630 21:59:17 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.630 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.195 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.195 "name": "raid_bdev1", 00:17:17.195 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:17.195 "strip_size_kb": 0, 00:17:17.195 "state": "configuring", 00:17:17.195 "raid_level": "raid1", 00:17:17.195 "superblock": true, 00:17:17.195 "num_base_bdevs": 2, 00:17:17.195 "num_base_bdevs_discovered": 1, 00:17:17.195 "num_base_bdevs_operational": 2, 00:17:17.195 "base_bdevs_list": [ 00:17:17.195 { 00:17:17.195 "name": "pt1", 00:17:17.195 "uuid": "75cb2190-6551-a752-a6eb-d78e38262615", 00:17:17.195 "is_configured": true, 00:17:17.195 "data_offset": 256, 00:17:17.195 "data_size": 7936 00:17:17.195 }, 00:17:17.195 { 00:17:17.195 "name": null, 00:17:17.195 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:17.195 "is_configured": false, 00:17:17.195 "data_offset": 256, 00:17:17.195 "data_size": 7936 00:17:17.195 } 00:17:17.195 ] 00:17:17.195 }' 00:17:17.196 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.196 21:59:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.453 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:17.453 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:17.453 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:17.453 21:59:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.710 [2024-05-14 21:59:18.070622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.710 [2024-05-14 21:59:18.070714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.710 [2024-05-14 21:59:18.070747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b748f00 00:17:17.710 [2024-05-14 21:59:18.070764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.710 [2024-05-14 21:59:18.070902] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.710 [2024-05-14 21:59:18.070916] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.710 [2024-05-14 21:59:18.070943] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:17.710 [2024-05-14 21:59:18.070952] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.710 [2024-05-14 21:59:18.070984] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b74d300 00:17:17.710 [2024-05-14 21:59:18.070989] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:17.710 [2024-05-14 21:59:18.071010] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82b7abe20 00:17:17.710 [2024-05-14 21:59:18.071090] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b74d300 00:17:17.710 [2024-05-14 21:59:18.071095] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b74d300 00:17:17.710 [2024-05-14 21:59:18.071119] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.710 pt2 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.710 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.968 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.968 "name": "raid_bdev1", 00:17:17.968 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:17.968 "strip_size_kb": 0, 00:17:17.968 "state": "online", 00:17:17.968 "raid_level": "raid1", 00:17:17.968 "superblock": true, 00:17:17.968 "num_base_bdevs": 2, 00:17:17.968 "num_base_bdevs_discovered": 2, 00:17:17.968 "num_base_bdevs_operational": 2, 00:17:17.968 "base_bdevs_list": [ 00:17:17.968 { 00:17:17.968 "name": "pt1", 00:17:17.968 "uuid": "75cb2190-6551-a752-a6eb-d78e38262615", 00:17:17.968 "is_configured": true, 00:17:17.968 "data_offset": 256, 00:17:17.968 "data_size": 7936 00:17:17.968 }, 00:17:17.968 { 00:17:17.968 "name": "pt2", 00:17:17.968 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:17.968 "is_configured": true, 00:17:17.968 "data_offset": 256, 00:17:17.968 "data_size": 7936 00:17:17.968 } 00:17:17.968 ] 00:17:17.968 }' 00:17:17.968 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.968 21:59:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.225 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:18.225 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:18.225 21:59:18 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:18.225 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:18.225 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:18.225 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:17:18.225 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:18.225 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:18.483 [2024-05-14 21:59:18.954681] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.483 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:18.483 "name": "raid_bdev1", 00:17:18.483 "aliases": [ 00:17:18.483 "32ffcd21-123d-11ef-8c90-4585f0cfab08" 00:17:18.483 ], 00:17:18.483 "product_name": "Raid Volume", 00:17:18.483 "block_size": 4096, 00:17:18.483 "num_blocks": 7936, 00:17:18.483 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:18.483 "assigned_rate_limits": { 00:17:18.483 "rw_ios_per_sec": 0, 00:17:18.483 "rw_mbytes_per_sec": 0, 00:17:18.483 "r_mbytes_per_sec": 0, 00:17:18.483 "w_mbytes_per_sec": 0 00:17:18.483 }, 00:17:18.483 "claimed": false, 00:17:18.483 "zoned": false, 00:17:18.483 "supported_io_types": { 00:17:18.483 "read": true, 00:17:18.483 "write": true, 00:17:18.483 "unmap": false, 00:17:18.483 "write_zeroes": true, 00:17:18.483 "flush": false, 00:17:18.483 "reset": true, 00:17:18.483 "compare": false, 00:17:18.483 "compare_and_write": false, 00:17:18.483 "abort": false, 00:17:18.483 "nvme_admin": false, 00:17:18.483 "nvme_io": false 00:17:18.483 }, 00:17:18.483 "memory_domains": [ 00:17:18.483 { 00:17:18.483 "dma_device_id": "system", 00:17:18.483 "dma_device_type": 1 00:17:18.483 }, 00:17:18.483 { 00:17:18.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.483 "dma_device_type": 2 00:17:18.483 }, 00:17:18.483 { 00:17:18.483 "dma_device_id": "system", 00:17:18.483 "dma_device_type": 1 00:17:18.483 }, 00:17:18.483 { 00:17:18.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.483 "dma_device_type": 2 00:17:18.483 } 00:17:18.483 ], 00:17:18.483 "driver_specific": { 00:17:18.483 "raid": { 00:17:18.483 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:18.483 "strip_size_kb": 0, 00:17:18.483 "state": "online", 00:17:18.484 "raid_level": "raid1", 00:17:18.484 "superblock": true, 00:17:18.484 "num_base_bdevs": 2, 00:17:18.484 "num_base_bdevs_discovered": 2, 00:17:18.484 "num_base_bdevs_operational": 2, 00:17:18.484 "base_bdevs_list": [ 00:17:18.484 { 00:17:18.484 "name": "pt1", 00:17:18.484 "uuid": "75cb2190-6551-a752-a6eb-d78e38262615", 00:17:18.484 "is_configured": true, 00:17:18.484 "data_offset": 256, 00:17:18.484 "data_size": 7936 00:17:18.484 }, 00:17:18.484 { 00:17:18.484 "name": "pt2", 00:17:18.484 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:18.484 "is_configured": true, 00:17:18.484 "data_offset": 256, 00:17:18.484 "data_size": 7936 00:17:18.484 } 00:17:18.484 ] 00:17:18.484 } 00:17:18.484 } 00:17:18.484 }' 00:17:18.484 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:18.484 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 
00:17:18.484 pt2' 00:17:18.484 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:18.484 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:18.484 21:59:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:18.742 "name": "pt1", 00:17:18.742 "aliases": [ 00:17:18.742 "75cb2190-6551-a752-a6eb-d78e38262615" 00:17:18.742 ], 00:17:18.742 "product_name": "passthru", 00:17:18.742 "block_size": 4096, 00:17:18.742 "num_blocks": 8192, 00:17:18.742 "uuid": "75cb2190-6551-a752-a6eb-d78e38262615", 00:17:18.742 "assigned_rate_limits": { 00:17:18.742 "rw_ios_per_sec": 0, 00:17:18.742 "rw_mbytes_per_sec": 0, 00:17:18.742 "r_mbytes_per_sec": 0, 00:17:18.742 "w_mbytes_per_sec": 0 00:17:18.742 }, 00:17:18.742 "claimed": true, 00:17:18.742 "claim_type": "exclusive_write", 00:17:18.742 "zoned": false, 00:17:18.742 "supported_io_types": { 00:17:18.742 "read": true, 00:17:18.742 "write": true, 00:17:18.742 "unmap": true, 00:17:18.742 "write_zeroes": true, 00:17:18.742 "flush": true, 00:17:18.742 "reset": true, 00:17:18.742 "compare": false, 00:17:18.742 "compare_and_write": false, 00:17:18.742 "abort": true, 00:17:18.742 "nvme_admin": false, 00:17:18.742 "nvme_io": false 00:17:18.742 }, 00:17:18.742 "memory_domains": [ 00:17:18.742 { 00:17:18.742 "dma_device_id": "system", 00:17:18.742 "dma_device_type": 1 00:17:18.742 }, 00:17:18.742 { 00:17:18.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.742 "dma_device_type": 2 00:17:18.742 } 00:17:18.742 ], 00:17:18.742 "driver_specific": { 00:17:18.742 "passthru": { 00:17:18.742 "name": "pt1", 00:17:18.742 "base_bdev_name": "malloc1" 00:17:18.742 } 00:17:18.742 } 00:17:18.742 }' 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:18.742 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:19.000 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.000 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:19.000 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:19.000 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:19.000 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:19.000 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:19.000 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # 
jq '.[]' 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:19.259 "name": "pt2", 00:17:19.259 "aliases": [ 00:17:19.259 "a8e16d37-fcde-1b54-b0f5-9897fd637489" 00:17:19.259 ], 00:17:19.259 "product_name": "passthru", 00:17:19.259 "block_size": 4096, 00:17:19.259 "num_blocks": 8192, 00:17:19.259 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:19.259 "assigned_rate_limits": { 00:17:19.259 "rw_ios_per_sec": 0, 00:17:19.259 "rw_mbytes_per_sec": 0, 00:17:19.259 "r_mbytes_per_sec": 0, 00:17:19.259 "w_mbytes_per_sec": 0 00:17:19.259 }, 00:17:19.259 "claimed": true, 00:17:19.259 "claim_type": "exclusive_write", 00:17:19.259 "zoned": false, 00:17:19.259 "supported_io_types": { 00:17:19.259 "read": true, 00:17:19.259 "write": true, 00:17:19.259 "unmap": true, 00:17:19.259 "write_zeroes": true, 00:17:19.259 "flush": true, 00:17:19.259 "reset": true, 00:17:19.259 "compare": false, 00:17:19.259 "compare_and_write": false, 00:17:19.259 "abort": true, 00:17:19.259 "nvme_admin": false, 00:17:19.259 "nvme_io": false 00:17:19.259 }, 00:17:19.259 "memory_domains": [ 00:17:19.259 { 00:17:19.259 "dma_device_id": "system", 00:17:19.259 "dma_device_type": 1 00:17:19.259 }, 00:17:19.259 { 00:17:19.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.259 "dma_device_type": 2 00:17:19.259 } 00:17:19.259 ], 00:17:19.259 "driver_specific": { 00:17:19.259 "passthru": { 00:17:19.259 "name": "pt2", 00:17:19.259 "base_bdev_name": "malloc2" 00:17:19.259 } 00:17:19.259 } 00:17:19.259 }' 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:19.259 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:19.515 [2024-05-14 21:59:19.970767] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.515 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 32ffcd21-123d-11ef-8c90-4585f0cfab08 '!=' 32ffcd21-123d-11ef-8c90-4585f0cfab08 ']' 00:17:19.515 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:19.515 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # 
case $1 in 00:17:19.515 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:17:19.515 21:59:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:19.773 [2024-05-14 21:59:20.234750] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.773 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.029 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.029 "name": "raid_bdev1", 00:17:20.029 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:20.029 "strip_size_kb": 0, 00:17:20.029 "state": "online", 00:17:20.029 "raid_level": "raid1", 00:17:20.029 "superblock": true, 00:17:20.029 "num_base_bdevs": 2, 00:17:20.029 "num_base_bdevs_discovered": 1, 00:17:20.029 "num_base_bdevs_operational": 1, 00:17:20.029 "base_bdevs_list": [ 00:17:20.029 { 00:17:20.029 "name": null, 00:17:20.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.029 "is_configured": false, 00:17:20.029 "data_offset": 256, 00:17:20.029 "data_size": 7936 00:17:20.029 }, 00:17:20.029 { 00:17:20.029 "name": "pt2", 00:17:20.029 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:20.029 "is_configured": true, 00:17:20.029 "data_offset": 256, 00:17:20.029 "data_size": 7936 00:17:20.029 } 00:17:20.029 ] 00:17:20.029 }' 00:17:20.029 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.029 21:59:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.287 21:59:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:20.545 [2024-05-14 21:59:21.086749] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.545 [2024-05-14 21:59:21.086779] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.545 [2024-05-14 21:59:21.086820] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:20.545 [2024-05-14 21:59:21.086834] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.545 [2024-05-14 21:59:21.086838] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b74d300 name raid_bdev1, state offline 00:17:20.545 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.545 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:20.803 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:20.803 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:20.803 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:20.803 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.803 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:21.061 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:21.061 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:21.061 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:21.061 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:21.061 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:21.061 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.319 [2024-05-14 21:59:21.830762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.319 [2024-05-14 21:59:21.830852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.319 [2024-05-14 21:59:21.830883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b748f00 00:17:21.319 [2024-05-14 21:59:21.830893] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.319 [2024-05-14 21:59:21.831590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.319 [2024-05-14 21:59:21.831621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.320 [2024-05-14 21:59:21.831650] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:21.320 [2024-05-14 21:59:21.831664] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.320 [2024-05-14 21:59:21.831691] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b74d300 00:17:21.320 [2024-05-14 21:59:21.831696] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:21.320 [2024-05-14 21:59:21.831716] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b7abe20 00:17:21.320 [2024-05-14 21:59:21.831766] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b74d300 00:17:21.320 [2024-05-14 21:59:21.831771] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x82b74d300 00:17:21.320 [2024-05-14 21:59:21.831795] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.320 pt2 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.320 21:59:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.578 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.578 "name": "raid_bdev1", 00:17:21.578 "uuid": "32ffcd21-123d-11ef-8c90-4585f0cfab08", 00:17:21.578 "strip_size_kb": 0, 00:17:21.578 "state": "online", 00:17:21.578 "raid_level": "raid1", 00:17:21.578 "superblock": true, 00:17:21.578 "num_base_bdevs": 2, 00:17:21.578 "num_base_bdevs_discovered": 1, 00:17:21.578 "num_base_bdevs_operational": 1, 00:17:21.578 "base_bdevs_list": [ 00:17:21.578 { 00:17:21.578 "name": null, 00:17:21.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.578 "is_configured": false, 00:17:21.578 "data_offset": 256, 00:17:21.578 "data_size": 7936 00:17:21.578 }, 00:17:21.578 { 00:17:21.578 "name": "pt2", 00:17:21.578 "uuid": "a8e16d37-fcde-1b54-b0f5-9897fd637489", 00:17:21.578 "is_configured": true, 00:17:21.578 "data_offset": 256, 00:17:21.578 "data_size": 7936 00:17:21.578 } 00:17:21.578 ] 00:17:21.578 }' 00:17:21.578 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.578 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.835 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:17:21.835 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:21.835 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:17:22.401 [2024-05-14 21:59:22.722835] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # '[' 32ffcd21-123d-11ef-8c90-4585f0cfab08 '!=' 32ffcd21-123d-11ef-8c90-4585f0cfab08 ']' 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@568 -- # killprocess 63833 
00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 63833 ']' 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 63833 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps -c -o command 63833 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # tail -1 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:17:22.401 killing process with pid 63833 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63833' 00:17:22.401 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 63833 00:17:22.402 [2024-05-14 21:59:22.756826] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.402 [2024-05-14 21:59:22.756849] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.402 [2024-05-14 21:59:22.756861] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.402 [2024-05-14 21:59:22.756866] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b74d300 name raid_bdev1, state offline 00:17:22.402 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 63833 00:17:22.402 [2024-05-14 21:59:22.769136] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.402 21:59:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # return 0 00:17:22.402 00:17:22.402 real 0m12.407s 00:17:22.402 user 0m22.153s 00:17:22.402 sys 0m1.886s 00:17:22.402 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.402 21:59:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.402 ************************************ 00:17:22.402 END TEST raid_superblock_test_4k 00:17:22.402 ************************************ 00:17:22.660 21:59:22 bdev_raid -- bdev/bdev_raid.sh@846 -- # '[' '' = true ']' 00:17:22.660 21:59:22 bdev_raid -- bdev/bdev_raid.sh@850 -- # base_malloc_params='-m 32' 00:17:22.660 21:59:22 bdev_raid -- bdev/bdev_raid.sh@851 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:22.660 21:59:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:22.660 21:59:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:22.660 21:59:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.660 ************************************ 00:17:22.660 START TEST raid_state_function_test_sb_md_separate 00:17:22.660 ************************************ 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- 
# local num_base_bdevs=2 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # raid_pid=64182 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 64182' 00:17:22.660 Process raid pid: 64182 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@247 -- # waitforlisten 64182 /var/tmp/spdk-raid.sock 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 64182 ']' 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.660 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.660 21:59:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.660 [2024-05-14 21:59:23.014714] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:22.660 [2024-05-14 21:59:23.014890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:23.228 EAL: TSC is not safe to use in SMP mode 00:17:23.228 EAL: TSC is not invariant 00:17:23.228 [2024-05-14 21:59:23.606080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.228 [2024-05-14 21:59:23.701579] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:23.228 [2024-05-14 21:59:23.703862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.229 [2024-05-14 21:59:23.704656] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.229 [2024-05-14 21:59:23.704673] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:23.794 [2024-05-14 21:59:24.349129] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.794 [2024-05-14 21:59:24.349227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.794 [2024-05-14 21:59:24.349249] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.794 [2024-05-14 21:59:24.349259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.794 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.051 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.051 "name": "Existed_Raid", 00:17:24.051 "uuid": "39c80afd-123d-11ef-8c90-4585f0cfab08", 00:17:24.051 "strip_size_kb": 0, 00:17:24.051 "state": "configuring", 00:17:24.051 "raid_level": "raid1", 00:17:24.051 "superblock": true, 00:17:24.051 "num_base_bdevs": 2, 00:17:24.051 "num_base_bdevs_discovered": 0, 00:17:24.051 "num_base_bdevs_operational": 2, 00:17:24.051 "base_bdevs_list": [ 00:17:24.051 { 00:17:24.051 "name": "BaseBdev1", 00:17:24.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.051 "is_configured": false, 00:17:24.051 "data_offset": 0, 00:17:24.051 "data_size": 0 00:17:24.051 }, 00:17:24.051 { 00:17:24.051 "name": "BaseBdev2", 00:17:24.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.051 "is_configured": false, 00:17:24.051 "data_offset": 0, 00:17:24.052 "data_size": 0 00:17:24.052 } 00:17:24.052 ] 00:17:24.052 }' 00:17:24.052 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.052 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.616 21:59:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.616 [2024-05-14 21:59:25.149089] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.616 [2024-05-14 21:59:25.149152] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ce76300 name Existed_Raid, state configuring 00:17:24.616 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:24.872 [2024-05-14 21:59:25.425119] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.872 [2024-05-14 21:59:25.425208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.872 [2024-05-14 21:59:25.425229] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.872 [2024-05-14 21:59:25.425249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.872 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:25.130 [2024-05-14 21:59:25.702241] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.130 BaseBdev1 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # waitforbdev 
BaseBdev1 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.386 21:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.643 [ 00:17:25.643 { 00:17:25.643 "name": "BaseBdev1", 00:17:25.643 "aliases": [ 00:17:25.643 "3a965ac1-123d-11ef-8c90-4585f0cfab08" 00:17:25.643 ], 00:17:25.643 "product_name": "Malloc disk", 00:17:25.643 "block_size": 4096, 00:17:25.643 "num_blocks": 8192, 00:17:25.643 "uuid": "3a965ac1-123d-11ef-8c90-4585f0cfab08", 00:17:25.643 "md_size": 32, 00:17:25.643 "md_interleave": false, 00:17:25.643 "dif_type": 0, 00:17:25.643 "assigned_rate_limits": { 00:17:25.643 "rw_ios_per_sec": 0, 00:17:25.643 "rw_mbytes_per_sec": 0, 00:17:25.643 "r_mbytes_per_sec": 0, 00:17:25.643 "w_mbytes_per_sec": 0 00:17:25.643 }, 00:17:25.643 "claimed": true, 00:17:25.643 "claim_type": "exclusive_write", 00:17:25.643 "zoned": false, 00:17:25.643 "supported_io_types": { 00:17:25.643 "read": true, 00:17:25.643 "write": true, 00:17:25.643 "unmap": true, 00:17:25.643 "write_zeroes": true, 00:17:25.643 "flush": true, 00:17:25.643 "reset": true, 00:17:25.643 "compare": false, 00:17:25.643 "compare_and_write": false, 00:17:25.643 "abort": true, 00:17:25.643 "nvme_admin": false, 00:17:25.643 "nvme_io": false 00:17:25.643 }, 00:17:25.643 "memory_domains": [ 00:17:25.643 { 00:17:25.643 "dma_device_id": "system", 00:17:25.643 "dma_device_type": 1 00:17:25.643 }, 00:17:25.643 { 00:17:25.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.643 "dma_device_type": 2 00:17:25.643 } 00:17:25.643 ], 00:17:25.643 "driver_specific": {} 00:17:25.643 } 00:17:25.643 ] 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 
-- # local raid_bdev_info 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.643 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.900 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.900 "name": "Existed_Raid", 00:17:25.900 "uuid": "3a6c3a73-123d-11ef-8c90-4585f0cfab08", 00:17:25.900 "strip_size_kb": 0, 00:17:25.900 "state": "configuring", 00:17:25.900 "raid_level": "raid1", 00:17:25.900 "superblock": true, 00:17:25.900 "num_base_bdevs": 2, 00:17:25.900 "num_base_bdevs_discovered": 1, 00:17:25.900 "num_base_bdevs_operational": 2, 00:17:25.900 "base_bdevs_list": [ 00:17:25.900 { 00:17:25.900 "name": "BaseBdev1", 00:17:25.900 "uuid": "3a965ac1-123d-11ef-8c90-4585f0cfab08", 00:17:25.900 "is_configured": true, 00:17:25.900 "data_offset": 256, 00:17:25.900 "data_size": 7936 00:17:25.900 }, 00:17:25.900 { 00:17:25.900 "name": "BaseBdev2", 00:17:25.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.900 "is_configured": false, 00:17:25.900 "data_offset": 0, 00:17:25.900 "data_size": 0 00:17:25.900 } 00:17:25.900 ] 00:17:25.900 }' 00:17:25.900 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.900 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.158 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.416 [2024-05-14 21:59:26.961203] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.416 [2024-05-14 21:59:26.961255] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ce76300 name Existed_Raid, state configuring 00:17:26.416 21:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:26.674 [2024-05-14 21:59:27.233239] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.674 [2024-05-14 21:59:27.234126] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.674 [2024-05-14 21:59:27.234175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:26.674 21:59:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.674 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.239 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.239 "name": "Existed_Raid", 00:17:27.239 "uuid": "3b801fff-123d-11ef-8c90-4585f0cfab08", 00:17:27.239 "strip_size_kb": 0, 00:17:27.239 "state": "configuring", 00:17:27.239 "raid_level": "raid1", 00:17:27.239 "superblock": true, 00:17:27.239 "num_base_bdevs": 2, 00:17:27.239 "num_base_bdevs_discovered": 1, 00:17:27.239 "num_base_bdevs_operational": 2, 00:17:27.239 "base_bdevs_list": [ 00:17:27.239 { 00:17:27.239 "name": "BaseBdev1", 00:17:27.239 "uuid": "3a965ac1-123d-11ef-8c90-4585f0cfab08", 00:17:27.239 "is_configured": true, 00:17:27.239 "data_offset": 256, 00:17:27.239 "data_size": 7936 00:17:27.239 }, 00:17:27.239 { 00:17:27.239 "name": "BaseBdev2", 00:17:27.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.239 "is_configured": false, 00:17:27.239 "data_offset": 0, 00:17:27.239 "data_size": 0 00:17:27.239 } 00:17:27.239 ] 00:17:27.239 }' 00:17:27.239 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.239 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.498 21:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:27.498 [2024-05-14 21:59:28.085418] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.498 [2024-05-14 21:59:28.085496] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ce76300 00:17:27.498 [2024-05-14 21:59:28.085503] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.498 [2024-05-14 21:59:28.085541] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ced4e20 00:17:27.498 [2024-05-14 21:59:28.085572] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ce76300 00:17:27.498 
[2024-05-14 21:59:28.085576] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ce76300 00:17:27.498 [2024-05-14 21:59:28.085592] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.758 BaseBdev2 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.758 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.017 [ 00:17:28.017 { 00:17:28.017 "name": "BaseBdev2", 00:17:28.017 "aliases": [ 00:17:28.017 "3c0225ab-123d-11ef-8c90-4585f0cfab08" 00:17:28.017 ], 00:17:28.017 "product_name": "Malloc disk", 00:17:28.017 "block_size": 4096, 00:17:28.017 "num_blocks": 8192, 00:17:28.017 "uuid": "3c0225ab-123d-11ef-8c90-4585f0cfab08", 00:17:28.017 "md_size": 32, 00:17:28.017 "md_interleave": false, 00:17:28.017 "dif_type": 0, 00:17:28.017 "assigned_rate_limits": { 00:17:28.017 "rw_ios_per_sec": 0, 00:17:28.017 "rw_mbytes_per_sec": 0, 00:17:28.017 "r_mbytes_per_sec": 0, 00:17:28.017 "w_mbytes_per_sec": 0 00:17:28.017 }, 00:17:28.017 "claimed": true, 00:17:28.017 "claim_type": "exclusive_write", 00:17:28.017 "zoned": false, 00:17:28.017 "supported_io_types": { 00:17:28.017 "read": true, 00:17:28.017 "write": true, 00:17:28.017 "unmap": true, 00:17:28.017 "write_zeroes": true, 00:17:28.017 "flush": true, 00:17:28.017 "reset": true, 00:17:28.017 "compare": false, 00:17:28.017 "compare_and_write": false, 00:17:28.018 "abort": true, 00:17:28.018 "nvme_admin": false, 00:17:28.018 "nvme_io": false 00:17:28.018 }, 00:17:28.018 "memory_domains": [ 00:17:28.018 { 00:17:28.018 "dma_device_id": "system", 00:17:28.018 "dma_device_type": 1 00:17:28.018 }, 00:17:28.018 { 00:17:28.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.018 "dma_device_type": 2 00:17:28.018 } 00:17:28.018 ], 00:17:28.018 "driver_specific": {} 00:17:28.018 } 00:17:28.018 ] 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.018 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.277 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.277 "name": "Existed_Raid", 00:17:28.277 "uuid": "3b801fff-123d-11ef-8c90-4585f0cfab08", 00:17:28.277 "strip_size_kb": 0, 00:17:28.277 "state": "online", 00:17:28.277 "raid_level": "raid1", 00:17:28.277 "superblock": true, 00:17:28.277 "num_base_bdevs": 2, 00:17:28.277 "num_base_bdevs_discovered": 2, 00:17:28.277 "num_base_bdevs_operational": 2, 00:17:28.277 "base_bdevs_list": [ 00:17:28.277 { 00:17:28.277 "name": "BaseBdev1", 00:17:28.277 "uuid": "3a965ac1-123d-11ef-8c90-4585f0cfab08", 00:17:28.277 "is_configured": true, 00:17:28.277 "data_offset": 256, 00:17:28.277 "data_size": 7936 00:17:28.277 }, 00:17:28.277 { 00:17:28.277 "name": "BaseBdev2", 00:17:28.277 "uuid": "3c0225ab-123d-11ef-8c90-4585f0cfab08", 00:17:28.277 "is_configured": true, 00:17:28.277 "data_offset": 256, 00:17:28.277 "data_size": 7936 00:17:28.277 } 00:17:28.277 ] 00:17:28.277 }' 00:17:28.277 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.277 21:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b Existed_Raid 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:28.844 [2024-05-14 21:59:29.413420] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.844 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:28.844 "name": "Existed_Raid", 00:17:28.844 "aliases": [ 00:17:28.844 "3b801fff-123d-11ef-8c90-4585f0cfab08" 00:17:28.844 ], 00:17:28.844 "product_name": "Raid Volume", 00:17:28.844 "block_size": 4096, 00:17:28.844 "num_blocks": 7936, 00:17:28.844 "uuid": "3b801fff-123d-11ef-8c90-4585f0cfab08", 00:17:28.844 "md_size": 32, 00:17:28.844 "md_interleave": false, 00:17:28.844 "dif_type": 0, 00:17:28.844 "assigned_rate_limits": { 00:17:28.844 "rw_ios_per_sec": 0, 00:17:28.844 "rw_mbytes_per_sec": 0, 00:17:28.844 "r_mbytes_per_sec": 0, 00:17:28.844 "w_mbytes_per_sec": 0 00:17:28.844 }, 00:17:28.844 "claimed": false, 00:17:28.844 "zoned": false, 00:17:28.844 "supported_io_types": { 00:17:28.844 "read": true, 00:17:28.844 "write": true, 00:17:28.844 "unmap": false, 00:17:28.844 "write_zeroes": true, 00:17:28.844 "flush": false, 00:17:28.844 "reset": true, 00:17:28.844 "compare": false, 00:17:28.844 "compare_and_write": false, 00:17:28.844 "abort": false, 00:17:28.844 "nvme_admin": false, 00:17:28.844 "nvme_io": false 00:17:28.844 }, 00:17:28.844 "memory_domains": [ 00:17:28.844 { 00:17:28.844 "dma_device_id": "system", 00:17:28.844 "dma_device_type": 1 00:17:28.844 }, 00:17:28.844 { 00:17:28.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.844 "dma_device_type": 2 00:17:28.844 }, 00:17:28.844 { 00:17:28.844 "dma_device_id": "system", 00:17:28.845 "dma_device_type": 1 00:17:28.845 }, 00:17:28.845 { 00:17:28.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.845 "dma_device_type": 2 00:17:28.845 } 00:17:28.845 ], 00:17:28.845 "driver_specific": { 00:17:28.845 "raid": { 00:17:28.845 "uuid": "3b801fff-123d-11ef-8c90-4585f0cfab08", 00:17:28.845 "strip_size_kb": 0, 00:17:28.845 "state": "online", 00:17:28.845 "raid_level": "raid1", 00:17:28.845 "superblock": true, 00:17:28.845 "num_base_bdevs": 2, 00:17:28.845 "num_base_bdevs_discovered": 2, 00:17:28.845 "num_base_bdevs_operational": 2, 00:17:28.845 "base_bdevs_list": [ 00:17:28.845 { 00:17:28.845 "name": "BaseBdev1", 00:17:28.845 "uuid": "3a965ac1-123d-11ef-8c90-4585f0cfab08", 00:17:28.845 "is_configured": true, 00:17:28.845 "data_offset": 256, 00:17:28.845 "data_size": 7936 00:17:28.845 }, 00:17:28.845 { 00:17:28.845 "name": "BaseBdev2", 00:17:28.845 "uuid": "3c0225ab-123d-11ef-8c90-4585f0cfab08", 00:17:28.845 "is_configured": true, 00:17:28.845 "data_offset": 256, 00:17:28.845 "data_size": 7936 00:17:28.845 } 00:17:28.845 ] 00:17:28.845 } 00:17:28.845 } 00:17:28.845 }' 00:17:29.104 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.104 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:17:29.104 BaseBdev2' 00:17:29.104 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:29.104 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:29.104 21:59:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:29.104 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:29.104 "name": "BaseBdev1", 00:17:29.104 "aliases": [ 00:17:29.104 "3a965ac1-123d-11ef-8c90-4585f0cfab08" 00:17:29.104 ], 00:17:29.104 "product_name": "Malloc disk", 00:17:29.104 "block_size": 4096, 00:17:29.104 "num_blocks": 8192, 00:17:29.104 "uuid": "3a965ac1-123d-11ef-8c90-4585f0cfab08", 00:17:29.104 "md_size": 32, 00:17:29.104 "md_interleave": false, 00:17:29.104 "dif_type": 0, 00:17:29.104 "assigned_rate_limits": { 00:17:29.104 "rw_ios_per_sec": 0, 00:17:29.104 "rw_mbytes_per_sec": 0, 00:17:29.104 "r_mbytes_per_sec": 0, 00:17:29.104 "w_mbytes_per_sec": 0 00:17:29.104 }, 00:17:29.104 "claimed": true, 00:17:29.104 "claim_type": "exclusive_write", 00:17:29.104 "zoned": false, 00:17:29.104 "supported_io_types": { 00:17:29.104 "read": true, 00:17:29.104 "write": true, 00:17:29.104 "unmap": true, 00:17:29.104 "write_zeroes": true, 00:17:29.104 "flush": true, 00:17:29.104 "reset": true, 00:17:29.104 "compare": false, 00:17:29.104 "compare_and_write": false, 00:17:29.104 "abort": true, 00:17:29.104 "nvme_admin": false, 00:17:29.104 "nvme_io": false 00:17:29.104 }, 00:17:29.104 "memory_domains": [ 00:17:29.104 { 00:17:29.104 "dma_device_id": "system", 00:17:29.104 "dma_device_type": 1 00:17:29.104 }, 00:17:29.104 { 00:17:29.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.104 "dma_device_type": 2 00:17:29.104 } 00:17:29.104 ], 00:17:29.104 "driver_specific": {} 00:17:29.104 }' 00:17:29.104 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.104 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:29.362 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:29.620 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:29.620 "name": "BaseBdev2", 00:17:29.620 "aliases": [ 00:17:29.620 "3c0225ab-123d-11ef-8c90-4585f0cfab08" 00:17:29.620 ], 00:17:29.620 "product_name": "Malloc disk", 00:17:29.620 "block_size": 4096, 00:17:29.620 "num_blocks": 8192, 00:17:29.620 "uuid": "3c0225ab-123d-11ef-8c90-4585f0cfab08", 00:17:29.620 "md_size": 32, 00:17:29.620 "md_interleave": false, 00:17:29.620 "dif_type": 0, 00:17:29.620 "assigned_rate_limits": { 00:17:29.620 "rw_ios_per_sec": 0, 00:17:29.620 "rw_mbytes_per_sec": 0, 00:17:29.620 "r_mbytes_per_sec": 0, 00:17:29.620 "w_mbytes_per_sec": 0 00:17:29.620 }, 00:17:29.620 "claimed": true, 00:17:29.620 "claim_type": "exclusive_write", 00:17:29.620 "zoned": false, 00:17:29.620 "supported_io_types": { 00:17:29.620 "read": true, 00:17:29.620 "write": true, 00:17:29.620 "unmap": true, 00:17:29.620 "write_zeroes": true, 00:17:29.620 "flush": true, 00:17:29.620 "reset": true, 00:17:29.620 "compare": false, 00:17:29.620 "compare_and_write": false, 00:17:29.620 "abort": true, 00:17:29.620 "nvme_admin": false, 00:17:29.620 "nvme_io": false 00:17:29.620 }, 00:17:29.620 "memory_domains": [ 00:17:29.620 { 00:17:29.620 "dma_device_id": "system", 00:17:29.620 "dma_device_type": 1 00:17:29.620 }, 00:17:29.620 { 00:17:29.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.620 "dma_device_type": 2 00:17:29.620 } 00:17:29.620 ], 00:17:29.620 "driver_specific": {} 00:17:29.620 }' 00:17:29.620 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.620 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.620 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:29.620 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.620 21:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:29.620 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:29.878 [2024-05-14 21:59:30.289441] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # local expected_state 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:29.878 
21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.878 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.136 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.136 "name": "Existed_Raid", 00:17:30.136 "uuid": "3b801fff-123d-11ef-8c90-4585f0cfab08", 00:17:30.136 "strip_size_kb": 0, 00:17:30.136 "state": "online", 00:17:30.136 "raid_level": "raid1", 00:17:30.136 "superblock": true, 00:17:30.136 "num_base_bdevs": 2, 00:17:30.136 "num_base_bdevs_discovered": 1, 00:17:30.136 "num_base_bdevs_operational": 1, 00:17:30.136 "base_bdevs_list": [ 00:17:30.136 { 00:17:30.136 "name": null, 00:17:30.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.136 "is_configured": false, 00:17:30.136 "data_offset": 256, 00:17:30.136 "data_size": 7936 00:17:30.136 }, 00:17:30.136 { 00:17:30.136 "name": "BaseBdev2", 00:17:30.136 "uuid": "3c0225ab-123d-11ef-8c90-4585f0cfab08", 00:17:30.136 "is_configured": true, 00:17:30.136 "data_offset": 256, 00:17:30.136 "data_size": 7936 00:17:30.136 } 00:17:30.136 ] 00:17:30.136 }' 00:17:30.136 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.136 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:30.394 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:30.394 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:30.394 21:59:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.653 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:30.653 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.653 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:30.911 [2024-05-14 21:59:31.407678] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.911 [2024-05-14 21:59:31.407730] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.911 [2024-05-14 21:59:31.413674] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.911 [2024-05-14 21:59:31.413742] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.911 [2024-05-14 21:59:31.413748] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ce76300 name Existed_Raid, state offline 00:17:30.911 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:30.911 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:30.911 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.911 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@342 -- # killprocess 64182 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 64182 ']' 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 64182 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps -c -o command 64182 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # tail -1 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:31.169 killing process with pid 64182 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64182' 00:17:31.169 21:59:31 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 64182 00:17:31.169 [2024-05-14 21:59:31.714802] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.169 [2024-05-14 21:59:31.714841] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.169 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 64182 00:17:31.428 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@344 -- # return 0 00:17:31.428 00:17:31.428 real 0m8.898s 00:17:31.428 user 0m15.495s 00:17:31.428 sys 0m1.555s 00:17:31.428 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:31.428 ************************************ 00:17:31.428 END TEST raid_state_function_test_sb_md_separate 00:17:31.428 ************************************ 00:17:31.428 21:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.428 21:59:31 bdev_raid -- bdev/bdev_raid.sh@852 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:31.428 21:59:31 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:31.428 21:59:31 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:31.428 21:59:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.428 ************************************ 00:17:31.428 START TEST raid_superblock_test_md_separate 00:17:31.428 ************************************ 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:31.428 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 
-- # raid_pid=64456 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 64456 /var/tmp/spdk-raid.sock 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 64456 ']' 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.429 21:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.429 [2024-05-14 21:59:31.952441] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:31.429 [2024-05-14 21:59:31.952738] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:31.994 EAL: TSC is not safe to use in SMP mode 00:17:31.994 EAL: TSC is not invariant 00:17:31.994 [2024-05-14 21:59:32.479625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.253 [2024-05-14 21:59:32.619123] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:32.253 [2024-05-14 21:59:32.622370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.253 [2024-05-14 21:59:32.623634] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.253 [2024-05-14 21:59:32.623658] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.511 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:32.770 malloc1 00:17:32.770 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.028 [2024-05-14 21:59:33.589552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.029 [2024-05-14 21:59:33.589616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.029 [2024-05-14 21:59:33.590237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a412780 00:17:33.029 [2024-05-14 21:59:33.590270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.029 [2024-05-14 21:59:33.591097] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.029 [2024-05-14 21:59:33.591129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.029 pt1 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.029 21:59:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.029 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:33.287 malloc2 00:17:33.287 21:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.545 [2024-05-14 21:59:34.065560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.545 [2024-05-14 21:59:34.065625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.545 [2024-05-14 21:59:34.065656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a412c80 00:17:33.545 [2024-05-14 21:59:34.065664] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.545 [2024-05-14 21:59:34.066302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.545 [2024-05-14 21:59:34.066324] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.545 pt2 00:17:33.545 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.545 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.545 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:33.803 [2024-05-14 21:59:34.365578] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.803 [2024-05-14 21:59:34.366167] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.803 [2024-05-14 21:59:34.366231] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a417300 00:17:33.803 [2024-05-14 21:59:34.366237] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.803 [2024-05-14 21:59:34.366275] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a475e20 00:17:33.803 [2024-05-14 21:59:34.366306] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a417300 00:17:33.803 [2024-05-14 21:59:34.366310] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a417300 00:17:33.804 [2024-05-14 21:59:34.366327] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:33.804 
21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.804 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.062 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:34.062 "name": "raid_bdev1", 00:17:34.062 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:34.062 "strip_size_kb": 0, 00:17:34.062 "state": "online", 00:17:34.062 "raid_level": "raid1", 00:17:34.062 "superblock": true, 00:17:34.062 "num_base_bdevs": 2, 00:17:34.062 "num_base_bdevs_discovered": 2, 00:17:34.062 "num_base_bdevs_operational": 2, 00:17:34.062 "base_bdevs_list": [ 00:17:34.062 { 00:17:34.062 "name": "pt1", 00:17:34.062 "uuid": "11e5ff4b-3d36-e953-8162-f84f7af965e3", 00:17:34.062 "is_configured": true, 00:17:34.062 "data_offset": 256, 00:17:34.062 "data_size": 7936 00:17:34.062 }, 00:17:34.062 { 00:17:34.062 "name": "pt2", 00:17:34.062 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:34.062 "is_configured": true, 00:17:34.062 "data_offset": 256, 00:17:34.062 "data_size": 7936 00:17:34.062 } 00:17:34.062 ] 00:17:34.062 }' 00:17:34.062 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.062 21:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:34.641 21:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:34.641 [2024-05-14 21:59:35.181630] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.641 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:34.641 "name": "raid_bdev1", 00:17:34.641 "aliases": [ 00:17:34.641 "3fc06ee6-123d-11ef-8c90-4585f0cfab08" 00:17:34.641 ], 00:17:34.641 "product_name": "Raid Volume", 00:17:34.641 
"block_size": 4096, 00:17:34.641 "num_blocks": 7936, 00:17:34.641 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:34.641 "md_size": 32, 00:17:34.641 "md_interleave": false, 00:17:34.641 "dif_type": 0, 00:17:34.641 "assigned_rate_limits": { 00:17:34.641 "rw_ios_per_sec": 0, 00:17:34.641 "rw_mbytes_per_sec": 0, 00:17:34.641 "r_mbytes_per_sec": 0, 00:17:34.641 "w_mbytes_per_sec": 0 00:17:34.641 }, 00:17:34.641 "claimed": false, 00:17:34.641 "zoned": false, 00:17:34.641 "supported_io_types": { 00:17:34.641 "read": true, 00:17:34.641 "write": true, 00:17:34.641 "unmap": false, 00:17:34.641 "write_zeroes": true, 00:17:34.641 "flush": false, 00:17:34.641 "reset": true, 00:17:34.641 "compare": false, 00:17:34.641 "compare_and_write": false, 00:17:34.641 "abort": false, 00:17:34.641 "nvme_admin": false, 00:17:34.641 "nvme_io": false 00:17:34.641 }, 00:17:34.641 "memory_domains": [ 00:17:34.641 { 00:17:34.641 "dma_device_id": "system", 00:17:34.641 "dma_device_type": 1 00:17:34.641 }, 00:17:34.641 { 00:17:34.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.641 "dma_device_type": 2 00:17:34.641 }, 00:17:34.641 { 00:17:34.641 "dma_device_id": "system", 00:17:34.641 "dma_device_type": 1 00:17:34.641 }, 00:17:34.641 { 00:17:34.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.641 "dma_device_type": 2 00:17:34.641 } 00:17:34.641 ], 00:17:34.641 "driver_specific": { 00:17:34.641 "raid": { 00:17:34.641 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:34.642 "strip_size_kb": 0, 00:17:34.642 "state": "online", 00:17:34.642 "raid_level": "raid1", 00:17:34.642 "superblock": true, 00:17:34.642 "num_base_bdevs": 2, 00:17:34.642 "num_base_bdevs_discovered": 2, 00:17:34.642 "num_base_bdevs_operational": 2, 00:17:34.642 "base_bdevs_list": [ 00:17:34.642 { 00:17:34.642 "name": "pt1", 00:17:34.642 "uuid": "11e5ff4b-3d36-e953-8162-f84f7af965e3", 00:17:34.642 "is_configured": true, 00:17:34.642 "data_offset": 256, 00:17:34.642 "data_size": 7936 00:17:34.642 }, 00:17:34.642 { 00:17:34.642 "name": "pt2", 00:17:34.642 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:34.642 "is_configured": true, 00:17:34.642 "data_offset": 256, 00:17:34.642 "data_size": 7936 00:17:34.642 } 00:17:34.642 ] 00:17:34.642 } 00:17:34.642 } 00:17:34.642 }' 00:17:34.642 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.642 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:34.642 pt2' 00:17:34.642 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:34.642 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:34.642 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:34.931 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:34.931 "name": "pt1", 00:17:34.931 "aliases": [ 00:17:34.931 "11e5ff4b-3d36-e953-8162-f84f7af965e3" 00:17:34.931 ], 00:17:34.931 "product_name": "passthru", 00:17:34.931 "block_size": 4096, 00:17:34.931 "num_blocks": 8192, 00:17:34.931 "uuid": "11e5ff4b-3d36-e953-8162-f84f7af965e3", 00:17:34.931 "md_size": 32, 00:17:34.931 "md_interleave": false, 00:17:34.931 "dif_type": 0, 00:17:34.931 "assigned_rate_limits": { 00:17:34.931 
"rw_ios_per_sec": 0, 00:17:34.931 "rw_mbytes_per_sec": 0, 00:17:34.931 "r_mbytes_per_sec": 0, 00:17:34.931 "w_mbytes_per_sec": 0 00:17:34.931 }, 00:17:34.931 "claimed": true, 00:17:34.931 "claim_type": "exclusive_write", 00:17:34.931 "zoned": false, 00:17:34.931 "supported_io_types": { 00:17:34.931 "read": true, 00:17:34.931 "write": true, 00:17:34.931 "unmap": true, 00:17:34.931 "write_zeroes": true, 00:17:34.931 "flush": true, 00:17:34.931 "reset": true, 00:17:34.931 "compare": false, 00:17:34.931 "compare_and_write": false, 00:17:34.931 "abort": true, 00:17:34.931 "nvme_admin": false, 00:17:34.931 "nvme_io": false 00:17:34.931 }, 00:17:34.931 "memory_domains": [ 00:17:34.931 { 00:17:34.931 "dma_device_id": "system", 00:17:34.931 "dma_device_type": 1 00:17:34.931 }, 00:17:34.931 { 00:17:34.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.931 "dma_device_type": 2 00:17:34.931 } 00:17:34.931 ], 00:17:34.931 "driver_specific": { 00:17:34.931 "passthru": { 00:17:34.931 "name": "pt1", 00:17:34.931 "base_bdev_name": "malloc1" 00:17:34.931 } 00:17:34.931 } 00:17:34.931 }' 00:17:34.931 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:34.931 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:34.931 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:34.931 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:35.190 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:35.448 "name": "pt2", 00:17:35.448 "aliases": [ 00:17:35.448 "01010d76-b538-955f-b98c-18a090293a23" 00:17:35.448 ], 00:17:35.448 "product_name": "passthru", 00:17:35.448 "block_size": 4096, 00:17:35.448 "num_blocks": 8192, 00:17:35.448 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:35.448 "md_size": 32, 00:17:35.448 "md_interleave": false, 00:17:35.448 "dif_type": 0, 00:17:35.448 "assigned_rate_limits": { 00:17:35.448 "rw_ios_per_sec": 0, 00:17:35.448 "rw_mbytes_per_sec": 0, 00:17:35.448 "r_mbytes_per_sec": 0, 00:17:35.448 "w_mbytes_per_sec": 0 00:17:35.448 }, 00:17:35.448 "claimed": 
true, 00:17:35.448 "claim_type": "exclusive_write", 00:17:35.448 "zoned": false, 00:17:35.448 "supported_io_types": { 00:17:35.448 "read": true, 00:17:35.448 "write": true, 00:17:35.448 "unmap": true, 00:17:35.448 "write_zeroes": true, 00:17:35.448 "flush": true, 00:17:35.448 "reset": true, 00:17:35.448 "compare": false, 00:17:35.448 "compare_and_write": false, 00:17:35.448 "abort": true, 00:17:35.448 "nvme_admin": false, 00:17:35.448 "nvme_io": false 00:17:35.448 }, 00:17:35.448 "memory_domains": [ 00:17:35.448 { 00:17:35.448 "dma_device_id": "system", 00:17:35.448 "dma_device_type": 1 00:17:35.448 }, 00:17:35.448 { 00:17:35.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.448 "dma_device_type": 2 00:17:35.448 } 00:17:35.448 ], 00:17:35.448 "driver_specific": { 00:17:35.448 "passthru": { 00:17:35.448 "name": "pt2", 00:17:35.448 "base_bdev_name": "malloc2" 00:17:35.448 } 00:17:35.448 } 00:17:35.448 }' 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:35.448 21:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:35.707 [2024-05-14 21:59:36.193629] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.707 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3fc06ee6-123d-11ef-8c90-4585f0cfab08 00:17:35.707 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 3fc06ee6-123d-11ef-8c90-4585f0cfab08 ']' 00:17:35.707 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:35.965 [2024-05-14 21:59:36.461585] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.965 [2024-05-14 21:59:36.461610] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.965 [2024-05-14 21:59:36.461633] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.965 [2024-05-14 
21:59:36.461647] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.965 [2024-05-14 21:59:36.461652] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a417300 name raid_bdev1, state offline 00:17:35.965 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.965 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:36.222 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:36.222 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:36.222 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:36.222 21:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:36.479 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:36.479 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:36.737 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:36.737 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.995 21:59:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:36.995 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:37.253 [2024-05-14 21:59:37.817624] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:37.254 [2024-05-14 21:59:37.818196] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:37.254 [2024-05-14 21:59:37.818222] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:37.254 [2024-05-14 21:59:37.818261] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:37.254 [2024-05-14 21:59:37.818272] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.254 [2024-05-14 21:59:37.818277] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a417300 name raid_bdev1, state configuring 00:17:37.254 request: 00:17:37.254 { 00:17:37.254 "name": "raid_bdev1", 00:17:37.254 "raid_level": "raid1", 00:17:37.254 "base_bdevs": [ 00:17:37.254 "malloc1", 00:17:37.254 "malloc2" 00:17:37.254 ], 00:17:37.254 "superblock": false, 00:17:37.254 "method": "bdev_raid_create", 00:17:37.254 "req_id": 1 00:17:37.254 } 00:17:37.254 Got JSON-RPC error response 00:17:37.254 response: 00:17:37.254 { 00:17:37.254 "code": -17, 00:17:37.254 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:37.254 } 00:17:37.254 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:17:37.254 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:37.254 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:37.254 21:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:37.254 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.254 21:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:37.512 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:37.512 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:37.512 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:37.770 [2024-05-14 21:59:38.353631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:37.770 [2024-05-14 21:59:38.353695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.770 [2024-05-14 21:59:38.353725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a412c80 00:17:37.770 [2024-05-14 21:59:38.353733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.770 [2024-05-14 21:59:38.354332] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:37.770 [2024-05-14 21:59:38.354358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:37.770 [2024-05-14 21:59:38.354384] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:37.770 [2024-05-14 21:59:38.354396] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.770 pt1 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.030 "name": "raid_bdev1", 00:17:38.030 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:38.030 "strip_size_kb": 0, 00:17:38.030 "state": "configuring", 00:17:38.030 "raid_level": "raid1", 00:17:38.030 "superblock": true, 00:17:38.030 "num_base_bdevs": 2, 00:17:38.030 "num_base_bdevs_discovered": 1, 00:17:38.030 "num_base_bdevs_operational": 2, 00:17:38.030 "base_bdevs_list": [ 00:17:38.030 { 00:17:38.030 "name": "pt1", 00:17:38.030 "uuid": "11e5ff4b-3d36-e953-8162-f84f7af965e3", 00:17:38.030 "is_configured": true, 00:17:38.030 "data_offset": 256, 00:17:38.030 "data_size": 7936 00:17:38.030 }, 00:17:38.030 { 00:17:38.030 "name": null, 00:17:38.030 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:38.030 "is_configured": false, 00:17:38.030 "data_offset": 256, 00:17:38.030 "data_size": 7936 00:17:38.030 } 00:17:38.030 ] 00:17:38.030 }' 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.030 21:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.597 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:38.597 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:38.597 21:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:38.597 21:59:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:38.856 [2024-05-14 21:59:39.217649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:38.856 [2024-05-14 21:59:39.217724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.856 [2024-05-14 21:59:39.217753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a412f00 00:17:38.856 [2024-05-14 21:59:39.217762] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.856 [2024-05-14 21:59:39.217835] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.856 [2024-05-14 21:59:39.217846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:38.856 [2024-05-14 21:59:39.217869] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:38.856 [2024-05-14 21:59:39.217877] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:38.856 [2024-05-14 21:59:39.217894] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a417300 00:17:38.856 [2024-05-14 21:59:39.217898] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:38.856 [2024-05-14 21:59:39.217918] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a475e20 00:17:38.856 [2024-05-14 21:59:39.217940] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a417300 00:17:38.856 [2024-05-14 21:59:39.217943] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a417300 00:17:38.856 [2024-05-14 21:59:39.217959] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.856 pt2 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:38.856 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.115 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.115 "name": "raid_bdev1", 00:17:39.115 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:39.115 "strip_size_kb": 0, 00:17:39.115 "state": "online", 00:17:39.115 "raid_level": "raid1", 00:17:39.115 "superblock": true, 00:17:39.115 "num_base_bdevs": 2, 00:17:39.115 "num_base_bdevs_discovered": 2, 00:17:39.115 "num_base_bdevs_operational": 2, 00:17:39.115 "base_bdevs_list": [ 00:17:39.115 { 00:17:39.115 "name": "pt1", 00:17:39.115 "uuid": "11e5ff4b-3d36-e953-8162-f84f7af965e3", 00:17:39.115 "is_configured": true, 00:17:39.115 "data_offset": 256, 00:17:39.115 "data_size": 7936 00:17:39.115 }, 00:17:39.115 { 00:17:39.115 "name": "pt2", 00:17:39.115 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:39.115 "is_configured": true, 00:17:39.115 "data_offset": 256, 00:17:39.115 "data_size": 7936 00:17:39.115 } 00:17:39.115 ] 00:17:39.115 }' 00:17:39.115 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.115 21:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.373 21:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:39.665 [2024-05-14 21:59:40.125776] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.665 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:39.665 "name": "raid_bdev1", 00:17:39.665 "aliases": [ 00:17:39.665 "3fc06ee6-123d-11ef-8c90-4585f0cfab08" 00:17:39.665 ], 00:17:39.665 "product_name": "Raid Volume", 00:17:39.665 "block_size": 4096, 00:17:39.665 "num_blocks": 7936, 00:17:39.665 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:39.665 "md_size": 32, 00:17:39.665 "md_interleave": false, 00:17:39.665 "dif_type": 0, 00:17:39.665 "assigned_rate_limits": { 00:17:39.665 "rw_ios_per_sec": 0, 00:17:39.665 "rw_mbytes_per_sec": 0, 00:17:39.665 "r_mbytes_per_sec": 0, 00:17:39.665 "w_mbytes_per_sec": 0 00:17:39.665 }, 00:17:39.665 "claimed": false, 00:17:39.665 "zoned": false, 00:17:39.665 "supported_io_types": { 00:17:39.665 "read": true, 00:17:39.665 "write": true, 00:17:39.665 "unmap": false, 00:17:39.665 "write_zeroes": true, 00:17:39.665 "flush": false, 00:17:39.665 "reset": true, 00:17:39.665 "compare": false, 
00:17:39.665 "compare_and_write": false, 00:17:39.665 "abort": false, 00:17:39.665 "nvme_admin": false, 00:17:39.665 "nvme_io": false 00:17:39.665 }, 00:17:39.665 "memory_domains": [ 00:17:39.665 { 00:17:39.665 "dma_device_id": "system", 00:17:39.665 "dma_device_type": 1 00:17:39.665 }, 00:17:39.665 { 00:17:39.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.665 "dma_device_type": 2 00:17:39.665 }, 00:17:39.665 { 00:17:39.665 "dma_device_id": "system", 00:17:39.665 "dma_device_type": 1 00:17:39.665 }, 00:17:39.665 { 00:17:39.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.665 "dma_device_type": 2 00:17:39.665 } 00:17:39.665 ], 00:17:39.665 "driver_specific": { 00:17:39.665 "raid": { 00:17:39.665 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:39.665 "strip_size_kb": 0, 00:17:39.665 "state": "online", 00:17:39.665 "raid_level": "raid1", 00:17:39.665 "superblock": true, 00:17:39.665 "num_base_bdevs": 2, 00:17:39.665 "num_base_bdevs_discovered": 2, 00:17:39.665 "num_base_bdevs_operational": 2, 00:17:39.665 "base_bdevs_list": [ 00:17:39.665 { 00:17:39.665 "name": "pt1", 00:17:39.665 "uuid": "11e5ff4b-3d36-e953-8162-f84f7af965e3", 00:17:39.665 "is_configured": true, 00:17:39.665 "data_offset": 256, 00:17:39.665 "data_size": 7936 00:17:39.665 }, 00:17:39.665 { 00:17:39.665 "name": "pt2", 00:17:39.665 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:39.665 "is_configured": true, 00:17:39.666 "data_offset": 256, 00:17:39.666 "data_size": 7936 00:17:39.666 } 00:17:39.666 ] 00:17:39.666 } 00:17:39.666 } 00:17:39.666 }' 00:17:39.666 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.666 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:39.666 pt2' 00:17:39.666 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:39.666 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:39.666 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:39.924 "name": "pt1", 00:17:39.924 "aliases": [ 00:17:39.924 "11e5ff4b-3d36-e953-8162-f84f7af965e3" 00:17:39.924 ], 00:17:39.924 "product_name": "passthru", 00:17:39.924 "block_size": 4096, 00:17:39.924 "num_blocks": 8192, 00:17:39.924 "uuid": "11e5ff4b-3d36-e953-8162-f84f7af965e3", 00:17:39.924 "md_size": 32, 00:17:39.924 "md_interleave": false, 00:17:39.924 "dif_type": 0, 00:17:39.924 "assigned_rate_limits": { 00:17:39.924 "rw_ios_per_sec": 0, 00:17:39.924 "rw_mbytes_per_sec": 0, 00:17:39.924 "r_mbytes_per_sec": 0, 00:17:39.924 "w_mbytes_per_sec": 0 00:17:39.924 }, 00:17:39.924 "claimed": true, 00:17:39.924 "claim_type": "exclusive_write", 00:17:39.924 "zoned": false, 00:17:39.924 "supported_io_types": { 00:17:39.924 "read": true, 00:17:39.924 "write": true, 00:17:39.924 "unmap": true, 00:17:39.924 "write_zeroes": true, 00:17:39.924 "flush": true, 00:17:39.924 "reset": true, 00:17:39.924 "compare": false, 00:17:39.924 "compare_and_write": false, 00:17:39.924 "abort": true, 00:17:39.924 "nvme_admin": false, 00:17:39.924 "nvme_io": false 00:17:39.924 }, 00:17:39.924 "memory_domains": [ 00:17:39.924 { 00:17:39.924 
"dma_device_id": "system", 00:17:39.924 "dma_device_type": 1 00:17:39.924 }, 00:17:39.924 { 00:17:39.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.924 "dma_device_type": 2 00:17:39.924 } 00:17:39.924 ], 00:17:39.924 "driver_specific": { 00:17:39.924 "passthru": { 00:17:39.924 "name": "pt1", 00:17:39.924 "base_bdev_name": "malloc1" 00:17:39.924 } 00:17:39.924 } 00:17:39.924 }' 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:39.924 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:40.491 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:40.491 "name": "pt2", 00:17:40.491 "aliases": [ 00:17:40.491 "01010d76-b538-955f-b98c-18a090293a23" 00:17:40.491 ], 00:17:40.491 "product_name": "passthru", 00:17:40.491 "block_size": 4096, 00:17:40.491 "num_blocks": 8192, 00:17:40.491 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:40.491 "md_size": 32, 00:17:40.491 "md_interleave": false, 00:17:40.491 "dif_type": 0, 00:17:40.491 "assigned_rate_limits": { 00:17:40.491 "rw_ios_per_sec": 0, 00:17:40.491 "rw_mbytes_per_sec": 0, 00:17:40.491 "r_mbytes_per_sec": 0, 00:17:40.491 "w_mbytes_per_sec": 0 00:17:40.491 }, 00:17:40.491 "claimed": true, 00:17:40.491 "claim_type": "exclusive_write", 00:17:40.491 "zoned": false, 00:17:40.491 "supported_io_types": { 00:17:40.491 "read": true, 00:17:40.491 "write": true, 00:17:40.491 "unmap": true, 00:17:40.491 "write_zeroes": true, 00:17:40.491 "flush": true, 00:17:40.491 "reset": true, 00:17:40.491 "compare": false, 00:17:40.491 "compare_and_write": false, 00:17:40.491 "abort": true, 00:17:40.491 "nvme_admin": false, 00:17:40.491 "nvme_io": false 00:17:40.491 }, 00:17:40.491 "memory_domains": [ 00:17:40.491 { 00:17:40.491 "dma_device_id": "system", 00:17:40.491 "dma_device_type": 1 00:17:40.491 }, 00:17:40.491 { 00:17:40.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.491 
"dma_device_type": 2 00:17:40.491 } 00:17:40.491 ], 00:17:40.491 "driver_specific": { 00:17:40.491 "passthru": { 00:17:40.491 "name": "pt2", 00:17:40.491 "base_bdev_name": "malloc2" 00:17:40.491 } 00:17:40.491 } 00:17:40.491 }' 00:17:40.491 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:40.491 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:40.491 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:17:40.491 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:40.492 21:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:40.492 [2024-05-14 21:59:41.077860] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.749 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 3fc06ee6-123d-11ef-8c90-4585f0cfab08 '!=' 3fc06ee6-123d-11ef-8c90-4585f0cfab08 ']' 00:17:40.749 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:40.749 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:40.749 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:17:40.749 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:41.007 [2024-05-14 21:59:41.369840] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:41.007 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.007 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.007 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:41.007 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.007 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.008 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 
00:17:41.008 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.008 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.008 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.008 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.008 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.008 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.266 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.266 "name": "raid_bdev1", 00:17:41.266 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:41.266 "strip_size_kb": 0, 00:17:41.266 "state": "online", 00:17:41.266 "raid_level": "raid1", 00:17:41.266 "superblock": true, 00:17:41.266 "num_base_bdevs": 2, 00:17:41.266 "num_base_bdevs_discovered": 1, 00:17:41.266 "num_base_bdevs_operational": 1, 00:17:41.266 "base_bdevs_list": [ 00:17:41.266 { 00:17:41.266 "name": null, 00:17:41.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.266 "is_configured": false, 00:17:41.266 "data_offset": 256, 00:17:41.266 "data_size": 7936 00:17:41.266 }, 00:17:41.266 { 00:17:41.266 "name": "pt2", 00:17:41.266 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:41.266 "is_configured": true, 00:17:41.266 "data_offset": 256, 00:17:41.266 "data_size": 7936 00:17:41.266 } 00:17:41.266 ] 00:17:41.266 }' 00:17:41.266 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.266 21:59:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.525 21:59:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:41.783 [2024-05-14 21:59:42.261850] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.783 [2024-05-14 21:59:42.261875] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.783 [2024-05-14 21:59:42.261896] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.783 [2024-05-14 21:59:42.261908] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.783 [2024-05-14 21:59:42.261912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a417300 name raid_bdev1, state offline 00:17:41.783 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.783 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:42.042 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:42.042 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:42.042 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:42.042 21:59:42 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:42.042 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:42.299 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:42.299 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:42.299 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:42.299 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:42.299 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:42.299 21:59:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.556 [2024-05-14 21:59:43.017864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.556 [2024-05-14 21:59:43.017919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.556 [2024-05-14 21:59:43.017946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a412f00 00:17:42.556 [2024-05-14 21:59:43.017955] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.556 [2024-05-14 21:59:43.018642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.556 [2024-05-14 21:59:43.018668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.556 [2024-05-14 21:59:43.018694] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:42.556 [2024-05-14 21:59:43.018705] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.556 [2024-05-14 21:59:43.018720] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a417300 00:17:42.556 [2024-05-14 21:59:43.018724] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.556 [2024-05-14 21:59:43.018743] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a475e20 00:17:42.556 [2024-05-14 21:59:43.018767] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a417300 00:17:42.556 [2024-05-14 21:59:43.018771] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a417300 00:17:42.556 [2024-05-14 21:59:43.018785] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.556 pt2 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=1 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.556 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.814 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.814 "name": "raid_bdev1", 00:17:42.814 "uuid": "3fc06ee6-123d-11ef-8c90-4585f0cfab08", 00:17:42.814 "strip_size_kb": 0, 00:17:42.814 "state": "online", 00:17:42.814 "raid_level": "raid1", 00:17:42.814 "superblock": true, 00:17:42.814 "num_base_bdevs": 2, 00:17:42.814 "num_base_bdevs_discovered": 1, 00:17:42.814 "num_base_bdevs_operational": 1, 00:17:42.814 "base_bdevs_list": [ 00:17:42.814 { 00:17:42.814 "name": null, 00:17:42.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.814 "is_configured": false, 00:17:42.814 "data_offset": 256, 00:17:42.814 "data_size": 7936 00:17:42.814 }, 00:17:42.814 { 00:17:42.814 "name": "pt2", 00:17:42.814 "uuid": "01010d76-b538-955f-b98c-18a090293a23", 00:17:42.814 "is_configured": true, 00:17:42.814 "data_offset": 256, 00:17:42.814 "data_size": 7936 00:17:42.814 } 00:17:42.814 ] 00:17:42.814 }' 00:17:42.814 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.814 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.378 [2024-05-14 21:59:43.929942] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # '[' 3fc06ee6-123d-11ef-8c90-4585f0cfab08 '!=' 3fc06ee6-123d-11ef-8c90-4585f0cfab08 ']' 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@568 -- # killprocess 64456 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 64456 ']' 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 64456 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps -c -o command 64456 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 
-- # tail -1 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:17:43.378 killing process with pid 64456 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64456' 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 64456 00:17:43.378 [2024-05-14 21:59:43.960066] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.378 [2024-05-14 21:59:43.960093] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.378 [2024-05-14 21:59:43.960105] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.378 [2024-05-14 21:59:43.960110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a417300 name raid_bdev1, state offline 00:17:43.378 21:59:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 64456 00:17:43.637 [2024-05-14 21:59:43.971748] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.637 21:59:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # return 0 00:17:43.637 00:17:43.637 real 0m12.215s 00:17:43.637 user 0m21.384s 00:17:43.637 sys 0m2.287s 00:17:43.637 21:59:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:43.637 ************************************ 00:17:43.637 END TEST raid_superblock_test_md_separate 00:17:43.637 ************************************ 00:17:43.637 21:59:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.637 21:59:44 bdev_raid -- bdev/bdev_raid.sh@853 -- # '[' '' = true ']' 00:17:43.637 21:59:44 bdev_raid -- bdev/bdev_raid.sh@857 -- # base_malloc_params='-m 32 -i' 00:17:43.637 21:59:44 bdev_raid -- bdev/bdev_raid.sh@858 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:43.637 21:59:44 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:43.637 21:59:44 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:43.637 21:59:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.637 ************************************ 00:17:43.637 START TEST raid_state_function_test_sb_md_interleaved 00:17:43.637 ************************************ 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:43.637 21:59:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # raid_pid=64801 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 64801' 00:17:43.637 Process raid pid: 64801 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@247 -- # waitforlisten 64801 /var/tmp/spdk-raid.sock 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 64801 ']' 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:43.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.637 21:59:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.637 [2024-05-14 21:59:44.218532] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:43.637 [2024-05-14 21:59:44.218713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:44.205 EAL: TSC is not safe to use in SMP mode 00:17:44.205 EAL: TSC is not invariant 00:17:44.205 [2024-05-14 21:59:44.776832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.462 [2024-05-14 21:59:44.879629] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:44.462 [2024-05-14 21:59:44.882380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.462 [2024-05-14 21:59:44.883380] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.462 [2024-05-14 21:59:44.883398] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.028 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:45.028 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:17:45.028 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:45.028 [2024-05-14 21:59:45.561522] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.028 [2024-05-14 21:59:45.561584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.028 [2024-05-14 21:59:45.561589] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.028 [2024-05-14 21:59:45.561608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.029 
21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.029 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.287 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.287 "name": "Existed_Raid", 00:17:45.287 "uuid": "466cccaa-123d-11ef-8c90-4585f0cfab08", 00:17:45.287 "strip_size_kb": 0, 00:17:45.287 "state": "configuring", 00:17:45.287 "raid_level": "raid1", 00:17:45.287 "superblock": true, 00:17:45.287 "num_base_bdevs": 2, 00:17:45.287 "num_base_bdevs_discovered": 0, 00:17:45.287 "num_base_bdevs_operational": 2, 00:17:45.287 "base_bdevs_list": [ 00:17:45.287 { 00:17:45.287 "name": "BaseBdev1", 00:17:45.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.287 "is_configured": false, 00:17:45.287 "data_offset": 0, 00:17:45.287 "data_size": 0 00:17:45.287 }, 00:17:45.287 { 00:17:45.287 "name": "BaseBdev2", 00:17:45.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.287 "is_configured": false, 00:17:45.287 "data_offset": 0, 00:17:45.287 "data_size": 0 00:17:45.287 } 00:17:45.287 ] 00:17:45.287 }' 00:17:45.287 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.287 21:59:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.853 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:45.853 [2024-05-14 21:59:46.397519] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.853 [2024-05-14 21:59:46.397554] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b66300 name Existed_Raid, state configuring 00:17:45.853 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:46.112 [2024-05-14 21:59:46.669538] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.112 [2024-05-14 21:59:46.669605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.112 [2024-05-14 21:59:46.669612] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.112 [2024-05-14 21:59:46.669621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.112 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:46.370 [2024-05-14 21:59:46.942464] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.370 BaseBdev1 00:17:46.627 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:17:46.627 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:46.627 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:46.627 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:17:46.627 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:46.627 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:46.627 21:59:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.627 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:46.884 [ 00:17:46.884 { 00:17:46.884 "name": "BaseBdev1", 00:17:46.884 "aliases": [ 00:17:46.884 "473f5f8b-123d-11ef-8c90-4585f0cfab08" 00:17:46.884 ], 00:17:46.884 "product_name": "Malloc disk", 00:17:46.884 "block_size": 4128, 00:17:46.884 "num_blocks": 8192, 00:17:46.884 "uuid": "473f5f8b-123d-11ef-8c90-4585f0cfab08", 00:17:46.884 "md_size": 32, 00:17:46.884 "md_interleave": true, 00:17:46.884 "dif_type": 0, 00:17:46.884 "assigned_rate_limits": { 00:17:46.884 "rw_ios_per_sec": 0, 00:17:46.884 "rw_mbytes_per_sec": 0, 00:17:46.884 "r_mbytes_per_sec": 0, 00:17:46.884 "w_mbytes_per_sec": 0 00:17:46.884 }, 00:17:46.884 "claimed": true, 00:17:46.884 "claim_type": "exclusive_write", 00:17:46.884 "zoned": false, 00:17:46.884 "supported_io_types": { 00:17:46.884 "read": true, 00:17:46.884 "write": true, 00:17:46.884 "unmap": true, 00:17:46.884 "write_zeroes": true, 00:17:46.884 "flush": true, 00:17:46.884 "reset": true, 00:17:46.884 "compare": false, 00:17:46.884 "compare_and_write": false, 00:17:46.884 "abort": true, 00:17:46.884 "nvme_admin": false, 00:17:46.884 "nvme_io": false 00:17:46.884 }, 00:17:46.884 "memory_domains": [ 00:17:46.884 { 00:17:46.884 "dma_device_id": "system", 00:17:46.884 "dma_device_type": 1 00:17:46.884 }, 00:17:46.884 { 00:17:46.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.884 "dma_device_type": 2 00:17:46.884 } 00:17:46.884 ], 00:17:46.884 "driver_specific": {} 00:17:46.884 } 00:17:46.884 ] 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.884 21:59:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.884 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.448 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.448 "name": "Existed_Raid", 00:17:47.448 "uuid": "4715de82-123d-11ef-8c90-4585f0cfab08", 00:17:47.448 "strip_size_kb": 0, 00:17:47.448 "state": "configuring", 00:17:47.448 "raid_level": "raid1", 00:17:47.448 "superblock": true, 00:17:47.448 "num_base_bdevs": 2, 00:17:47.448 "num_base_bdevs_discovered": 1, 00:17:47.448 "num_base_bdevs_operational": 2, 00:17:47.448 "base_bdevs_list": [ 00:17:47.448 { 00:17:47.448 "name": "BaseBdev1", 00:17:47.448 "uuid": "473f5f8b-123d-11ef-8c90-4585f0cfab08", 00:17:47.448 "is_configured": true, 00:17:47.448 "data_offset": 256, 00:17:47.448 "data_size": 7936 00:17:47.448 }, 00:17:47.448 { 00:17:47.448 "name": "BaseBdev2", 00:17:47.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.448 "is_configured": false, 00:17:47.448 "data_offset": 0, 00:17:47.448 "data_size": 0 00:17:47.448 } 00:17:47.448 ] 00:17:47.448 }' 00:17:47.449 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.449 21:59:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.705 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:47.962 [2024-05-14 21:59:48.305542] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.962 [2024-05-14 21:59:48.305583] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b66300 name Existed_Raid, state configuring 00:17:47.962 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:48.220 [2024-05-14 21:59:48.625569] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.220 [2024-05-14 21:59:48.626390] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.220 [2024-05-14 21:59:48.626435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.220 21:59:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.220 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.478 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.478 "name": "Existed_Raid", 00:17:48.478 "uuid": "484055cd-123d-11ef-8c90-4585f0cfab08", 00:17:48.478 "strip_size_kb": 0, 00:17:48.478 "state": "configuring", 00:17:48.478 "raid_level": "raid1", 00:17:48.478 "superblock": true, 00:17:48.478 "num_base_bdevs": 2, 00:17:48.478 "num_base_bdevs_discovered": 1, 00:17:48.478 "num_base_bdevs_operational": 2, 00:17:48.478 "base_bdevs_list": [ 00:17:48.478 { 00:17:48.478 "name": "BaseBdev1", 00:17:48.478 "uuid": "473f5f8b-123d-11ef-8c90-4585f0cfab08", 00:17:48.478 "is_configured": true, 00:17:48.478 "data_offset": 256, 00:17:48.478 "data_size": 7936 00:17:48.478 }, 00:17:48.478 { 00:17:48.478 "name": "BaseBdev2", 00:17:48.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.478 "is_configured": false, 00:17:48.478 "data_offset": 0, 00:17:48.478 "data_size": 0 00:17:48.478 } 00:17:48.478 ] 00:17:48.478 }' 00:17:48.478 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.478 21:59:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.736 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:48.994 [2024-05-14 21:59:49.541687] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.994 [2024-05-14 21:59:49.541786] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x829b66300 00:17:48.994 [2024-05-14 21:59:49.541793] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:48.994 [2024-05-14 21:59:49.541818] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829bc4e20 00:17:48.994 [2024-05-14 21:59:49.541842] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829b66300 00:17:48.994 [2024-05-14 21:59:49.541846] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name Existed_Raid, raid_bdev 0x829b66300 00:17:48.994 [2024-05-14 21:59:49.541861] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.994 BaseBdev2 00:17:48.994 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:17:48.994 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:48.994 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:48.994 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:17:48.994 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:48.994 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:48.994 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.252 21:59:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:49.510 [ 00:17:49.510 { 00:17:49.510 "name": "BaseBdev2", 00:17:49.510 "aliases": [ 00:17:49.510 "48cc1d03-123d-11ef-8c90-4585f0cfab08" 00:17:49.510 ], 00:17:49.510 "product_name": "Malloc disk", 00:17:49.510 "block_size": 4128, 00:17:49.510 "num_blocks": 8192, 00:17:49.510 "uuid": "48cc1d03-123d-11ef-8c90-4585f0cfab08", 00:17:49.510 "md_size": 32, 00:17:49.510 "md_interleave": true, 00:17:49.510 "dif_type": 0, 00:17:49.510 "assigned_rate_limits": { 00:17:49.510 "rw_ios_per_sec": 0, 00:17:49.510 "rw_mbytes_per_sec": 0, 00:17:49.510 "r_mbytes_per_sec": 0, 00:17:49.510 "w_mbytes_per_sec": 0 00:17:49.510 }, 00:17:49.510 "claimed": true, 00:17:49.510 "claim_type": "exclusive_write", 00:17:49.510 "zoned": false, 00:17:49.510 "supported_io_types": { 00:17:49.510 "read": true, 00:17:49.510 "write": true, 00:17:49.510 "unmap": true, 00:17:49.510 "write_zeroes": true, 00:17:49.510 "flush": true, 00:17:49.510 "reset": true, 00:17:49.510 "compare": false, 00:17:49.510 "compare_and_write": false, 00:17:49.510 "abort": true, 00:17:49.510 "nvme_admin": false, 00:17:49.510 "nvme_io": false 00:17:49.510 }, 00:17:49.510 "memory_domains": [ 00:17:49.510 { 00:17:49.510 "dma_device_id": "system", 00:17:49.510 "dma_device_type": 1 00:17:49.510 }, 00:17:49.510 { 00:17:49.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.511 "dma_device_type": 2 00:17:49.511 } 00:17:49.511 ], 00:17:49.511 "driver_specific": {} 00:17:49.511 } 00:17:49.511 ] 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:49.770 21:59:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.770 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.028 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.028 "name": "Existed_Raid", 00:17:50.028 "uuid": "484055cd-123d-11ef-8c90-4585f0cfab08", 00:17:50.028 "strip_size_kb": 0, 00:17:50.028 "state": "online", 00:17:50.028 "raid_level": "raid1", 00:17:50.028 "superblock": true, 00:17:50.028 "num_base_bdevs": 2, 00:17:50.028 "num_base_bdevs_discovered": 2, 00:17:50.028 "num_base_bdevs_operational": 2, 00:17:50.028 "base_bdevs_list": [ 00:17:50.028 { 00:17:50.028 "name": "BaseBdev1", 00:17:50.028 "uuid": "473f5f8b-123d-11ef-8c90-4585f0cfab08", 00:17:50.028 "is_configured": true, 00:17:50.028 "data_offset": 256, 00:17:50.028 "data_size": 7936 00:17:50.028 }, 00:17:50.028 { 00:17:50.028 "name": "BaseBdev2", 00:17:50.028 "uuid": "48cc1d03-123d-11ef-8c90-4585f0cfab08", 00:17:50.028 "is_configured": true, 00:17:50.028 "data_offset": 256, 00:17:50.028 "data_size": 7936 00:17:50.028 } 00:17:50.028 ] 00:17:50.028 }' 00:17:50.028 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.028 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:50.286 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:50.545 [2024-05-14 21:59:50.965666] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.545 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:50.545 "name": "Existed_Raid", 00:17:50.545 "aliases": [ 00:17:50.545 "484055cd-123d-11ef-8c90-4585f0cfab08" 00:17:50.545 ], 00:17:50.545 "product_name": "Raid Volume", 00:17:50.545 "block_size": 4128, 00:17:50.545 "num_blocks": 7936, 00:17:50.545 "uuid": "484055cd-123d-11ef-8c90-4585f0cfab08", 00:17:50.545 "md_size": 32, 00:17:50.545 "md_interleave": true, 00:17:50.545 "dif_type": 0, 00:17:50.545 "assigned_rate_limits": { 00:17:50.545 "rw_ios_per_sec": 0, 00:17:50.545 "rw_mbytes_per_sec": 0, 00:17:50.545 "r_mbytes_per_sec": 0, 00:17:50.545 "w_mbytes_per_sec": 0 00:17:50.545 }, 00:17:50.545 "claimed": false, 00:17:50.545 "zoned": false, 00:17:50.545 "supported_io_types": { 00:17:50.545 "read": true, 00:17:50.545 "write": true, 00:17:50.545 "unmap": false, 00:17:50.545 "write_zeroes": true, 00:17:50.545 "flush": false, 00:17:50.545 "reset": true, 00:17:50.545 "compare": false, 00:17:50.545 "compare_and_write": false, 00:17:50.545 "abort": false, 00:17:50.545 "nvme_admin": false, 00:17:50.545 "nvme_io": false 00:17:50.545 }, 00:17:50.545 "memory_domains": [ 00:17:50.545 { 00:17:50.545 "dma_device_id": "system", 00:17:50.545 "dma_device_type": 1 00:17:50.545 }, 00:17:50.545 { 00:17:50.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.545 "dma_device_type": 2 00:17:50.545 }, 00:17:50.545 { 00:17:50.545 "dma_device_id": "system", 00:17:50.545 "dma_device_type": 1 00:17:50.545 }, 00:17:50.545 { 00:17:50.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.545 "dma_device_type": 2 00:17:50.545 } 00:17:50.545 ], 00:17:50.545 "driver_specific": { 00:17:50.545 "raid": { 00:17:50.545 "uuid": "484055cd-123d-11ef-8c90-4585f0cfab08", 00:17:50.545 "strip_size_kb": 0, 00:17:50.545 "state": "online", 00:17:50.545 "raid_level": "raid1", 00:17:50.545 "superblock": true, 00:17:50.545 "num_base_bdevs": 2, 00:17:50.545 "num_base_bdevs_discovered": 2, 00:17:50.545 "num_base_bdevs_operational": 2, 00:17:50.545 "base_bdevs_list": [ 00:17:50.545 { 00:17:50.545 "name": "BaseBdev1", 00:17:50.545 "uuid": "473f5f8b-123d-11ef-8c90-4585f0cfab08", 00:17:50.545 "is_configured": true, 00:17:50.545 "data_offset": 256, 00:17:50.545 "data_size": 7936 00:17:50.545 }, 00:17:50.545 { 00:17:50.545 "name": "BaseBdev2", 00:17:50.545 "uuid": "48cc1d03-123d-11ef-8c90-4585f0cfab08", 00:17:50.545 "is_configured": true, 00:17:50.545 "data_offset": 256, 00:17:50.545 "data_size": 7936 00:17:50.545 } 00:17:50.545 ] 00:17:50.545 } 00:17:50.545 } 00:17:50.545 }' 00:17:50.545 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.545 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:17:50.545 BaseBdev2' 00:17:50.545 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:50.545 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b BaseBdev1 00:17:50.545 21:59:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:50.804 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:50.804 "name": "BaseBdev1", 00:17:50.804 "aliases": [ 00:17:50.805 "473f5f8b-123d-11ef-8c90-4585f0cfab08" 00:17:50.805 ], 00:17:50.805 "product_name": "Malloc disk", 00:17:50.805 "block_size": 4128, 00:17:50.805 "num_blocks": 8192, 00:17:50.805 "uuid": "473f5f8b-123d-11ef-8c90-4585f0cfab08", 00:17:50.805 "md_size": 32, 00:17:50.805 "md_interleave": true, 00:17:50.805 "dif_type": 0, 00:17:50.805 "assigned_rate_limits": { 00:17:50.805 "rw_ios_per_sec": 0, 00:17:50.805 "rw_mbytes_per_sec": 0, 00:17:50.805 "r_mbytes_per_sec": 0, 00:17:50.805 "w_mbytes_per_sec": 0 00:17:50.805 }, 00:17:50.805 "claimed": true, 00:17:50.805 "claim_type": "exclusive_write", 00:17:50.805 "zoned": false, 00:17:50.805 "supported_io_types": { 00:17:50.805 "read": true, 00:17:50.805 "write": true, 00:17:50.805 "unmap": true, 00:17:50.805 "write_zeroes": true, 00:17:50.805 "flush": true, 00:17:50.805 "reset": true, 00:17:50.805 "compare": false, 00:17:50.805 "compare_and_write": false, 00:17:50.805 "abort": true, 00:17:50.805 "nvme_admin": false, 00:17:50.805 "nvme_io": false 00:17:50.805 }, 00:17:50.805 "memory_domains": [ 00:17:50.805 { 00:17:50.805 "dma_device_id": "system", 00:17:50.805 "dma_device_type": 1 00:17:50.805 }, 00:17:50.805 { 00:17:50.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.805 "dma_device_type": 2 00:17:50.805 } 00:17:50.805 ], 00:17:50.805 "driver_specific": {} 00:17:50.805 }' 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:50.805 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq 
'.[]' 00:17:51.063 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:51.064 "name": "BaseBdev2", 00:17:51.064 "aliases": [ 00:17:51.064 "48cc1d03-123d-11ef-8c90-4585f0cfab08" 00:17:51.064 ], 00:17:51.064 "product_name": "Malloc disk", 00:17:51.064 "block_size": 4128, 00:17:51.064 "num_blocks": 8192, 00:17:51.064 "uuid": "48cc1d03-123d-11ef-8c90-4585f0cfab08", 00:17:51.064 "md_size": 32, 00:17:51.064 "md_interleave": true, 00:17:51.064 "dif_type": 0, 00:17:51.064 "assigned_rate_limits": { 00:17:51.064 "rw_ios_per_sec": 0, 00:17:51.064 "rw_mbytes_per_sec": 0, 00:17:51.064 "r_mbytes_per_sec": 0, 00:17:51.064 "w_mbytes_per_sec": 0 00:17:51.064 }, 00:17:51.064 "claimed": true, 00:17:51.064 "claim_type": "exclusive_write", 00:17:51.064 "zoned": false, 00:17:51.064 "supported_io_types": { 00:17:51.064 "read": true, 00:17:51.064 "write": true, 00:17:51.064 "unmap": true, 00:17:51.064 "write_zeroes": true, 00:17:51.064 "flush": true, 00:17:51.064 "reset": true, 00:17:51.064 "compare": false, 00:17:51.064 "compare_and_write": false, 00:17:51.064 "abort": true, 00:17:51.064 "nvme_admin": false, 00:17:51.064 "nvme_io": false 00:17:51.064 }, 00:17:51.064 "memory_domains": [ 00:17:51.064 { 00:17:51.064 "dma_device_id": "system", 00:17:51.064 "dma_device_type": 1 00:17:51.064 }, 00:17:51.064 { 00:17:51.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.064 "dma_device_type": 2 00:17:51.064 } 00:17:51.064 ], 00:17:51.064 "driver_specific": {} 00:17:51.064 }' 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:51.064 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:51.322 [2024-05-14 21:59:51.825659] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # local expected_state 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 
00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.322 21:59:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.580 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.580 "name": "Existed_Raid", 00:17:51.580 "uuid": "484055cd-123d-11ef-8c90-4585f0cfab08", 00:17:51.580 "strip_size_kb": 0, 00:17:51.580 "state": "online", 00:17:51.580 "raid_level": "raid1", 00:17:51.580 "superblock": true, 00:17:51.580 "num_base_bdevs": 2, 00:17:51.580 "num_base_bdevs_discovered": 1, 00:17:51.580 "num_base_bdevs_operational": 1, 00:17:51.580 "base_bdevs_list": [ 00:17:51.580 { 00:17:51.580 "name": null, 00:17:51.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.580 "is_configured": false, 00:17:51.580 "data_offset": 256, 00:17:51.580 "data_size": 7936 00:17:51.580 }, 00:17:51.580 { 00:17:51.580 "name": "BaseBdev2", 00:17:51.580 "uuid": "48cc1d03-123d-11ef-8c90-4585f0cfab08", 00:17:51.580 "is_configured": true, 00:17:51.580 "data_offset": 256, 00:17:51.580 "data_size": 7936 00:17:51.580 } 00:17:51.580 ] 00:17:51.580 }' 00:17:51.580 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.580 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.838 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:51.838 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.838 21:59:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.838 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:52.095 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:52.095 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:52.095 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:52.352 [2024-05-14 21:59:52.926574] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.353 [2024-05-14 21:59:52.926638] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.353 [2024-05-14 21:59:52.935496] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.353 [2024-05-14 21:59:52.935556] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.353 [2024-05-14 21:59:52.935562] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829b66300 name Existed_Raid, state offline 00:17:52.611 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:52.611 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:52.611 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.611 21:59:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@342 -- # killprocess 64801 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 64801 ']' 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 64801 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 64801 00:17:52.611 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdev_svc = 
sudo ']' 00:17:52.869 killing process with pid 64801 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64801' 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 64801 00:17:52.869 [2024-05-14 21:59:53.202181] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 64801 00:17:52.869 [2024-05-14 21:59:53.202237] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@344 -- # return 0 00:17:52.869 00:17:52.869 real 0m9.251s 00:17:52.869 user 0m15.967s 00:17:52.869 sys 0m1.692s 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:52.869 ************************************ 00:17:52.869 21:59:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.869 END TEST raid_state_function_test_sb_md_interleaved 00:17:52.869 ************************************ 00:17:53.127 21:59:53 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:53.127 21:59:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:53.127 21:59:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:53.127 21:59:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.127 ************************************ 00:17:53.127 START TEST raid_superblock_test_md_interleaved 00:17:53.127 ************************************ 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:53.127 21:59:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=65075 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 65075 /var/tmp/spdk-raid.sock 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 65075 ']' 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:53.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:53.127 21:59:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.127 [2024-05-14 21:59:53.511956] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:53.127 [2024-05-14 21:59:53.512137] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:53.697 EAL: TSC is not safe to use in SMP mode 00:17:53.697 EAL: TSC is not invariant 00:17:53.697 [2024-05-14 21:59:54.056400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.697 [2024-05-14 21:59:54.169007] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:53.697 [2024-05-14 21:59:54.171613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.697 [2024-05-14 21:59:54.172498] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.697 [2024-05-14 21:59:54.172513] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:54.264 malloc1 00:17:54.264 21:59:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:54.828 [2024-05-14 21:59:55.112509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:54.828 [2024-05-14 21:59:55.112624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.828 [2024-05-14 21:59:55.113340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7a6780 00:17:54.828 [2024-05-14 21:59:55.113373] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.828 [2024-05-14 21:59:55.114355] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.828 [2024-05-14 21:59:55.114383] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:54.828 pt1 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.828 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:55.086 malloc2 00:17:55.086 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:55.344 [2024-05-14 21:59:55.744536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:55.344 [2024-05-14 21:59:55.744651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.344 [2024-05-14 21:59:55.744691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7a6c80 00:17:55.344 [2024-05-14 21:59:55.744701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.344 [2024-05-14 21:59:55.745499] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.344 [2024-05-14 21:59:55.745529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:55.344 pt2 00:17:55.344 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:55.344 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:55.344 21:59:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:55.602 [2024-05-14 21:59:56.024534] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.602 [2024-05-14 21:59:56.025316] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.602 [2024-05-14 21:59:56.025399] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7ab300 00:17:55.602 [2024-05-14 21:59:56.025408] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:55.602 [2024-05-14 21:59:56.025456] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c809e20 00:17:55.602 [2024-05-14 21:59:56.025478] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7ab300 00:17:55.602 [2024-05-14 21:59:56.025482] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c7ab300 00:17:55.602 [2024-05-14 21:59:56.025498] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.602 21:59:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.602 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.859 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.859 "name": "raid_bdev1", 00:17:55.859 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:17:55.859 "strip_size_kb": 0, 00:17:55.859 "state": "online", 00:17:55.859 "raid_level": "raid1", 00:17:55.859 "superblock": true, 00:17:55.859 "num_base_bdevs": 2, 00:17:55.859 "num_base_bdevs_discovered": 2, 00:17:55.859 "num_base_bdevs_operational": 2, 00:17:55.859 "base_bdevs_list": [ 00:17:55.859 { 00:17:55.859 "name": "pt1", 00:17:55.859 "uuid": "48eaa229-1d63-645d-a885-40ef29c27c16", 00:17:55.859 "is_configured": true, 00:17:55.859 "data_offset": 256, 00:17:55.859 "data_size": 7936 00:17:55.859 }, 00:17:55.859 { 00:17:55.859 "name": "pt2", 00:17:55.859 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:17:55.859 "is_configured": true, 00:17:55.859 "data_offset": 256, 00:17:55.859 "data_size": 7936 00:17:55.859 } 00:17:55.859 ] 00:17:55.859 }' 00:17:55.859 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.859 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:56.116 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:56.374 [2024-05-14 21:59:56.920614] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.374 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:56.374 "name": 
"raid_bdev1", 00:17:56.374 "aliases": [ 00:17:56.374 "4ca953c5-123d-11ef-8c90-4585f0cfab08" 00:17:56.374 ], 00:17:56.374 "product_name": "Raid Volume", 00:17:56.374 "block_size": 4128, 00:17:56.374 "num_blocks": 7936, 00:17:56.374 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:17:56.374 "md_size": 32, 00:17:56.374 "md_interleave": true, 00:17:56.374 "dif_type": 0, 00:17:56.374 "assigned_rate_limits": { 00:17:56.374 "rw_ios_per_sec": 0, 00:17:56.374 "rw_mbytes_per_sec": 0, 00:17:56.374 "r_mbytes_per_sec": 0, 00:17:56.374 "w_mbytes_per_sec": 0 00:17:56.374 }, 00:17:56.374 "claimed": false, 00:17:56.374 "zoned": false, 00:17:56.374 "supported_io_types": { 00:17:56.374 "read": true, 00:17:56.374 "write": true, 00:17:56.374 "unmap": false, 00:17:56.374 "write_zeroes": true, 00:17:56.374 "flush": false, 00:17:56.374 "reset": true, 00:17:56.374 "compare": false, 00:17:56.374 "compare_and_write": false, 00:17:56.374 "abort": false, 00:17:56.374 "nvme_admin": false, 00:17:56.374 "nvme_io": false 00:17:56.374 }, 00:17:56.374 "memory_domains": [ 00:17:56.374 { 00:17:56.374 "dma_device_id": "system", 00:17:56.374 "dma_device_type": 1 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.374 "dma_device_type": 2 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "dma_device_id": "system", 00:17:56.374 "dma_device_type": 1 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.374 "dma_device_type": 2 00:17:56.374 } 00:17:56.374 ], 00:17:56.374 "driver_specific": { 00:17:56.374 "raid": { 00:17:56.374 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:17:56.374 "strip_size_kb": 0, 00:17:56.374 "state": "online", 00:17:56.374 "raid_level": "raid1", 00:17:56.374 "superblock": true, 00:17:56.374 "num_base_bdevs": 2, 00:17:56.374 "num_base_bdevs_discovered": 2, 00:17:56.374 "num_base_bdevs_operational": 2, 00:17:56.374 "base_bdevs_list": [ 00:17:56.374 { 00:17:56.374 "name": "pt1", 00:17:56.374 "uuid": "48eaa229-1d63-645d-a885-40ef29c27c16", 00:17:56.374 "is_configured": true, 00:17:56.374 "data_offset": 256, 00:17:56.374 "data_size": 7936 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "name": "pt2", 00:17:56.374 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:17:56.374 "is_configured": true, 00:17:56.374 "data_offset": 256, 00:17:56.374 "data_size": 7936 00:17:56.374 } 00:17:56.374 ] 00:17:56.374 } 00:17:56.374 } 00:17:56.374 }' 00:17:56.374 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:56.374 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:56.374 pt2' 00:17:56.374 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:56.374 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:56.374 21:59:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:56.632 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:56.632 "name": "pt1", 00:17:56.632 "aliases": [ 00:17:56.632 "48eaa229-1d63-645d-a885-40ef29c27c16" 00:17:56.632 ], 00:17:56.632 "product_name": "passthru", 00:17:56.632 "block_size": 4128, 00:17:56.632 "num_blocks": 8192, 00:17:56.632 "uuid": 
"48eaa229-1d63-645d-a885-40ef29c27c16", 00:17:56.632 "md_size": 32, 00:17:56.632 "md_interleave": true, 00:17:56.632 "dif_type": 0, 00:17:56.632 "assigned_rate_limits": { 00:17:56.632 "rw_ios_per_sec": 0, 00:17:56.632 "rw_mbytes_per_sec": 0, 00:17:56.632 "r_mbytes_per_sec": 0, 00:17:56.632 "w_mbytes_per_sec": 0 00:17:56.632 }, 00:17:56.632 "claimed": true, 00:17:56.632 "claim_type": "exclusive_write", 00:17:56.632 "zoned": false, 00:17:56.632 "supported_io_types": { 00:17:56.632 "read": true, 00:17:56.632 "write": true, 00:17:56.632 "unmap": true, 00:17:56.632 "write_zeroes": true, 00:17:56.632 "flush": true, 00:17:56.632 "reset": true, 00:17:56.632 "compare": false, 00:17:56.632 "compare_and_write": false, 00:17:56.632 "abort": true, 00:17:56.632 "nvme_admin": false, 00:17:56.632 "nvme_io": false 00:17:56.632 }, 00:17:56.632 "memory_domains": [ 00:17:56.632 { 00:17:56.632 "dma_device_id": "system", 00:17:56.632 "dma_device_type": 1 00:17:56.632 }, 00:17:56.632 { 00:17:56.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.632 "dma_device_type": 2 00:17:56.632 } 00:17:56.632 ], 00:17:56.632 "driver_specific": { 00:17:56.632 "passthru": { 00:17:56.632 "name": "pt1", 00:17:56.632 "base_bdev_name": "malloc1" 00:17:56.632 } 00:17:56.632 } 00:17:56.632 }' 00:17:56.632 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:56.632 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:56.890 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:57.148 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:57.148 "name": "pt2", 00:17:57.148 "aliases": [ 00:17:57.148 "4624930a-4df7-8058-bbfd-89df2dfec82f" 00:17:57.148 ], 00:17:57.148 "product_name": "passthru", 00:17:57.148 "block_size": 4128, 00:17:57.148 "num_blocks": 8192, 00:17:57.148 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:17:57.148 "md_size": 32, 00:17:57.148 "md_interleave": true, 00:17:57.148 
"dif_type": 0, 00:17:57.148 "assigned_rate_limits": { 00:17:57.148 "rw_ios_per_sec": 0, 00:17:57.148 "rw_mbytes_per_sec": 0, 00:17:57.148 "r_mbytes_per_sec": 0, 00:17:57.148 "w_mbytes_per_sec": 0 00:17:57.148 }, 00:17:57.148 "claimed": true, 00:17:57.148 "claim_type": "exclusive_write", 00:17:57.148 "zoned": false, 00:17:57.148 "supported_io_types": { 00:17:57.148 "read": true, 00:17:57.148 "write": true, 00:17:57.148 "unmap": true, 00:17:57.148 "write_zeroes": true, 00:17:57.148 "flush": true, 00:17:57.148 "reset": true, 00:17:57.148 "compare": false, 00:17:57.148 "compare_and_write": false, 00:17:57.149 "abort": true, 00:17:57.149 "nvme_admin": false, 00:17:57.149 "nvme_io": false 00:17:57.149 }, 00:17:57.149 "memory_domains": [ 00:17:57.149 { 00:17:57.149 "dma_device_id": "system", 00:17:57.149 "dma_device_type": 1 00:17:57.149 }, 00:17:57.149 { 00:17:57.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.149 "dma_device_type": 2 00:17:57.149 } 00:17:57.149 ], 00:17:57.149 "driver_specific": { 00:17:57.149 "passthru": { 00:17:57.149 "name": "pt2", 00:17:57.149 "base_bdev_name": "malloc2" 00:17:57.149 } 00:17:57.149 } 00:17:57.149 }' 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:57.149 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:57.407 [2024-05-14 21:59:57.828640] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.407 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4ca953c5-123d-11ef-8c90-4585f0cfab08 00:17:57.407 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 4ca953c5-123d-11ef-8c90-4585f0cfab08 ']' 00:17:57.407 21:59:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:57.665 [2024-05-14 21:59:58.096600] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:17:57.666 [2024-05-14 21:59:58.096627] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.666 [2024-05-14 21:59:58.096652] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.666 [2024-05-14 21:59:58.096668] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.666 [2024-05-14 21:59:58.096673] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7ab300 name raid_bdev1, state offline 00:17:57.666 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.666 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:57.922 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:57.922 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:57.922 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.922 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:58.179 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.179 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:58.436 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:58.436 21:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:58.693 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:58.951 [2024-05-14 21:59:59.448700] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:58.951 [2024-05-14 21:59:59.449309] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:58.951 [2024-05-14 21:59:59.449343] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:58.951 [2024-05-14 21:59:59.449386] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:58.951 [2024-05-14 21:59:59.449398] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.951 [2024-05-14 21:59:59.449403] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7ab300 name raid_bdev1, state configuring 00:17:58.951 request: 00:17:58.951 { 00:17:58.951 "name": "raid_bdev1", 00:17:58.951 "raid_level": "raid1", 00:17:58.951 "base_bdevs": [ 00:17:58.951 "malloc1", 00:17:58.951 "malloc2" 00:17:58.951 ], 00:17:58.951 "superblock": false, 00:17:58.951 "method": "bdev_raid_create", 00:17:58.951 "req_id": 1 00:17:58.951 } 00:17:58.951 Got JSON-RPC error response 00:17:58.951 response: 00:17:58.951 { 00:17:58.951 "code": -17, 00:17:58.951 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:58.951 } 00:17:58.951 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:17:58.951 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:58.951 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:58.951 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:58.951 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.951 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:59.208 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:59.208 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:59.208 21:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.466 [2024-05-14 22:00:00.004702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.466 [2024-05-14 22:00:00.004778] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.466 [2024-05-14 22:00:00.004835] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7a6c80 00:17:59.466 [2024-05-14 22:00:00.004844] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.466 [2024-05-14 22:00:00.005456] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.466 [2024-05-14 22:00:00.005486] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.466 [2024-05-14 22:00:00.005507] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:59.466 [2024-05-14 22:00:00.005529] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.466 pt1 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.466 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.724 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.724 "name": "raid_bdev1", 00:17:59.724 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:17:59.724 "strip_size_kb": 0, 00:17:59.724 "state": "configuring", 00:17:59.724 "raid_level": "raid1", 00:17:59.724 "superblock": true, 00:17:59.724 "num_base_bdevs": 2, 00:17:59.724 "num_base_bdevs_discovered": 1, 00:17:59.724 "num_base_bdevs_operational": 2, 00:17:59.724 "base_bdevs_list": [ 00:17:59.724 { 00:17:59.724 "name": "pt1", 00:17:59.724 "uuid": "48eaa229-1d63-645d-a885-40ef29c27c16", 00:17:59.724 "is_configured": true, 00:17:59.724 "data_offset": 256, 00:17:59.724 "data_size": 7936 00:17:59.724 }, 00:17:59.724 { 00:17:59.724 "name": null, 00:17:59.724 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:17:59.724 "is_configured": false, 00:17:59.724 "data_offset": 256, 00:17:59.724 "data_size": 7936 00:17:59.724 } 00:17:59.724 ] 00:17:59.724 }' 00:17:59.724 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.724 22:00:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.290 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:00.290 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:00.291 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.291 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.291 [2024-05-14 22:00:00.868751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.291 [2024-05-14 22:00:00.868818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.291 [2024-05-14 22:00:00.868847] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7a6f00 00:18:00.291 [2024-05-14 22:00:00.868856] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.291 [2024-05-14 22:00:00.868916] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.291 [2024-05-14 22:00:00.868928] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.291 [2024-05-14 22:00:00.868947] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:00.291 [2024-05-14 22:00:00.868956] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.291 [2024-05-14 22:00:00.868980] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7ab300 00:18:00.291 [2024-05-14 22:00:00.868984] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:00.291 [2024-05-14 22:00:00.869004] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c809e20 00:18:00.291 [2024-05-14 22:00:00.869017] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7ab300 00:18:00.291 [2024-05-14 22:00:00.869021] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c7ab300 00:18:00.291 [2024-05-14 22:00:00.869034] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.291 pt2 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.549 22:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.549 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.549 "name": "raid_bdev1", 00:18:00.549 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:18:00.549 "strip_size_kb": 0, 00:18:00.549 "state": "online", 00:18:00.549 "raid_level": "raid1", 00:18:00.549 "superblock": true, 00:18:00.549 "num_base_bdevs": 2, 00:18:00.549 "num_base_bdevs_discovered": 2, 00:18:00.549 "num_base_bdevs_operational": 2, 00:18:00.549 "base_bdevs_list": [ 00:18:00.549 { 00:18:00.549 "name": "pt1", 00:18:00.549 "uuid": "48eaa229-1d63-645d-a885-40ef29c27c16", 00:18:00.549 "is_configured": true, 00:18:00.549 "data_offset": 256, 00:18:00.549 "data_size": 7936 00:18:00.549 }, 00:18:00.549 { 00:18:00.549 "name": "pt2", 00:18:00.549 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:18:00.549 "is_configured": true, 00:18:00.549 "data_offset": 256, 00:18:00.549 "data_size": 7936 00:18:00.549 } 00:18:00.549 ] 00:18:00.549 }' 00:18:00.549 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.549 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.116 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.116 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:18:01.116 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:01.116 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:01.116 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:01.116 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:18:01.117 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.117 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:01.374 [2024-05-14 22:00:01.736815] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.374 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:01.374 "name": "raid_bdev1", 00:18:01.374 "aliases": [ 00:18:01.374 "4ca953c5-123d-11ef-8c90-4585f0cfab08" 00:18:01.374 ], 00:18:01.374 "product_name": "Raid Volume", 00:18:01.374 "block_size": 4128, 00:18:01.374 "num_blocks": 7936, 00:18:01.374 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:18:01.374 "md_size": 32, 00:18:01.374 "md_interleave": true, 
00:18:01.374 "dif_type": 0, 00:18:01.374 "assigned_rate_limits": { 00:18:01.374 "rw_ios_per_sec": 0, 00:18:01.374 "rw_mbytes_per_sec": 0, 00:18:01.374 "r_mbytes_per_sec": 0, 00:18:01.374 "w_mbytes_per_sec": 0 00:18:01.374 }, 00:18:01.374 "claimed": false, 00:18:01.374 "zoned": false, 00:18:01.374 "supported_io_types": { 00:18:01.374 "read": true, 00:18:01.374 "write": true, 00:18:01.374 "unmap": false, 00:18:01.374 "write_zeroes": true, 00:18:01.374 "flush": false, 00:18:01.374 "reset": true, 00:18:01.374 "compare": false, 00:18:01.374 "compare_and_write": false, 00:18:01.374 "abort": false, 00:18:01.374 "nvme_admin": false, 00:18:01.374 "nvme_io": false 00:18:01.374 }, 00:18:01.374 "memory_domains": [ 00:18:01.374 { 00:18:01.374 "dma_device_id": "system", 00:18:01.374 "dma_device_type": 1 00:18:01.374 }, 00:18:01.374 { 00:18:01.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.374 "dma_device_type": 2 00:18:01.374 }, 00:18:01.374 { 00:18:01.374 "dma_device_id": "system", 00:18:01.374 "dma_device_type": 1 00:18:01.374 }, 00:18:01.374 { 00:18:01.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.374 "dma_device_type": 2 00:18:01.374 } 00:18:01.374 ], 00:18:01.374 "driver_specific": { 00:18:01.374 "raid": { 00:18:01.374 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:18:01.374 "strip_size_kb": 0, 00:18:01.374 "state": "online", 00:18:01.374 "raid_level": "raid1", 00:18:01.374 "superblock": true, 00:18:01.374 "num_base_bdevs": 2, 00:18:01.374 "num_base_bdevs_discovered": 2, 00:18:01.374 "num_base_bdevs_operational": 2, 00:18:01.374 "base_bdevs_list": [ 00:18:01.374 { 00:18:01.374 "name": "pt1", 00:18:01.374 "uuid": "48eaa229-1d63-645d-a885-40ef29c27c16", 00:18:01.374 "is_configured": true, 00:18:01.374 "data_offset": 256, 00:18:01.374 "data_size": 7936 00:18:01.374 }, 00:18:01.374 { 00:18:01.374 "name": "pt2", 00:18:01.374 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:18:01.374 "is_configured": true, 00:18:01.374 "data_offset": 256, 00:18:01.374 "data_size": 7936 00:18:01.374 } 00:18:01.374 ] 00:18:01.374 } 00:18:01.374 } 00:18:01.374 }' 00:18:01.374 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.374 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:18:01.374 pt2' 00:18:01.374 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:01.374 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:01.374 22:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:01.632 "name": "pt1", 00:18:01.632 "aliases": [ 00:18:01.632 "48eaa229-1d63-645d-a885-40ef29c27c16" 00:18:01.632 ], 00:18:01.632 "product_name": "passthru", 00:18:01.632 "block_size": 4128, 00:18:01.632 "num_blocks": 8192, 00:18:01.632 "uuid": "48eaa229-1d63-645d-a885-40ef29c27c16", 00:18:01.632 "md_size": 32, 00:18:01.632 "md_interleave": true, 00:18:01.632 "dif_type": 0, 00:18:01.632 "assigned_rate_limits": { 00:18:01.632 "rw_ios_per_sec": 0, 00:18:01.632 "rw_mbytes_per_sec": 0, 00:18:01.632 "r_mbytes_per_sec": 0, 00:18:01.632 "w_mbytes_per_sec": 0 00:18:01.632 }, 00:18:01.632 
"claimed": true, 00:18:01.632 "claim_type": "exclusive_write", 00:18:01.632 "zoned": false, 00:18:01.632 "supported_io_types": { 00:18:01.632 "read": true, 00:18:01.632 "write": true, 00:18:01.632 "unmap": true, 00:18:01.632 "write_zeroes": true, 00:18:01.632 "flush": true, 00:18:01.632 "reset": true, 00:18:01.632 "compare": false, 00:18:01.632 "compare_and_write": false, 00:18:01.632 "abort": true, 00:18:01.632 "nvme_admin": false, 00:18:01.632 "nvme_io": false 00:18:01.632 }, 00:18:01.632 "memory_domains": [ 00:18:01.632 { 00:18:01.632 "dma_device_id": "system", 00:18:01.632 "dma_device_type": 1 00:18:01.632 }, 00:18:01.632 { 00:18:01.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.632 "dma_device_type": 2 00:18:01.632 } 00:18:01.632 ], 00:18:01.632 "driver_specific": { 00:18:01.632 "passthru": { 00:18:01.632 "name": "pt1", 00:18:01.632 "base_bdev_name": "malloc1" 00:18:01.632 } 00:18:01.632 } 00:18:01.632 }' 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:01.632 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:01.889 "name": "pt2", 00:18:01.889 "aliases": [ 00:18:01.889 "4624930a-4df7-8058-bbfd-89df2dfec82f" 00:18:01.889 ], 00:18:01.889 "product_name": "passthru", 00:18:01.889 "block_size": 4128, 00:18:01.889 "num_blocks": 8192, 00:18:01.889 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:18:01.889 "md_size": 32, 00:18:01.889 "md_interleave": true, 00:18:01.889 "dif_type": 0, 00:18:01.889 "assigned_rate_limits": { 00:18:01.889 "rw_ios_per_sec": 0, 00:18:01.889 "rw_mbytes_per_sec": 0, 00:18:01.889 "r_mbytes_per_sec": 0, 00:18:01.889 "w_mbytes_per_sec": 0 00:18:01.889 }, 00:18:01.889 "claimed": true, 00:18:01.889 "claim_type": "exclusive_write", 00:18:01.889 "zoned": false, 00:18:01.889 "supported_io_types": 
{ 00:18:01.889 "read": true, 00:18:01.889 "write": true, 00:18:01.889 "unmap": true, 00:18:01.889 "write_zeroes": true, 00:18:01.889 "flush": true, 00:18:01.889 "reset": true, 00:18:01.889 "compare": false, 00:18:01.889 "compare_and_write": false, 00:18:01.889 "abort": true, 00:18:01.889 "nvme_admin": false, 00:18:01.889 "nvme_io": false 00:18:01.889 }, 00:18:01.889 "memory_domains": [ 00:18:01.889 { 00:18:01.889 "dma_device_id": "system", 00:18:01.889 "dma_device_type": 1 00:18:01.889 }, 00:18:01.889 { 00:18:01.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.889 "dma_device_type": 2 00:18:01.889 } 00:18:01.889 ], 00:18:01.889 "driver_specific": { 00:18:01.889 "passthru": { 00:18:01.889 "name": "pt2", 00:18:01.889 "base_bdev_name": "malloc2" 00:18:01.889 } 00:18:01.889 } 00:18:01.889 }' 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.889 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:02.144 [2024-05-14 22:00:02.640822] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.144 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 4ca953c5-123d-11ef-8c90-4585f0cfab08 '!=' 4ca953c5-123d-11ef-8c90-4585f0cfab08 ']' 00:18:02.144 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:02.144 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:02.144 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:18:02.144 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:02.399 [2024-05-14 22:00:02.916800] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.400 22:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.657 22:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.657 "name": "raid_bdev1", 00:18:02.657 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:18:02.657 "strip_size_kb": 0, 00:18:02.657 "state": "online", 00:18:02.657 "raid_level": "raid1", 00:18:02.657 "superblock": true, 00:18:02.657 "num_base_bdevs": 2, 00:18:02.657 "num_base_bdevs_discovered": 1, 00:18:02.657 "num_base_bdevs_operational": 1, 00:18:02.657 "base_bdevs_list": [ 00:18:02.657 { 00:18:02.657 "name": null, 00:18:02.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.657 "is_configured": false, 00:18:02.657 "data_offset": 256, 00:18:02.657 "data_size": 7936 00:18:02.657 }, 00:18:02.657 { 00:18:02.657 "name": "pt2", 00:18:02.657 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:18:02.657 "is_configured": true, 00:18:02.657 "data_offset": 256, 00:18:02.657 "data_size": 7936 00:18:02.657 } 00:18:02.657 ] 00:18:02.657 }' 00:18:02.657 22:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.657 22:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.222 22:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:03.480 [2024-05-14 22:00:03.860801] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.480 [2024-05-14 22:00:03.860829] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.480 [2024-05-14 22:00:03.860854] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.480 [2024-05-14 22:00:03.860867] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.480 [2024-05-14 22:00:03.860872] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7ab300 name raid_bdev1, state offline 00:18:03.480 22:00:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.480 22:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:03.738 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:03.738 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:03.738 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:03.738 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.738 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:04.026 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:04.026 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:04.026 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:04.026 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:04.026 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:04.026 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.284 [2024-05-14 22:00:04.696843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.284 [2024-05-14 22:00:04.696895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.284 [2024-05-14 22:00:04.696923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7a6f00 00:18:04.284 [2024-05-14 22:00:04.696932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.284 [2024-05-14 22:00:04.697569] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.284 [2024-05-14 22:00:04.697601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.284 [2024-05-14 22:00:04.697622] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:04.284 [2024-05-14 22:00:04.697635] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.284 [2024-05-14 22:00:04.697655] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7ab300 00:18:04.284 [2024-05-14 22:00:04.697659] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:04.284 [2024-05-14 22:00:04.697682] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c809e20 00:18:04.284 [2024-05-14 22:00:04.697695] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7ab300 00:18:04.284 [2024-05-14 22:00:04.697698] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c7ab300 00:18:04.284 [2024-05-14 22:00:04.697711] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.284 pt2 00:18:04.284 22:00:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.284 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.543 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.543 "name": "raid_bdev1", 00:18:04.543 "uuid": "4ca953c5-123d-11ef-8c90-4585f0cfab08", 00:18:04.543 "strip_size_kb": 0, 00:18:04.543 "state": "online", 00:18:04.543 "raid_level": "raid1", 00:18:04.543 "superblock": true, 00:18:04.543 "num_base_bdevs": 2, 00:18:04.543 "num_base_bdevs_discovered": 1, 00:18:04.543 "num_base_bdevs_operational": 1, 00:18:04.543 "base_bdevs_list": [ 00:18:04.543 { 00:18:04.543 "name": null, 00:18:04.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.543 "is_configured": false, 00:18:04.543 "data_offset": 256, 00:18:04.543 "data_size": 7936 00:18:04.543 }, 00:18:04.543 { 00:18:04.543 "name": "pt2", 00:18:04.543 "uuid": "4624930a-4df7-8058-bbfd-89df2dfec82f", 00:18:04.543 "is_configured": true, 00:18:04.543 "data_offset": 256, 00:18:04.543 "data_size": 7936 00:18:04.543 } 00:18:04.543 ] 00:18:04.543 }' 00:18:04.543 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.543 22:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.801 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:18:04.801 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:04.801 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:18:05.060 [2024-05-14 22:00:05.560903] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # '[' 4ca953c5-123d-11ef-8c90-4585f0cfab08 '!=' 4ca953c5-123d-11ef-8c90-4585f0cfab08 ']' 00:18:05.060 22:00:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@568 -- # killprocess 65075 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 65075 ']' 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 65075 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 65075 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:18:05.060 killing process with pid 65075 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65075' 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 65075 00:18:05.060 [2024-05-14 22:00:05.587225] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.060 [2024-05-14 22:00:05.587251] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.060 [2024-05-14 22:00:05.587263] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.060 [2024-05-14 22:00:05.587268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7ab300 name raid_bdev1, state offline 00:18:05.060 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 65075 00:18:05.060 [2024-05-14 22:00:05.599164] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.318 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # return 0 00:18:05.318 00:18:05.318 real 0m12.281s 00:18:05.318 user 0m21.923s 00:18:05.318 sys 0m1.866s 00:18:05.318 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:05.318 ************************************ 00:18:05.319 END TEST raid_superblock_test_md_interleaved 00:18:05.319 ************************************ 00:18:05.319 22:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.319 22:00:05 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:05.319 22:00:05 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:05.319 22:00:05 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:05.319 22:00:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.319 ************************************ 00:18:05.319 START TEST raid_rebuild_test_sb_md_interleaved 00:18:05.319 ************************************ 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 
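The run_test entry above launches raid_rebuild_test with the positional arguments raid1 2 true false false, which the traced function binds to the locals printed in the surrounding entries. A minimal bash sketch of that binding, offered purely as an illustration (the function name below is invented for the sketch; the authoritative logic is the raid_rebuild_test function traced here from test/bdev/bdev_raid.sh):

    # Illustrative only: mirrors the argument binding visible in the trace.
    raid_rebuild_test_sketch() {
        local raid_level=$1        # raid1
        local num_base_bdevs=$2    # 2
        local superblock=$3        # true, later appends ' -s' to create_arg
        local background_io=$4     # false
        local verify=$5            # false
        echo "level=$raid_level bdevs=$num_base_bdevs sb=$superblock bg_io=$background_io verify=$verify"
    }
    raid_rebuild_test_sketch raid1 2 true false false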
00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local verify=false 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # echo BaseBdev1 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # echo BaseBdev2 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # local strip_size 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@582 -- # local create_arg 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local data_offset 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # raid_pid=65424 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # waitforlisten 65424 /var/tmp/spdk-raid.sock 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 65424 ']' 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:05.319 22:00:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:05.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:05.319 22:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.319 [2024-05-14 22:00:05.840013] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:05.319 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:05.319 Zero copy mechanism will not be used. 00:18:05.319 [2024-05-14 22:00:05.840215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:05.885 EAL: TSC is not safe to use in SMP mode 00:18:05.885 EAL: TSC is not invariant 00:18:05.885 [2024-05-14 22:00:06.373883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.885 [2024-05-14 22:00:06.462809] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:05.885 [2024-05-14 22:00:06.465068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.885 [2024-05-14 22:00:06.465863] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.885 [2024-05-14 22:00:06.465880] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.451 22:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:06.451 22:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:18:06.451 22:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:18:06.451 22:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:06.709 BaseBdev1_malloc 00:18:06.709 22:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.967 [2024-05-14 22:00:07.470496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.967 [2024-05-14 22:00:07.470559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.967 [2024-05-14 22:00:07.471221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c5780 00:18:06.967 [2024-05-14 22:00:07.471268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.967 [2024-05-14 22:00:07.472101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.967 [2024-05-14 22:00:07.472131] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.967 BaseBdev1 00:18:06.967 22:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:18:06.967 22:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@607 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:07.224 BaseBdev2_malloc 00:18:07.224 22:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:07.482 [2024-05-14 22:00:07.942519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:07.482 [2024-05-14 22:00:07.942585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.482 [2024-05-14 22:00:07.942614] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c5c80 00:18:07.482 [2024-05-14 22:00:07.942623] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.482 [2024-05-14 22:00:07.943234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.482 [2024-05-14 22:00:07.943265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.482 BaseBdev2 00:18:07.482 22:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:07.741 spare_malloc 00:18:07.741 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:07.999 spare_delay 00:18:07.999 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@614 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:08.261 [2024-05-14 22:00:08.718520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.261 [2024-05-14 22:00:08.718586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.261 [2024-05-14 22:00:08.718614] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c6400 00:18:08.261 [2024-05-14 22:00:08.718622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.261 [2024-05-14 22:00:08.719236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.261 [2024-05-14 22:00:08.719266] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.261 spare 00:18:08.261 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:08.524 [2024-05-14 22:00:08.946543] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.524 [2024-05-14 22:00:08.947153] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.524 [2024-05-14 22:00:08.947240] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c2ca300 00:18:08.524 [2024-05-14 22:00:08.947247] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:08.524 [2024-05-14 22:00:08.947280] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c328e20 00:18:08.524 [2024-05-14 
22:00:08.947301] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c2ca300 00:18:08.524 [2024-05-14 22:00:08.947307] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c2ca300 00:18:08.524 [2024-05-14 22:00:08.947329] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.524 22:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.824 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.824 "name": "raid_bdev1", 00:18:08.824 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:08.824 "strip_size_kb": 0, 00:18:08.824 "state": "online", 00:18:08.824 "raid_level": "raid1", 00:18:08.824 "superblock": true, 00:18:08.824 "num_base_bdevs": 2, 00:18:08.824 "num_base_bdevs_discovered": 2, 00:18:08.824 "num_base_bdevs_operational": 2, 00:18:08.824 "base_bdevs_list": [ 00:18:08.824 { 00:18:08.824 "name": "BaseBdev1", 00:18:08.824 "uuid": "41bb5f73-3219-c757-a060-a1dff8a4dab0", 00:18:08.824 "is_configured": true, 00:18:08.824 "data_offset": 256, 00:18:08.824 "data_size": 7936 00:18:08.824 }, 00:18:08.824 { 00:18:08.824 "name": "BaseBdev2", 00:18:08.824 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:08.824 "is_configured": true, 00:18:08.824 "data_offset": 256, 00:18:08.824 "data_size": 7936 00:18:08.824 } 00:18:08.824 ] 00:18:08.824 }' 00:18:08.824 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.824 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.110 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:09.110 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:18:09.376 [2024-05-14 22:00:09.742577] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.376 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=7936 00:18:09.376 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:09.376 22:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.643 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # data_offset=256 00:18:09.643 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:18:09.643 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@629 -- # '[' false = true ']' 00:18:09.643 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:09.643 [2024-05-14 22:00:10.218544] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.908 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.909 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.909 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.176 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.176 "name": "raid_bdev1", 00:18:10.176 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:10.176 "strip_size_kb": 0, 00:18:10.176 "state": "online", 00:18:10.176 "raid_level": "raid1", 00:18:10.176 "superblock": true, 00:18:10.176 "num_base_bdevs": 2, 00:18:10.176 "num_base_bdevs_discovered": 1, 00:18:10.176 "num_base_bdevs_operational": 1, 00:18:10.176 "base_bdevs_list": [ 00:18:10.176 { 00:18:10.176 "name": null, 00:18:10.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.176 "is_configured": false, 00:18:10.176 "data_offset": 256, 00:18:10.176 "data_size": 7936 00:18:10.176 }, 00:18:10.176 { 00:18:10.176 
"name": "BaseBdev2", 00:18:10.177 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:10.177 "is_configured": true, 00:18:10.177 "data_offset": 256, 00:18:10.177 "data_size": 7936 00:18:10.177 } 00:18:10.177 ] 00:18:10.177 }' 00:18:10.177 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.177 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.457 22:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.457 [2024-05-14 22:00:11.038566] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.457 [2024-05-14 22:00:11.038820] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c328ec0 00:18:10.457 [2024-05-14 22:00:11.039715] bdev_raid.c:2777:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.719 22:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # sleep 1 00:18:11.676 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.676 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:11.676 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:11.676 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:11.676 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:11.676 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.676 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.944 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:11.944 "name": "raid_bdev1", 00:18:11.944 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:11.944 "strip_size_kb": 0, 00:18:11.944 "state": "online", 00:18:11.944 "raid_level": "raid1", 00:18:11.944 "superblock": true, 00:18:11.944 "num_base_bdevs": 2, 00:18:11.944 "num_base_bdevs_discovered": 2, 00:18:11.944 "num_base_bdevs_operational": 2, 00:18:11.944 "process": { 00:18:11.944 "type": "rebuild", 00:18:11.944 "target": "spare", 00:18:11.944 "progress": { 00:18:11.944 "blocks": 3072, 00:18:11.944 "percent": 38 00:18:11.944 } 00:18:11.944 }, 00:18:11.944 "base_bdevs_list": [ 00:18:11.944 { 00:18:11.944 "name": "spare", 00:18:11.944 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:11.944 "is_configured": true, 00:18:11.944 "data_offset": 256, 00:18:11.944 "data_size": 7936 00:18:11.944 }, 00:18:11.944 { 00:18:11.944 "name": "BaseBdev2", 00:18:11.944 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:11.944 "is_configured": true, 00:18:11.944 "data_offset": 256, 00:18:11.944 "data_size": 7936 00:18:11.944 } 00:18:11.944 ] 00:18:11.944 }' 00:18:11.944 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:11.944 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:11.944 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:11.944 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.944 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:12.211 [2024-05-14 22:00:12.607331] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.211 [2024-05-14 22:00:12.647874] bdev_raid.c:2470:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:12.211 [2024-05-14 22:00:12.647945] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.211 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.212 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.212 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.474 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.474 "name": "raid_bdev1", 00:18:12.474 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:12.474 "strip_size_kb": 0, 00:18:12.474 "state": "online", 00:18:12.474 "raid_level": "raid1", 00:18:12.474 "superblock": true, 00:18:12.474 "num_base_bdevs": 2, 00:18:12.474 "num_base_bdevs_discovered": 1, 00:18:12.474 "num_base_bdevs_operational": 1, 00:18:12.474 "base_bdevs_list": [ 00:18:12.474 { 00:18:12.474 "name": null, 00:18:12.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.474 "is_configured": false, 00:18:12.474 "data_offset": 256, 00:18:12.474 "data_size": 7936 00:18:12.474 }, 00:18:12.474 { 00:18:12.474 "name": "BaseBdev2", 00:18:12.474 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:12.474 "is_configured": true, 00:18:12.474 "data_offset": 256, 00:18:12.474 "data_size": 7936 00:18:12.474 } 00:18:12.475 ] 00:18:12.475 }' 00:18:12.475 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:18:12.475 22:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.737 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.737 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:12.737 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:12.737 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:12.737 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:12.737 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.737 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.998 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:12.998 "name": "raid_bdev1", 00:18:12.998 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:12.998 "strip_size_kb": 0, 00:18:12.998 "state": "online", 00:18:12.998 "raid_level": "raid1", 00:18:12.998 "superblock": true, 00:18:12.998 "num_base_bdevs": 2, 00:18:12.998 "num_base_bdevs_discovered": 1, 00:18:12.998 "num_base_bdevs_operational": 1, 00:18:12.998 "base_bdevs_list": [ 00:18:12.998 { 00:18:12.999 "name": null, 00:18:12.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.999 "is_configured": false, 00:18:12.999 "data_offset": 256, 00:18:12.999 "data_size": 7936 00:18:12.999 }, 00:18:12.999 { 00:18:12.999 "name": "BaseBdev2", 00:18:12.999 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:12.999 "is_configured": true, 00:18:12.999 "data_offset": 256, 00:18:12.999 "data_size": 7936 00:18:12.999 } 00:18:12.999 ] 00:18:12.999 }' 00:18:12.999 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:12.999 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:12.999 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:12.999 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:12.999 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@667 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:13.267 [2024-05-14 22:00:13.795440] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.267 [2024-05-14 22:00:13.795682] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c328e20 00:18:13.267 [2024-05-14 22:00:13.796509] bdev_raid.c:2777:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.267 22:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # sleep 1 00:18:14.654 22:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.654 22:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:18:14.654 22:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:14.654 22:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:14.654 22:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:14.654 22:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.654 22:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.654 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:14.654 "name": "raid_bdev1", 00:18:14.654 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:14.654 "strip_size_kb": 0, 00:18:14.654 "state": "online", 00:18:14.654 "raid_level": "raid1", 00:18:14.654 "superblock": true, 00:18:14.654 "num_base_bdevs": 2, 00:18:14.654 "num_base_bdevs_discovered": 2, 00:18:14.654 "num_base_bdevs_operational": 2, 00:18:14.654 "process": { 00:18:14.654 "type": "rebuild", 00:18:14.654 "target": "spare", 00:18:14.654 "progress": { 00:18:14.654 "blocks": 3328, 00:18:14.654 "percent": 41 00:18:14.654 } 00:18:14.654 }, 00:18:14.654 "base_bdevs_list": [ 00:18:14.654 { 00:18:14.654 "name": "spare", 00:18:14.654 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:14.654 "is_configured": true, 00:18:14.654 "data_offset": 256, 00:18:14.654 "data_size": 7936 00:18:14.654 }, 00:18:14.654 { 00:18:14.654 "name": "BaseBdev2", 00:18:14.654 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:14.654 "is_configured": true, 00:18:14.654 "data_offset": 256, 00:18:14.654 "data_size": 7936 00:18:14.654 } 00:18:14.654 ] 00:18:14.654 }' 00:18:14.654 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:14.654 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:18:14.655 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # local timeout=599 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.655 22:00:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.655 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.912 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:14.912 "name": "raid_bdev1", 00:18:14.912 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:14.912 "strip_size_kb": 0, 00:18:14.912 "state": "online", 00:18:14.912 "raid_level": "raid1", 00:18:14.912 "superblock": true, 00:18:14.912 "num_base_bdevs": 2, 00:18:14.912 "num_base_bdevs_discovered": 2, 00:18:14.912 "num_base_bdevs_operational": 2, 00:18:14.912 "process": { 00:18:14.912 "type": "rebuild", 00:18:14.912 "target": "spare", 00:18:14.912 "progress": { 00:18:14.912 "blocks": 3840, 00:18:14.912 "percent": 48 00:18:14.912 } 00:18:14.912 }, 00:18:14.912 "base_bdevs_list": [ 00:18:14.912 { 00:18:14.912 "name": "spare", 00:18:14.912 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:14.912 "is_configured": true, 00:18:14.912 "data_offset": 256, 00:18:14.912 "data_size": 7936 00:18:14.912 }, 00:18:14.912 { 00:18:14.912 "name": "BaseBdev2", 00:18:14.912 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:14.912 "is_configured": true, 00:18:14.912 "data_offset": 256, 00:18:14.912 "data_size": 7936 00:18:14.912 } 00:18:14.912 ] 00:18:14.912 }' 00:18:14.912 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:14.912 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.912 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:14.912 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.912 22:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:16.283 "name": "raid_bdev1", 00:18:16.283 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:16.283 "strip_size_kb": 0, 00:18:16.283 "state": "online", 00:18:16.283 "raid_level": "raid1", 00:18:16.283 "superblock": true, 00:18:16.283 "num_base_bdevs": 2, 00:18:16.283 "num_base_bdevs_discovered": 2, 00:18:16.283 "num_base_bdevs_operational": 2, 00:18:16.283 "process": { 00:18:16.283 "type": "rebuild", 00:18:16.283 "target": "spare", 00:18:16.283 "progress": { 00:18:16.283 "blocks": 7424, 00:18:16.283 "percent": 93 00:18:16.283 } 00:18:16.283 }, 00:18:16.283 "base_bdevs_list": [ 00:18:16.283 { 00:18:16.283 "name": "spare", 00:18:16.283 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:16.283 "is_configured": true, 00:18:16.283 "data_offset": 256, 00:18:16.283 "data_size": 7936 00:18:16.283 }, 00:18:16.283 { 00:18:16.283 "name": "BaseBdev2", 00:18:16.283 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:16.283 "is_configured": true, 00:18:16.283 "data_offset": 256, 00:18:16.283 "data_size": 7936 00:18:16.283 } 00:18:16.283 ] 00:18:16.283 }' 00:18:16.283 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:16.284 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.284 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:16.284 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.284 22:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:18:16.542 [2024-05-14 22:00:16.912227] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:16.542 [2024-05-14 22:00:16.912270] bdev_raid.c:2460:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:16.542 [2024-05-14 22:00:16.912334] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.474 22:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:18:17.731 "name": "raid_bdev1", 00:18:17.731 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:17.731 "strip_size_kb": 0, 00:18:17.731 "state": "online", 00:18:17.731 "raid_level": "raid1", 00:18:17.731 "superblock": true, 00:18:17.731 "num_base_bdevs": 2, 00:18:17.731 "num_base_bdevs_discovered": 2, 00:18:17.731 "num_base_bdevs_operational": 2, 00:18:17.731 "base_bdevs_list": [ 00:18:17.731 { 00:18:17.731 "name": "spare", 00:18:17.731 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:17.731 "is_configured": true, 00:18:17.731 "data_offset": 256, 00:18:17.731 "data_size": 7936 00:18:17.731 }, 00:18:17.731 { 00:18:17.731 "name": "BaseBdev2", 00:18:17.731 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:17.731 "is_configured": true, 00:18:17.731 "data_offset": 256, 00:18:17.731 "data_size": 7936 00:18:17.731 } 00:18:17.731 ] 00:18:17.731 }' 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # break 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.731 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:17.989 "name": "raid_bdev1", 00:18:17.989 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:17.989 "strip_size_kb": 0, 00:18:17.989 "state": "online", 00:18:17.989 "raid_level": "raid1", 00:18:17.989 "superblock": true, 00:18:17.989 "num_base_bdevs": 2, 00:18:17.989 "num_base_bdevs_discovered": 2, 00:18:17.989 "num_base_bdevs_operational": 2, 00:18:17.989 "base_bdevs_list": [ 00:18:17.989 { 00:18:17.989 "name": "spare", 00:18:17.989 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:17.989 "is_configured": true, 00:18:17.989 "data_offset": 256, 00:18:17.989 "data_size": 7936 00:18:17.989 }, 00:18:17.989 { 00:18:17.989 "name": "BaseBdev2", 00:18:17.989 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:17.989 "is_configured": true, 00:18:17.989 "data_offset": 256, 00:18:17.989 "data_size": 7936 00:18:17.989 } 00:18:17.989 ] 00:18:17.989 }' 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.type // "none"' 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.989 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.247 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.247 "name": "raid_bdev1", 00:18:18.247 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:18.247 "strip_size_kb": 0, 00:18:18.247 "state": "online", 00:18:18.247 "raid_level": "raid1", 00:18:18.247 "superblock": true, 00:18:18.247 "num_base_bdevs": 2, 00:18:18.247 "num_base_bdevs_discovered": 2, 00:18:18.247 "num_base_bdevs_operational": 2, 00:18:18.247 "base_bdevs_list": [ 00:18:18.247 { 00:18:18.247 "name": "spare", 00:18:18.247 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:18.247 "is_configured": true, 00:18:18.247 "data_offset": 256, 00:18:18.247 "data_size": 7936 00:18:18.247 }, 00:18:18.247 { 00:18:18.247 "name": "BaseBdev2", 00:18:18.248 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:18.248 "is_configured": true, 00:18:18.248 "data_offset": 256, 00:18:18.248 "data_size": 7936 00:18:18.248 } 00:18:18.248 ] 00:18:18.248 }' 00:18:18.248 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.248 22:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.506 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:18.764 [2024-05-14 22:00:19.332347] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.764 [2024-05-14 
22:00:19.332377] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.764 [2024-05-14 22:00:19.332401] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.764 [2024-05-14 22:00:19.332416] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.764 [2024-05-14 22:00:19.332421] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c2ca300 name raid_bdev1, state offline 00:18:19.021 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.021 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # jq length 00:18:19.279 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:18:19.279 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@727 -- # '[' false = true ']' 00:18:19.279 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:18:19.279 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:18:19.279 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:18:19.279 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:19.536 22:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:19.792 [2024-05-14 22:00:20.168366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:19.792 [2024-05-14 22:00:20.168429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.792 [2024-05-14 22:00:20.168458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c5780 00:18:19.792 [2024-05-14 22:00:20.168467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.792 [2024-05-14 22:00:20.169081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.793 [2024-05-14 22:00:20.169115] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:19.793 [2024-05-14 22:00:20.169136] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:19.793 [2024-05-14 22:00:20.169149] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.793 BaseBdev1 00:18:19.793 22:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:18:19.793 22:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:18:19.793 22:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:18:20.050 22:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p 
BaseBdev2 00:18:20.050 [2024-05-14 22:00:20.632364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:20.050 [2024-05-14 22:00:20.632424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.050 [2024-05-14 22:00:20.632452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c5c80 00:18:20.050 [2024-05-14 22:00:20.632461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.050 [2024-05-14 22:00:20.632520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.050 [2024-05-14 22:00:20.632530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:20.050 [2024-05-14 22:00:20.632549] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:18:20.050 [2024-05-14 22:00:20.632555] bdev_raid.c:3398:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:18:20.050 [2024-05-14 22:00:20.632559] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.050 [2024-05-14 22:00:20.632565] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c2ca300 name raid_bdev1, state configuring 00:18:20.050 [2024-05-14 22:00:20.632582] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.050 BaseBdev2 00:18:20.307 22:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:20.307 22:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:20.563 [2024-05-14 22:00:21.116390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:20.564 [2024-05-14 22:00:21.116453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.564 [2024-05-14 22:00:21.116480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c6400 00:18:20.564 [2024-05-14 22:00:21.116489] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.564 [2024-05-14 22:00:21.116552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.564 [2024-05-14 22:00:21.116562] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:20.564 [2024-05-14 22:00:21.116582] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:18:20.564 [2024-05-14 22:00:21.116590] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.564 spare 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.564 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.821 [2024-05-14 22:00:21.216617] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c2ca300 00:18:20.821 [2024-05-14 22:00:21.216643] bdev_raid.c:1697:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:20.821 [2024-05-14 22:00:21.216687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c328e20 00:18:20.821 [2024-05-14 22:00:21.216718] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c2ca300 00:18:20.821 [2024-05-14 22:00:21.216722] bdev_raid.c:1727:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c2ca300 00:18:20.821 [2024-05-14 22:00:21.216749] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.821 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.821 "name": "raid_bdev1", 00:18:20.821 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:20.822 "strip_size_kb": 0, 00:18:20.822 "state": "online", 00:18:20.822 "raid_level": "raid1", 00:18:20.822 "superblock": true, 00:18:20.822 "num_base_bdevs": 2, 00:18:20.822 "num_base_bdevs_discovered": 2, 00:18:20.822 "num_base_bdevs_operational": 2, 00:18:20.822 "base_bdevs_list": [ 00:18:20.822 { 00:18:20.822 "name": "spare", 00:18:20.822 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:20.822 "is_configured": true, 00:18:20.822 "data_offset": 256, 00:18:20.822 "data_size": 7936 00:18:20.822 }, 00:18:20.822 { 00:18:20.822 "name": "BaseBdev2", 00:18:20.822 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:20.822 "is_configured": true, 00:18:20.822 "data_offset": 256, 00:18:20.822 "data_size": 7936 00:18:20.822 } 00:18:20.822 ] 00:18:20.822 }' 00:18:20.822 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.822 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.386 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.386 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:21.386 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:21.386 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:21.386 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
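The process checks traced at bdev_raid.sh@188-191 use the same RPC output: while a rebuild is running the raid_bdev1 JSON carries a process object with type rebuild, target spare and progress blocks/percent counters, and once the rebuild finishes that object disappears, which is why the jq filters fall back to "none". A one-liner in the same spirit, using only the socket path and filter already present in the trace:

    # Prints "rebuild" while the background process is active, and "none"
    # once the "Finished rebuild" notice has been logged and the process
    # object is no longer part of the raid bdev JSON.
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'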
00:18:21.386 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.386 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.644 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:21.644 "name": "raid_bdev1", 00:18:21.644 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:21.644 "strip_size_kb": 0, 00:18:21.644 "state": "online", 00:18:21.644 "raid_level": "raid1", 00:18:21.644 "superblock": true, 00:18:21.644 "num_base_bdevs": 2, 00:18:21.644 "num_base_bdevs_discovered": 2, 00:18:21.644 "num_base_bdevs_operational": 2, 00:18:21.644 "base_bdevs_list": [ 00:18:21.644 { 00:18:21.644 "name": "spare", 00:18:21.644 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:21.644 "is_configured": true, 00:18:21.644 "data_offset": 256, 00:18:21.644 "data_size": 7936 00:18:21.644 }, 00:18:21.644 { 00:18:21.644 "name": "BaseBdev2", 00:18:21.644 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:21.644 "is_configured": true, 00:18:21.644 "data_offset": 256, 00:18:21.644 "data_size": 7936 00:18:21.644 } 00:18:21.644 ] 00:18:21.644 }' 00:18:21.644 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:21.644 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:21.644 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:21.644 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:21.644 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.644 22:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:21.900 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:21.901 [2024-05-14 22:00:22.468415] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.901 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.473 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.473 "name": "raid_bdev1", 00:18:22.473 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:22.473 "strip_size_kb": 0, 00:18:22.473 "state": "online", 00:18:22.473 "raid_level": "raid1", 00:18:22.473 "superblock": true, 00:18:22.473 "num_base_bdevs": 2, 00:18:22.473 "num_base_bdevs_discovered": 1, 00:18:22.473 "num_base_bdevs_operational": 1, 00:18:22.473 "base_bdevs_list": [ 00:18:22.473 { 00:18:22.473 "name": null, 00:18:22.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.473 "is_configured": false, 00:18:22.473 "data_offset": 256, 00:18:22.473 "data_size": 7936 00:18:22.473 }, 00:18:22.473 { 00:18:22.473 "name": "BaseBdev2", 00:18:22.473 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:22.473 "is_configured": true, 00:18:22.473 "data_offset": 256, 00:18:22.473 "data_size": 7936 00:18:22.473 } 00:18:22.473 ] 00:18:22.473 }' 00:18:22.473 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.473 22:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.473 22:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:23.056 [2024-05-14 22:00:23.324448] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.056 [2024-05-14 22:00:23.324520] bdev_raid.c:3413:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.056 [2024-05-14 22:00:23.324525] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:23.056 [2024-05-14 22:00:23.324560] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.056 [2024-05-14 22:00:23.324739] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c328ec0 00:18:23.056 [2024-05-14 22:00:23.325331] bdev_raid.c:2777:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.056 22:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # sleep 1 00:18:23.991 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.992 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:23.992 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:23.992 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:23.992 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:23.992 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.992 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.249 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:24.249 "name": "raid_bdev1", 00:18:24.249 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:24.249 "strip_size_kb": 0, 00:18:24.249 "state": "online", 00:18:24.249 "raid_level": "raid1", 00:18:24.249 "superblock": true, 00:18:24.249 "num_base_bdevs": 2, 00:18:24.249 "num_base_bdevs_discovered": 2, 00:18:24.249 "num_base_bdevs_operational": 2, 00:18:24.249 "process": { 00:18:24.249 "type": "rebuild", 00:18:24.249 "target": "spare", 00:18:24.249 "progress": { 00:18:24.249 "blocks": 3328, 00:18:24.249 "percent": 41 00:18:24.249 } 00:18:24.249 }, 00:18:24.249 "base_bdevs_list": [ 00:18:24.249 { 00:18:24.249 "name": "spare", 00:18:24.249 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:24.249 "is_configured": true, 00:18:24.249 "data_offset": 256, 00:18:24.249 "data_size": 7936 00:18:24.249 }, 00:18:24.249 { 00:18:24.249 "name": "BaseBdev2", 00:18:24.249 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:24.249 "is_configured": true, 00:18:24.249 "data_offset": 256, 00:18:24.249 "data_size": 7936 00:18:24.249 } 00:18:24.249 ] 00:18:24.249 }' 00:18:24.249 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:24.249 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.249 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:24.249 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.249 22:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:24.507 [2024-05-14 22:00:24.985375] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.507 [2024-05-14 22:00:25.033856] 
bdev_raid.c:2470:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:24.507 [2024-05-14 22:00:25.033915] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.507 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.765 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.765 "name": "raid_bdev1", 00:18:24.765 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:24.765 "strip_size_kb": 0, 00:18:24.765 "state": "online", 00:18:24.765 "raid_level": "raid1", 00:18:24.765 "superblock": true, 00:18:24.765 "num_base_bdevs": 2, 00:18:24.765 "num_base_bdevs_discovered": 1, 00:18:24.765 "num_base_bdevs_operational": 1, 00:18:24.765 "base_bdevs_list": [ 00:18:24.765 { 00:18:24.765 "name": null, 00:18:24.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.765 "is_configured": false, 00:18:24.765 "data_offset": 256, 00:18:24.765 "data_size": 7936 00:18:24.765 }, 00:18:24.765 { 00:18:24.765 "name": "BaseBdev2", 00:18:24.765 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:24.765 "is_configured": true, 00:18:24.765 "data_offset": 256, 00:18:24.765 "data_size": 7936 00:18:24.765 } 00:18:24.765 ] 00:18:24.765 }' 00:18:24.765 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.765 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.331 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:25.589 [2024-05-14 22:00:25.945385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:25.589 [2024-05-14 22:00:25.945461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.589 [2024-05-14 22:00:25.945506] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c6400 00:18:25.589 [2024-05-14 22:00:25.945515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.589 [2024-05-14 22:00:25.945589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.589 [2024-05-14 22:00:25.945599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.589 [2024-05-14 22:00:25.945619] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:18:25.589 [2024-05-14 22:00:25.945624] bdev_raid.c:3413:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:25.589 [2024-05-14 22:00:25.945628] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:25.589 [2024-05-14 22:00:25.945655] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.589 [2024-05-14 22:00:25.945841] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c328e20 00:18:25.589 [2024-05-14 22:00:25.946440] bdev_raid.c:2777:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.589 spare 00:18:25.589 22:00:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:26.522 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.522 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:26.522 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:26.522 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:26.522 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:26.522 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.522 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.780 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:26.780 "name": "raid_bdev1", 00:18:26.780 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:26.780 "strip_size_kb": 0, 00:18:26.780 "state": "online", 00:18:26.780 "raid_level": "raid1", 00:18:26.780 "superblock": true, 00:18:26.780 "num_base_bdevs": 2, 00:18:26.780 "num_base_bdevs_discovered": 2, 00:18:26.780 "num_base_bdevs_operational": 2, 00:18:26.780 "process": { 00:18:26.780 "type": "rebuild", 00:18:26.780 "target": "spare", 00:18:26.780 "progress": { 00:18:26.780 "blocks": 3328, 00:18:26.780 "percent": 41 00:18:26.780 } 00:18:26.780 }, 00:18:26.780 "base_bdevs_list": [ 00:18:26.780 { 00:18:26.780 "name": "spare", 00:18:26.780 "uuid": "848507f8-9652-3853-b3e5-845063751fd1", 00:18:26.780 "is_configured": true, 00:18:26.780 "data_offset": 256, 00:18:26.780 "data_size": 7936 00:18:26.780 }, 00:18:26.780 { 00:18:26.780 "name": "BaseBdev2", 00:18:26.780 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:26.780 "is_configured": true, 00:18:26.780 "data_offset": 256, 00:18:26.780 "data_size": 7936 00:18:26.780 } 
00:18:26.780 ] 00:18:26.780 }' 00:18:26.780 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:26.780 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.780 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:26.780 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.780 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:27.347 [2024-05-14 22:00:27.658054] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.347 [2024-05-14 22:00:27.755509] bdev_raid.c:2470:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:27.347 [2024-05-14 22:00:27.755605] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.347 22:00:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.606 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.606 "name": "raid_bdev1", 00:18:27.606 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:27.606 "strip_size_kb": 0, 00:18:27.606 "state": "online", 00:18:27.606 "raid_level": "raid1", 00:18:27.606 "superblock": true, 00:18:27.606 "num_base_bdevs": 2, 00:18:27.606 "num_base_bdevs_discovered": 1, 00:18:27.606 "num_base_bdevs_operational": 1, 00:18:27.606 "base_bdevs_list": [ 00:18:27.606 { 00:18:27.606 "name": null, 00:18:27.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.606 "is_configured": false, 00:18:27.606 "data_offset": 256, 00:18:27.606 "data_size": 7936 00:18:27.606 }, 00:18:27.606 { 00:18:27.606 "name": "BaseBdev2", 00:18:27.606 "uuid": 
"c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:27.606 "is_configured": true, 00:18:27.606 "data_offset": 256, 00:18:27.606 "data_size": 7936 00:18:27.606 } 00:18:27.606 ] 00:18:27.606 }' 00:18:27.606 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.606 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.864 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.864 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:27.864 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:27.864 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:27.864 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:27.864 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.864 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.434 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:28.434 "name": "raid_bdev1", 00:18:28.434 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:28.434 "strip_size_kb": 0, 00:18:28.434 "state": "online", 00:18:28.434 "raid_level": "raid1", 00:18:28.434 "superblock": true, 00:18:28.434 "num_base_bdevs": 2, 00:18:28.434 "num_base_bdevs_discovered": 1, 00:18:28.434 "num_base_bdevs_operational": 1, 00:18:28.434 "base_bdevs_list": [ 00:18:28.434 { 00:18:28.434 "name": null, 00:18:28.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.434 "is_configured": false, 00:18:28.434 "data_offset": 256, 00:18:28.434 "data_size": 7936 00:18:28.434 }, 00:18:28.434 { 00:18:28.434 "name": "BaseBdev2", 00:18:28.434 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:28.434 "is_configured": true, 00:18:28.434 "data_offset": 256, 00:18:28.434 "data_size": 7936 00:18:28.434 } 00:18:28.434 ] 00:18:28.434 }' 00:18:28.434 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:28.434 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:28.434 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:28.434 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:28.434 22:00:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:28.691 22:00:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@785 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:28.691 [2024-05-14 22:00:29.278761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:28.691 [2024-05-14 22:00:29.278821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.691 
[2024-05-14 22:00:29.278850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c2c5780 00:18:28.691 [2024-05-14 22:00:29.278859] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.691 [2024-05-14 22:00:29.278918] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.691 [2024-05-14 22:00:29.278933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.692 [2024-05-14 22:00:29.278954] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:28.692 [2024-05-14 22:00:29.278960] bdev_raid.c:3413:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.692 [2024-05-14 22:00:29.278964] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:28.949 BaseBdev1 00:18:28.949 22:00:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # sleep 1 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.881 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.177 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.177 "name": "raid_bdev1", 00:18:30.177 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:30.177 "strip_size_kb": 0, 00:18:30.177 "state": "online", 00:18:30.177 "raid_level": "raid1", 00:18:30.177 "superblock": true, 00:18:30.177 "num_base_bdevs": 2, 00:18:30.177 "num_base_bdevs_discovered": 1, 00:18:30.177 "num_base_bdevs_operational": 1, 00:18:30.177 "base_bdevs_list": [ 00:18:30.177 { 00:18:30.177 "name": null, 00:18:30.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.177 "is_configured": false, 00:18:30.177 "data_offset": 256, 00:18:30.177 "data_size": 7936 00:18:30.177 }, 00:18:30.177 { 00:18:30.177 "name": "BaseBdev2", 00:18:30.177 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:30.177 "is_configured": true, 00:18:30.177 
"data_offset": 256, 00:18:30.177 "data_size": 7936 00:18:30.177 } 00:18:30.177 ] 00:18:30.177 }' 00:18:30.177 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.177 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.435 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.435 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:30.435 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:30.435 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:30.435 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:30.435 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.435 22:00:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.694 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:30.694 "name": "raid_bdev1", 00:18:30.694 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:30.694 "strip_size_kb": 0, 00:18:30.694 "state": "online", 00:18:30.694 "raid_level": "raid1", 00:18:30.694 "superblock": true, 00:18:30.694 "num_base_bdevs": 2, 00:18:30.694 "num_base_bdevs_discovered": 1, 00:18:30.694 "num_base_bdevs_operational": 1, 00:18:30.694 "base_bdevs_list": [ 00:18:30.694 { 00:18:30.694 "name": null, 00:18:30.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.694 "is_configured": false, 00:18:30.694 "data_offset": 256, 00:18:30.694 "data_size": 7936 00:18:30.694 }, 00:18:30.694 { 00:18:30.694 "name": "BaseBdev2", 00:18:30.694 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:30.694 "is_configured": true, 00:18:30.694 "data_offset": 256, 00:18:30.694 "data_size": 7936 00:18:30.694 } 00:18:30.694 ] 00:18:30.694 }' 00:18:30.694 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:30.694 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:30.694 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:30.694 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:30.694 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.695 22:00:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:30.695 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.953 [2024-05-14 22:00:31.478802] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.953 [2024-05-14 22:00:31.478880] bdev_raid.c:3413:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.953 [2024-05-14 22:00:31.478886] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:30.953 request: 00:18:30.953 { 00:18:30.953 "raid_bdev": "raid_bdev1", 00:18:30.953 "base_bdev": "BaseBdev1", 00:18:30.953 "method": "bdev_raid_add_base_bdev", 00:18:30.953 "req_id": 1 00:18:30.953 } 00:18:30.953 Got JSON-RPC error response 00:18:30.954 response: 00:18:30.954 { 00:18:30.954 "code": -22, 00:18:30.954 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:30.954 } 00:18:30.954 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:18:30.954 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.954 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.954 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.954 22:00:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # sleep 1 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 
-- # local raid_bdev_info 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.327 "name": "raid_bdev1", 00:18:32.327 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:32.327 "strip_size_kb": 0, 00:18:32.327 "state": "online", 00:18:32.327 "raid_level": "raid1", 00:18:32.327 "superblock": true, 00:18:32.327 "num_base_bdevs": 2, 00:18:32.327 "num_base_bdevs_discovered": 1, 00:18:32.327 "num_base_bdevs_operational": 1, 00:18:32.327 "base_bdevs_list": [ 00:18:32.327 { 00:18:32.327 "name": null, 00:18:32.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.327 "is_configured": false, 00:18:32.327 "data_offset": 256, 00:18:32.327 "data_size": 7936 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "name": "BaseBdev2", 00:18:32.327 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:32.327 "is_configured": true, 00:18:32.327 "data_offset": 256, 00:18:32.327 "data_size": 7936 00:18:32.327 } 00:18:32.327 ] 00:18:32.327 }' 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.327 22:00:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.891 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.891 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:32.891 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:32.891 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:32.891 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:32.891 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.891 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.148 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:33.148 "name": "raid_bdev1", 00:18:33.148 "uuid": "545d11d9-123d-11ef-8c90-4585f0cfab08", 00:18:33.148 "strip_size_kb": 0, 00:18:33.148 "state": "online", 00:18:33.148 "raid_level": "raid1", 00:18:33.148 "superblock": true, 00:18:33.148 "num_base_bdevs": 2, 00:18:33.148 "num_base_bdevs_discovered": 1, 00:18:33.148 "num_base_bdevs_operational": 1, 00:18:33.148 "base_bdevs_list": [ 00:18:33.148 { 00:18:33.148 "name": null, 00:18:33.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.148 "is_configured": false, 
00:18:33.149 "data_offset": 256, 00:18:33.149 "data_size": 7936 00:18:33.149 }, 00:18:33.149 { 00:18:33.149 "name": "BaseBdev2", 00:18:33.149 "uuid": "c6b11edd-3e8f-335a-bb41-7de10a03325e", 00:18:33.149 "is_configured": true, 00:18:33.149 "data_offset": 256, 00:18:33.149 "data_size": 7936 00:18:33.149 } 00:18:33.149 ] 00:18:33.149 }' 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # killprocess 65424 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 65424 ']' 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 65424 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 65424 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:18:33.149 killing process with pid 65424 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65424' 00:18:33.149 Received shutdown signal, test time was about 60.000000 seconds 00:18:33.149 00:18:33.149 Latency(us) 00:18:33.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.149 =================================================================================================================== 00:18:33.149 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 65424 00:18:33.149 [2024-05-14 22:00:33.507605] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.149 [2024-05-14 22:00:33.507641] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 65424 00:18:33.149 [2024-05-14 22:00:33.507658] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.149 [2024-05-14 22:00:33.507665] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c2ca300 name raid_bdev1, state offline 00:18:33.149 [2024-05-14 22:00:33.525387] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@797 -- # return 0 00:18:33.149 00:18:33.149 real 0m27.877s 00:18:33.149 
user 0m43.526s 00:18:33.149 sys 0m2.815s 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:33.149 22:00:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.149 ************************************ 00:18:33.149 END TEST raid_rebuild_test_sb_md_interleaved 00:18:33.149 ************************************ 00:18:33.149 22:00:33 bdev_raid -- bdev/bdev_raid.sh@862 -- # rm -f /raidrandtest 00:18:33.406 00:18:33.406 real 9m47.555s 00:18:33.406 user 17m29.332s 00:18:33.406 sys 1m26.671s 00:18:33.406 22:00:33 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:33.406 22:00:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.406 ************************************ 00:18:33.406 END TEST bdev_raid 00:18:33.406 ************************************ 00:18:33.406 22:00:33 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:33.406 22:00:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:33.406 22:00:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:33.406 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:18:33.406 ************************************ 00:18:33.406 START TEST bdevperf_config 00:18:33.406 ************************************ 00:18:33.406 22:00:33 bdevperf_config -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:33.406 * Looking for test storage... 00:18:33.406 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:33.406 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ 
job0 == \g\l\o\b\a\l ]] 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:33.406 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:33.406 22:00:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:33.407 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:33.407 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:33.407 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:33.407 22:00:33 bdevperf_config -- bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:36.692 22:00:36 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-14 22:00:33.933120] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:36.692 [2024-05-14 22:00:33.933288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:36.692 Using job config with 4 jobs 00:18:36.692 EAL: TSC is not safe to use in SMP mode 00:18:36.692 EAL: TSC is not invariant 00:18:36.692 [2024-05-14 22:00:34.455523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.692 [2024-05-14 22:00:34.567494] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:36.692 [2024-05-14 22:00:34.569849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.692 cpumask for '\''job0'\'' is too big 00:18:36.692 cpumask for '\''job1'\'' is too big 00:18:36.692 cpumask for '\''job2'\'' is too big 00:18:36.692 cpumask for '\''job3'\'' is too big 00:18:36.692 Running I/O for 2 seconds... 00:18:36.692 00:18:36.692 Latency(us) 00:18:36.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308293.17 301.07 0.00 0.00 830.09 216.90 1608.60 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308280.54 301.06 0.00 0.00 829.94 216.90 1370.29 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308323.26 301.10 0.00 0.00 829.61 209.45 1102.19 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308306.01 301.08 0.00 0.00 829.50 180.60 893.67 00:18:36.692 =================================================================================================================== 00:18:36.692 Total : 1233202.99 1204.30 0.00 0.00 829.79 180.60 1608.60' 00:18:36.692 22:00:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-14 22:00:33.933120] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:36.692 [2024-05-14 22:00:33.933288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:36.692 Using job config with 4 jobs 00:18:36.692 EAL: TSC is not safe to use in SMP mode 00:18:36.692 EAL: TSC is not invariant 00:18:36.692 [2024-05-14 22:00:34.455523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.692 [2024-05-14 22:00:34.567494] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:36.692 [2024-05-14 22:00:34.569849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.692 cpumask for '\''job0'\'' is too big 00:18:36.692 cpumask for '\''job1'\'' is too big 00:18:36.692 cpumask for '\''job2'\'' is too big 00:18:36.692 cpumask for '\''job3'\'' is too big 00:18:36.692 Running I/O for 2 seconds... 
00:18:36.692 00:18:36.692 Latency(us) 00:18:36.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308293.17 301.07 0.00 0.00 830.09 216.90 1608.60 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308280.54 301.06 0.00 0.00 829.94 216.90 1370.29 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308323.26 301.10 0.00 0.00 829.61 209.45 1102.19 00:18:36.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.692 Malloc0 : 2.00 308306.01 301.08 0.00 0.00 829.50 180.60 893.67 00:18:36.692 =================================================================================================================== 00:18:36.692 Total : 1233202.99 1204.30 0.00 0.00 829.79 180.60 1608.60' 00:18:36.692 22:00:36 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-14 22:00:33.933120] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:36.692 [2024-05-14 22:00:33.933288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:36.692 Using job config with 4 jobs 00:18:36.692 EAL: TSC is not safe to use in SMP mode 00:18:36.692 EAL: TSC is not invariant 00:18:36.692 [2024-05-14 22:00:34.455523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.692 [2024-05-14 22:00:34.567494] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:36.692 [2024-05-14 22:00:34.569849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.692 cpumask for '\''job0'\'' is too big 00:18:36.692 cpumask for '\''job1'\'' is too big 00:18:36.692 cpumask for '\''job2'\'' is too big 00:18:36.692 cpumask for '\''job3'\'' is too big 00:18:36.692 Running I/O for 2 seconds... 
00:18:36.692 00:18:36.692 Latency(us) 00:18:36.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.693 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.693 Malloc0 : 2.00 308293.17 301.07 0.00 0.00 830.09 216.90 1608.60 00:18:36.693 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.693 Malloc0 : 2.00 308280.54 301.06 0.00 0.00 829.94 216.90 1370.29 00:18:36.693 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.693 Malloc0 : 2.00 308323.26 301.10 0.00 0.00 829.61 209.45 1102.19 00:18:36.693 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:36.693 Malloc0 : 2.00 308306.01 301.08 0.00 0.00 829.50 180.60 893.67 00:18:36.693 =================================================================================================================== 00:18:36.693 Total : 1233202.99 1204.30 0.00 0.00 829.79 180.60 1608.60' 00:18:36.693 22:00:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:36.693 22:00:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:36.693 22:00:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:18:36.693 22:00:36 bdevperf_config -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:36.693 [2024-05-14 22:00:36.821257] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:36.693 [2024-05-14 22:00:36.821511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:36.952 EAL: TSC is not safe to use in SMP mode 00:18:36.952 EAL: TSC is not invariant 00:18:36.952 [2024-05-14 22:00:37.375676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.952 [2024-05-14 22:00:37.467092] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:36.952 [2024-05-14 22:00:37.469544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.952 cpumask for 'job0' is too big 00:18:36.952 cpumask for 'job1' is too big 00:18:36.952 cpumask for 'job2' is too big 00:18:36.952 cpumask for 'job3' is too big 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:18:39.503 Running I/O for 2 seconds... 
00:18:39.503 00:18:39.503 Latency(us) 00:18:39.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:39.503 Malloc0 : 2.00 296588.95 289.64 0.00 0.00 862.84 217.83 1802.23 00:18:39.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:39.503 Malloc0 : 2.00 296576.94 289.63 0.00 0.00 862.67 226.21 1802.23 00:18:39.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:39.503 Malloc0 : 2.00 296559.31 289.61 0.00 0.00 862.52 211.32 1787.34 00:18:39.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:39.503 Malloc0 : 2.00 296633.26 289.68 0.00 0.00 862.12 74.01 1787.34 00:18:39.503 =================================================================================================================== 00:18:39.503 Total : 1186358.46 1158.55 0.00 0.00 862.54 74.01 1802.23' 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:39.503 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:39.503 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:39.503 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:39.503 22:00:39 bdevperf_config -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:42.786 22:00:42 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-14 22:00:39.756199] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:42.786 [2024-05-14 22:00:39.756420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:42.786 Using job config with 3 jobs 00:18:42.786 EAL: TSC is not safe to use in SMP mode 00:18:42.786 EAL: TSC is not invariant 00:18:42.786 [2024-05-14 22:00:40.296027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.786 [2024-05-14 22:00:40.386743] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:42.786 [2024-05-14 22:00:40.389110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.786 cpumask for '\''job0'\'' is too big 00:18:42.786 cpumask for '\''job1'\'' is too big 00:18:42.786 cpumask for '\''job2'\'' is too big 00:18:42.786 Running I/O for 2 seconds... 00:18:42.786 00:18:42.786 Latency(us) 00:18:42.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375110.22 366.32 0.00 0.00 682.16 255.07 1318.16 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375081.18 366.29 0.00 0.00 682.05 212.25 1295.82 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375056.26 366.27 0.00 0.00 681.95 212.25 1333.06 00:18:42.786 =================================================================================================================== 00:18:42.786 Total : 1125247.65 1098.87 0.00 0.00 682.06 212.25 1333.06' 00:18:42.786 22:00:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-14 22:00:39.756199] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:42.786 [2024-05-14 22:00:39.756420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:42.786 Using job config with 3 jobs 00:18:42.786 EAL: TSC is not safe to use in SMP mode 00:18:42.786 EAL: TSC is not invariant 00:18:42.786 [2024-05-14 22:00:40.296027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.786 [2024-05-14 22:00:40.386743] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:42.786 [2024-05-14 22:00:40.389110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.786 cpumask for '\''job0'\'' is too big 00:18:42.786 cpumask for '\''job1'\'' is too big 00:18:42.786 cpumask for '\''job2'\'' is too big 00:18:42.786 Running I/O for 2 seconds... 
00:18:42.786 00:18:42.786 Latency(us) 00:18:42.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375110.22 366.32 0.00 0.00 682.16 255.07 1318.16 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375081.18 366.29 0.00 0.00 682.05 212.25 1295.82 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375056.26 366.27 0.00 0.00 681.95 212.25 1333.06 00:18:42.786 =================================================================================================================== 00:18:42.786 Total : 1125247.65 1098.87 0.00 0.00 682.06 212.25 1333.06' 00:18:42.786 22:00:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-14 22:00:39.756199] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:42.786 [2024-05-14 22:00:39.756420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:42.786 Using job config with 3 jobs 00:18:42.786 EAL: TSC is not safe to use in SMP mode 00:18:42.786 EAL: TSC is not invariant 00:18:42.786 [2024-05-14 22:00:40.296027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.786 [2024-05-14 22:00:40.386743] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:42.786 [2024-05-14 22:00:40.389110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.786 cpumask for '\''job0'\'' is too big 00:18:42.786 cpumask for '\''job1'\'' is too big 00:18:42.786 cpumask for '\''job2'\'' is too big 00:18:42.786 Running I/O for 2 seconds... 
00:18:42.786 00:18:42.786 Latency(us) 00:18:42.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375110.22 366.32 0.00 0.00 682.16 255.07 1318.16 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375081.18 366.29 0.00 0.00 682.05 212.25 1295.82 00:18:42.786 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:42.786 Malloc0 : 2.00 375056.26 366.27 0.00 0.00 681.95 212.25 1333.06 00:18:42.786 =================================================================================================================== 00:18:42.786 Total : 1125247.65 1098.87 0.00 0.00 682.06 212.25 1333.06' 00:18:42.786 22:00:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:42.786 22:00:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:42.787 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:42.787 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:42.787 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 
00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:42.787 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:42.787 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:42.787 22:00:42 bdevperf_config -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:45.314 22:00:45 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-14 22:00:42.667145] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:45.314 [2024-05-14 22:00:42.667383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:45.314 Using job config with 4 jobs 00:18:45.314 EAL: TSC is not safe to use in SMP mode 00:18:45.314 EAL: TSC is not invariant 00:18:45.314 [2024-05-14 22:00:43.227507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.314 [2024-05-14 22:00:43.335457] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:45.314 [2024-05-14 22:00:43.338471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.314 cpumask for '\''job0'\'' is too big 00:18:45.314 cpumask for '\''job1'\'' is too big 00:18:45.314 cpumask for '\''job2'\'' is too big 00:18:45.314 cpumask for '\''job3'\'' is too big 00:18:45.314 Running I/O for 2 seconds... 
00:18:45.314 00:18:45.314 Latency(us) 00:18:45.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.314 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.314 Malloc0 : 2.00 138145.38 134.91 0.00 0.00 1852.57 644.19 4230.03 00:18:45.314 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.314 Malloc1 : 2.00 138138.21 134.90 0.00 0.00 1852.41 603.23 4259.82 00:18:45.314 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.314 Malloc0 : 2.00 138129.45 134.89 0.00 0.00 1851.66 577.16 3649.15 00:18:45.314 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.314 Malloc1 : 2.00 138120.49 134.88 0.00 0.00 1851.52 558.54 3634.25 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138176.05 134.94 0.00 0.00 1849.94 588.33 3023.58 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138167.44 134.93 0.00 0.00 1849.76 573.44 3023.58 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138159.86 134.92 0.00 0.00 1849.10 528.75 2740.59 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138150.83 134.91 0.00 0.00 1848.88 441.25 2770.37 00:18:45.315 =================================================================================================================== 00:18:45.315 Total : 1105187.70 1079.28 0.00 0.00 1850.73 441.25 4259.82' 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-14 22:00:42.667145] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:45.315 [2024-05-14 22:00:42.667383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:45.315 Using job config with 4 jobs 00:18:45.315 EAL: TSC is not safe to use in SMP mode 00:18:45.315 EAL: TSC is not invariant 00:18:45.315 [2024-05-14 22:00:43.227507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.315 [2024-05-14 22:00:43.335457] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:45.315 [2024-05-14 22:00:43.338471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.315 cpumask for '\''job0'\'' is too big 00:18:45.315 cpumask for '\''job1'\'' is too big 00:18:45.315 cpumask for '\''job2'\'' is too big 00:18:45.315 cpumask for '\''job3'\'' is too big 00:18:45.315 Running I/O for 2 seconds... 
00:18:45.315 00:18:45.315 Latency(us) 00:18:45.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138145.38 134.91 0.00 0.00 1852.57 644.19 4230.03 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138138.21 134.90 0.00 0.00 1852.41 603.23 4259.82 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138129.45 134.89 0.00 0.00 1851.66 577.16 3649.15 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138120.49 134.88 0.00 0.00 1851.52 558.54 3634.25 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138176.05 134.94 0.00 0.00 1849.94 588.33 3023.58 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138167.44 134.93 0.00 0.00 1849.76 573.44 3023.58 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138159.86 134.92 0.00 0.00 1849.10 528.75 2740.59 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138150.83 134.91 0.00 0.00 1848.88 441.25 2770.37 00:18:45.315 =================================================================================================================== 00:18:45.315 Total : 1105187.70 1079.28 0.00 0.00 1850.73 441.25 4259.82' 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-14 22:00:42.667145] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:45.315 [2024-05-14 22:00:42.667383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:45.315 Using job config with 4 jobs 00:18:45.315 EAL: TSC is not safe to use in SMP mode 00:18:45.315 EAL: TSC is not invariant 00:18:45.315 [2024-05-14 22:00:43.227507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.315 [2024-05-14 22:00:43.335457] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:45.315 [2024-05-14 22:00:43.338471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.315 cpumask for '\''job0'\'' is too big 00:18:45.315 cpumask for '\''job1'\'' is too big 00:18:45.315 cpumask for '\''job2'\'' is too big 00:18:45.315 cpumask for '\''job3'\'' is too big 00:18:45.315 Running I/O for 2 seconds... 
00:18:45.315 00:18:45.315 Latency(us) 00:18:45.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138145.38 134.91 0.00 0.00 1852.57 644.19 4230.03 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138138.21 134.90 0.00 0.00 1852.41 603.23 4259.82 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138129.45 134.89 0.00 0.00 1851.66 577.16 3649.15 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138120.49 134.88 0.00 0.00 1851.52 558.54 3634.25 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138176.05 134.94 0.00 0.00 1849.94 588.33 3023.58 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138167.44 134.93 0.00 0.00 1849.76 573.44 3023.58 00:18:45.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc0 : 2.00 138159.86 134.92 0.00 0.00 1849.10 528.75 2740.59 00:18:45.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:45.315 Malloc1 : 2.00 138150.83 134.91 0.00 0.00 1848.88 441.25 2770.37 00:18:45.315 =================================================================================================================== 00:18:45.315 Total : 1105187.70 1079.28 0.00 0.00 1850.73 441.25 4259.82' 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:45.315 22:00:45 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:45.315 00:18:45.315 real 0m11.818s 00:18:45.315 user 0m9.289s 00:18:45.315 sys 0m2.497s 00:18:45.315 22:00:45 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.315 ************************************ 00:18:45.315 END TEST bdevperf_config 00:18:45.315 ************************************ 00:18:45.315 22:00:45 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:18:45.315 22:00:45 -- spdk/autotest.sh@188 -- # uname -s 00:18:45.315 22:00:45 -- spdk/autotest.sh@188 -- # [[ FreeBSD == Linux ]] 00:18:45.315 22:00:45 -- spdk/autotest.sh@194 -- # uname -s 00:18:45.315 22:00:45 -- spdk/autotest.sh@194 -- # [[ FreeBSD == Linux ]] 00:18:45.315 22:00:45 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:18:45.315 22:00:45 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:45.315 22:00:45 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.315 22:00:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.315 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.315 ************************************ 00:18:45.315 START TEST blockdev_nvme 00:18:45.315 
************************************ 00:18:45.315 22:00:45 blockdev_nvme -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:45.315 * Looking for test storage... 00:18:45.315 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:45.315 22:00:45 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66171 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:45.315 22:00:45 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66171 00:18:45.316 22:00:45 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 66171 ']' 00:18:45.316 22:00:45 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.316 22:00:45 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:45.316 22:00:45 blockdev_nvme -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:45.316 22:00:45 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:45.316 22:00:45 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:45.316 22:00:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:45.316 [2024-05-14 22:00:45.815945] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:45.316 [2024-05-14 22:00:45.816252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:45.882 EAL: TSC is not safe to use in SMP mode 00:18:45.882 EAL: TSC is not invariant 00:18:45.882 [2024-05-14 22:00:46.374830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.882 [2024-05-14 22:00:46.463564] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:45.882 [2024-05-14 22:00:46.465789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.481 22:00:46 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:46.481 22:00:46 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:18:46.481 22:00:46 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:18:46.481 22:00:46 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:18:46.482 22:00:46 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:18:46.482 22:00:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:46.482 22:00:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:46.482 22:00:46 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:18:46.482 22:00:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.482 22:00:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:46.482 [2024-05-14 22:00:47.012185] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:46.482 22:00:47 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.482 22:00:47 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:18:46.482 22:00:47 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.482 22:00:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.740 22:00:47 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6b155342-123d-11ef-8c90-4585f0cfab08"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6b155342-123d-11ef-8c90-4585f0cfab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:18:46.740 22:00:47 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66171 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 66171 ']' 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 66171 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@954 -- # ps -c -o command 66171 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@954 -- # tail -1 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:18:46.740 killing process with pid 66171 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66171' 00:18:46.740 22:00:47 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 66171 00:18:46.740 
22:00:47 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 66171 00:18:47.002 22:00:47 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:47.002 22:00:47 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:47.002 22:00:47 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:47.002 22:00:47 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:47.002 22:00:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:47.002 ************************************ 00:18:47.002 START TEST bdev_hello_world 00:18:47.002 ************************************ 00:18:47.002 22:00:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:47.002 [2024-05-14 22:00:47.452027] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:47.002 [2024-05-14 22:00:47.452295] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:47.566 EAL: TSC is not safe to use in SMP mode 00:18:47.566 EAL: TSC is not invariant 00:18:47.566 [2024-05-14 22:00:48.005646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.566 [2024-05-14 22:00:48.098893] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:47.566 [2024-05-14 22:00:48.101358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.824 [2024-05-14 22:00:48.160147] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:47.824 [2024-05-14 22:00:48.228295] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:47.824 [2024-05-14 22:00:48.228347] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:47.824 [2024-05-14 22:00:48.228359] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:47.824 [2024-05-14 22:00:48.229090] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:47.824 [2024-05-14 22:00:48.229347] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:47.824 [2024-05-14 22:00:48.229371] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:47.824 [2024-05-14 22:00:48.229524] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:47.824 00:18:47.824 [2024-05-14 22:00:48.229546] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:48.083 00:18:48.083 real 0m0.974s 00:18:48.083 user 0m0.366s 00:18:48.083 sys 0m0.607s 00:18:48.083 22:00:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.083 22:00:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:48.083 ************************************ 00:18:48.083 END TEST bdev_hello_world 00:18:48.083 ************************************ 00:18:48.083 22:00:48 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:18:48.083 22:00:48 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:48.083 22:00:48 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:48.083 22:00:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:48.083 ************************************ 00:18:48.083 START TEST bdev_bounds 00:18:48.083 ************************************ 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66242 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:48.083 Process bdevio pid: 66242 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66242' 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66242 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 66242 ']' 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.083 22:00:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:48.083 [2024-05-14 22:00:48.469648] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:48.083 [2024-05-14 22:00:48.469884] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:48.649 EAL: TSC is not safe to use in SMP mode 00:18:48.649 EAL: TSC is not invariant 00:18:48.649 [2024-05-14 22:00:48.997179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.649 [2024-05-14 22:00:49.082192] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:48.649 [2024-05-14 22:00:49.082299] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:48.649 [2024-05-14 22:00:49.082317] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 
00:18:48.649 [2024-05-14 22:00:49.086253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.649 [2024-05-14 22:00:49.086138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.649 [2024-05-14 22:00:49.086248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.649 [2024-05-14 22:00:49.144594] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:49.213 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:49.213 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:18:49.213 22:00:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:49.213 I/O targets: 00:18:49.213 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:49.213 00:18:49.213 00:18:49.213 CUnit - A unit testing framework for C - Version 2.1-3 00:18:49.213 http://cunit.sourceforge.net/ 00:18:49.213 00:18:49.213 00:18:49.213 Suite: bdevio tests on: Nvme0n1 00:18:49.213 Test: blockdev write read block ...passed 00:18:49.213 Test: blockdev write zeroes read block ...passed 00:18:49.213 Test: blockdev write zeroes read no split ...passed 00:18:49.213 Test: blockdev write zeroes read split ...passed 00:18:49.213 Test: blockdev write zeroes read split partial ...passed 00:18:49.213 Test: blockdev reset ...[2024-05-14 22:00:49.618633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:49.213 [2024-05-14 22:00:49.619953] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:49.213 passed 00:18:49.213 Test: blockdev write read 8 blocks ...passed 00:18:49.213 Test: blockdev write read size > 128k ...passed 00:18:49.213 Test: blockdev write read invalid size ...passed 00:18:49.213 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:49.213 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:49.213 Test: blockdev write read max offset ...passed 00:18:49.213 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:49.213 Test: blockdev writev readv 8 blocks ...passed 00:18:49.213 Test: blockdev writev readv 30 x 1block ...passed 00:18:49.213 Test: blockdev writev readv block ...passed 00:18:49.213 Test: blockdev writev readv size > 128k ...passed 00:18:49.213 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:49.213 Test: blockdev comparev and writev ...[2024-05-14 22:00:49.624277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7947000 len:0x1000 00:18:49.213 [2024-05-14 22:00:49.624330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:49.213 passed 00:18:49.213 Test: blockdev nvme passthru rw ...passed 00:18:49.213 Test: blockdev nvme passthru vendor specific ...passed 00:18:49.213 Test: blockdev nvme admin passthru ...[2024-05-14 22:00:49.624896] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:49.213 [2024-05-14 22:00:49.624917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:49.213 passed 00:18:49.213 Test: blockdev copy ...passed 00:18:49.213 00:18:49.213 Run Summary: Type Total Ran Passed Failed 
Inactive 00:18:49.213 suites 1 1 n/a 0 0 00:18:49.213 tests 23 23 23 0 0 00:18:49.213 asserts 152 152 152 0 n/a 00:18:49.213 00:18:49.213 Elapsed time = 0.039 seconds 00:18:49.213 0 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66242 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 66242 ']' 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 66242 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps -c -o command 66242 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # tail -1 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=bdevio 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' bdevio = sudo ']' 00:18:49.214 killing process with pid 66242 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66242' 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 66242 00:18:49.214 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 66242 00:18:49.471 22:00:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:18:49.471 00:18:49.472 real 0m1.378s 00:18:49.472 user 0m2.658s 00:18:49.472 sys 0m0.604s 00:18:49.472 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:49.472 22:00:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:49.472 ************************************ 00:18:49.472 END TEST bdev_bounds 00:18:49.472 ************************************ 00:18:49.472 22:00:49 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:49.472 22:00:49 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:49.472 22:00:49 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:49.472 22:00:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.472 ************************************ 00:18:49.472 START TEST bdev_nbd 00:18:49.472 ************************************ 00:18:49.472 22:00:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:49.472 22:00:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:18:49.472 22:00:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:18:49.472 22:00:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:18:49.472 00:18:49.472 real 0m0.004s 00:18:49.472 user 0m0.005s 00:18:49.472 sys 0m0.001s 00:18:49.472 22:00:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:49.472 22:00:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:49.472 ************************************ 00:18:49.472 END TEST bdev_nbd 00:18:49.472 ************************************ 00:18:49.472 22:00:49 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:18:49.472 22:00:49 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 
00:18:49.472 skipping fio tests on NVMe due to multi-ns failures. 00:18:49.472 22:00:49 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:18:49.472 22:00:49 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:49.472 22:00:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:49.472 22:00:49 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:18:49.472 22:00:49 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:49.472 22:00:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.472 ************************************ 00:18:49.472 START TEST bdev_verify 00:18:49.472 ************************************ 00:18:49.472 22:00:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:49.472 [2024-05-14 22:00:49.942513] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:49.472 [2024-05-14 22:00:49.942789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:50.066 EAL: TSC is not safe to use in SMP mode 00:18:50.066 EAL: TSC is not invariant 00:18:50.066 [2024-05-14 22:00:50.479382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:50.066 [2024-05-14 22:00:50.569764] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:50.066 [2024-05-14 22:00:50.569827] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:50.066 [2024-05-14 22:00:50.572708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.066 [2024-05-14 22:00:50.572695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.066 [2024-05-14 22:00:50.631637] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:50.323 Running I/O for 5 seconds... 
00:18:55.613 00:18:55.613 Latency(us) 00:18:55.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.613 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:55.613 Verification LBA range: start 0x0 length 0xa0000 00:18:55.613 Nvme0n1 : 5.01 20419.44 79.76 0.00 0.00 6260.22 763.34 10306.98 00:18:55.613 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.613 Verification LBA range: start 0xa0000 length 0xa0000 00:18:55.613 Nvme0n1 : 5.00 21048.49 82.22 0.00 0.00 6072.67 673.98 11975.17 00:18:55.613 =================================================================================================================== 00:18:55.613 Total : 41467.93 161.98 0.00 0.00 6165.03 673.98 11975.17 00:18:55.872 00:18:55.872 real 0m6.492s 00:18:55.872 user 0m11.577s 00:18:55.872 sys 0m0.615s 00:18:55.872 ************************************ 00:18:55.872 END TEST bdev_verify 00:18:55.872 ************************************ 00:18:55.872 22:00:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:55.872 22:00:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:56.129 22:00:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:56.129 22:00:56 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:18:56.129 22:00:56 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:56.129 22:00:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:56.129 ************************************ 00:18:56.129 START TEST bdev_verify_big_io 00:18:56.129 ************************************ 00:18:56.129 22:00:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:56.129 [2024-05-14 22:00:56.482537] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:56.129 [2024-05-14 22:00:56.482789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:56.694 EAL: TSC is not safe to use in SMP mode 00:18:56.694 EAL: TSC is not invariant 00:18:56.694 [2024-05-14 22:00:57.010019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:56.694 [2024-05-14 22:00:57.101310] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:56.694 [2024-05-14 22:00:57.101411] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:56.694 [2024-05-14 22:00:57.104349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.694 [2024-05-14 22:00:57.104331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.694 [2024-05-14 22:00:57.163163] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:56.694 Running I/O for 5 seconds... 
00:19:01.947 00:19:01.947 Latency(us) 00:19:01.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.947 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:01.947 Verification LBA range: start 0x0 length 0xa000 00:19:01.947 Nvme0n1 : 5.01 8248.45 515.53 0.00 0.00 15432.97 592.06 26810.08 00:19:01.947 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:01.947 Verification LBA range: start 0xa000 length 0xa000 00:19:01.947 Nvme0n1 : 5.01 8342.83 521.43 0.00 0.00 15257.56 312.78 30742.22 00:19:01.947 =================================================================================================================== 00:19:01.947 Total : 16591.29 1036.96 0.00 0.00 15344.79 312.78 30742.22 00:19:05.227 00:19:05.227 real 0m9.216s 00:19:05.227 user 0m17.100s 00:19:05.227 sys 0m0.564s 00:19:05.227 22:01:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:05.227 ************************************ 00:19:05.227 END TEST bdev_verify_big_io 00:19:05.227 ************************************ 00:19:05.227 22:01:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.227 22:01:05 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:05.227 22:01:05 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:19:05.227 22:01:05 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:05.227 22:01:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.227 ************************************ 00:19:05.227 START TEST bdev_write_zeroes 00:19:05.227 ************************************ 00:19:05.227 22:01:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:05.227 [2024-05-14 22:01:05.747586] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:19:05.227 [2024-05-14 22:01:05.747844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:05.792 EAL: TSC is not safe to use in SMP mode 00:19:05.792 EAL: TSC is not invariant 00:19:05.792 [2024-05-14 22:01:06.288763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.792 [2024-05-14 22:01:06.381252] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:06.048 [2024-05-14 22:01:06.383659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.048 [2024-05-14 22:01:06.443553] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:06.048 Running I/O for 1 seconds... 
00:19:06.980 00:19:06.980 Latency(us) 00:19:06.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.980 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:06.980 Nvme0n1 : 1.00 67321.27 262.97 0.00 0.00 1899.37 394.70 17158.45 00:19:06.980 =================================================================================================================== 00:19:06.980 Total : 67321.27 262.97 0.00 0.00 1899.37 394.70 17158.45 00:19:07.238 00:19:07.238 real 0m1.970s 00:19:07.238 user 0m1.368s 00:19:07.238 sys 0m0.600s 00:19:07.238 22:01:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:07.238 22:01:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:07.238 ************************************ 00:19:07.238 END TEST bdev_write_zeroes 00:19:07.238 ************************************ 00:19:07.238 22:01:07 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:07.238 22:01:07 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:19:07.238 22:01:07 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:07.238 22:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.238 ************************************ 00:19:07.238 START TEST bdev_json_nonenclosed 00:19:07.238 ************************************ 00:19:07.238 22:01:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:07.238 [2024-05-14 22:01:07.766766] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:19:07.238 [2024-05-14 22:01:07.766961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:07.803 EAL: TSC is not safe to use in SMP mode 00:19:07.803 EAL: TSC is not invariant 00:19:07.803 [2024-05-14 22:01:08.313544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.065 [2024-05-14 22:01:08.404976] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:08.065 [2024-05-14 22:01:08.407258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.065 [2024-05-14 22:01:08.407300] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:19:08.065 [2024-05-14 22:01:08.407315] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:08.065 [2024-05-14 22:01:08.407323] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:08.065 00:19:08.065 real 0m0.767s 00:19:08.065 user 0m0.166s 00:19:08.065 sys 0m0.598s 00:19:08.065 22:01:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:08.065 ************************************ 00:19:08.065 END TEST bdev_json_nonenclosed 00:19:08.065 ************************************ 00:19:08.065 22:01:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:08.065 22:01:08 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.065 22:01:08 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:19:08.065 22:01:08 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:08.065 22:01:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.065 ************************************ 00:19:08.065 START TEST bdev_json_nonarray 00:19:08.065 ************************************ 00:19:08.065 22:01:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.065 [2024-05-14 22:01:08.584376] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:19:08.065 [2024-05-14 22:01:08.584625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:08.643 EAL: TSC is not safe to use in SMP mode 00:19:08.643 EAL: TSC is not invariant 00:19:08.643 [2024-05-14 22:01:09.164742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.902 [2024-05-14 22:01:09.267435] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:08.902 [2024-05-14 22:01:09.270150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.902 [2024-05-14 22:01:09.270205] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:08.902 [2024-05-14 22:01:09.270220] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:08.902 [2024-05-14 22:01:09.270231] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:08.902 00:19:08.902 real 0m0.828s 00:19:08.902 user 0m0.203s 00:19:08.902 sys 0m0.622s 00:19:08.902 ************************************ 00:19:08.902 END TEST bdev_json_nonarray 00:19:08.902 ************************************ 00:19:08.902 22:01:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:08.902 22:01:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:19:08.902 22:01:09 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:19:08.902 00:19:08.902 real 0m23.804s 00:19:08.902 user 0m35.236s 00:19:08.902 sys 0m5.295s 00:19:08.902 22:01:09 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:08.902 22:01:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.902 ************************************ 00:19:08.902 END TEST blockdev_nvme 00:19:08.902 ************************************ 00:19:08.902 22:01:09 -- spdk/autotest.sh@209 -- # uname -s 00:19:08.902 22:01:09 -- spdk/autotest.sh@209 -- # [[ FreeBSD == Linux ]] 00:19:08.902 22:01:09 -- spdk/autotest.sh@212 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:08.902 22:01:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:08.902 22:01:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:08.902 22:01:09 -- common/autotest_common.sh@10 -- # set +x 00:19:09.159 ************************************ 00:19:09.159 START TEST nvme 00:19:09.159 ************************************ 00:19:09.159 22:01:09 nvme -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:09.159 * Looking for test storage... 
00:19:09.159 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:09.159 22:01:09 nvme -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:09.484 hw.nic_uio.bdfs="0:16:0" 00:19:09.484 22:01:09 nvme -- nvme/nvme.sh@79 -- # uname 00:19:09.484 22:01:09 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:19:09.484 22:01:09 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:09.484 22:01:09 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:19:09.484 22:01:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:09.484 22:01:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.484 ************************************ 00:19:09.484 START TEST nvme_reset 00:19:09.484 ************************************ 00:19:09.484 22:01:09 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:10.107 EAL: TSC is not safe to use in SMP mode 00:19:10.107 EAL: TSC is not invariant 00:19:10.107 [2024-05-14 22:01:10.667977] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:10.365 Initializing NVMe Controllers 00:19:10.365 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:10.365 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:10.365 00:19:10.365 real 0m0.850s 00:19:10.365 user 0m0.026s 00:19:10.365 sys 0m0.823s 00:19:10.365 22:01:10 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:10.365 22:01:10 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:10.365 ************************************ 00:19:10.365 END TEST nvme_reset 00:19:10.365 ************************************ 00:19:10.365 22:01:10 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:10.365 22:01:10 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:10.365 22:01:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:10.365 22:01:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.365 ************************************ 00:19:10.365 START TEST nvme_identify 00:19:10.365 ************************************ 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:19:10.365 22:01:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:10.365 22:01:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:10.365 22:01:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:10.365 22:01:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:19:10.365 22:01:10 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:19:10.365 22:01:10 nvme.nvme_identify -- 
nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:10.931 EAL: TSC is not safe to use in SMP mode 00:19:10.931 EAL: TSC is not invariant 00:19:10.931 [2024-05-14 22:01:11.391210] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:10.931 ===================================================== 00:19:10.931 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:10.931 ===================================================== 00:19:10.931 Controller Capabilities/Features 00:19:10.931 ================================ 00:19:10.931 Vendor ID: 1b36 00:19:10.931 Subsystem Vendor ID: 1af4 00:19:10.931 Serial Number: 12340 00:19:10.931 Model Number: QEMU NVMe Ctrl 00:19:10.931 Firmware Version: 8.0.0 00:19:10.931 Recommended Arb Burst: 6 00:19:10.931 IEEE OUI Identifier: 00 54 52 00:19:10.931 Multi-path I/O 00:19:10.931 May have multiple subsystem ports: No 00:19:10.931 May have multiple controllers: No 00:19:10.931 Associated with SR-IOV VF: No 00:19:10.931 Max Data Transfer Size: 524288 00:19:10.931 Max Number of Namespaces: 256 00:19:10.931 Max Number of I/O Queues: 64 00:19:10.931 NVMe Specification Version (VS): 1.4 00:19:10.931 NVMe Specification Version (Identify): 1.4 00:19:10.931 Maximum Queue Entries: 2048 00:19:10.931 Contiguous Queues Required: Yes 00:19:10.931 Arbitration Mechanisms Supported 00:19:10.931 Weighted Round Robin: Not Supported 00:19:10.931 Vendor Specific: Not Supported 00:19:10.931 Reset Timeout: 7500 ms 00:19:10.931 Doorbell Stride: 4 bytes 00:19:10.931 NVM Subsystem Reset: Not Supported 00:19:10.931 Command Sets Supported 00:19:10.931 NVM Command Set: Supported 00:19:10.931 Boot Partition: Not Supported 00:19:10.931 Memory Page Size Minimum: 4096 bytes 00:19:10.931 Memory Page Size Maximum: 65536 bytes 00:19:10.931 Persistent Memory Region: Not Supported 00:19:10.931 Optional Asynchronous Events Supported 00:19:10.931 Namespace Attribute Notices: Supported 00:19:10.931 Firmware Activation Notices: Not Supported 00:19:10.931 ANA Change Notices: Not Supported 00:19:10.931 PLE Aggregate Log Change Notices: Not Supported 00:19:10.931 LBA Status Info Alert Notices: Not Supported 00:19:10.931 EGE Aggregate Log Change Notices: Not Supported 00:19:10.931 Normal NVM Subsystem Shutdown event: Not Supported 00:19:10.931 Zone Descriptor Change Notices: Not Supported 00:19:10.931 Discovery Log Change Notices: Not Supported 00:19:10.931 Controller Attributes 00:19:10.931 128-bit Host Identifier: Not Supported 00:19:10.931 Non-Operational Permissive Mode: Not Supported 00:19:10.931 NVM Sets: Not Supported 00:19:10.931 Read Recovery Levels: Not Supported 00:19:10.931 Endurance Groups: Not Supported 00:19:10.931 Predictable Latency Mode: Not Supported 00:19:10.931 Traffic Based Keep ALive: Not Supported 00:19:10.931 Namespace Granularity: Not Supported 00:19:10.931 SQ Associations: Not Supported 00:19:10.931 UUID List: Not Supported 00:19:10.931 Multi-Domain Subsystem: Not Supported 00:19:10.931 Fixed Capacity Management: Not Supported 00:19:10.931 Variable Capacity Management: Not Supported 00:19:10.931 Delete Endurance Group: Not Supported 00:19:10.931 Delete NVM Set: Not Supported 00:19:10.931 Extended LBA Formats Supported: Supported 00:19:10.931 Flexible Data Placement Supported: Not Supported 00:19:10.931 00:19:10.931 Controller Memory Buffer Support 00:19:10.931 ================================ 00:19:10.931 Supported: No 00:19:10.931 00:19:10.931 Persistent Memory Region Support 00:19:10.931 
================================ 00:19:10.931 Supported: No 00:19:10.931 00:19:10.931 Admin Command Set Attributes 00:19:10.931 ============================ 00:19:10.931 Security Send/Receive: Not Supported 00:19:10.931 Format NVM: Supported 00:19:10.931 Firmware Activate/Download: Not Supported 00:19:10.931 Namespace Management: Supported 00:19:10.931 Device Self-Test: Not Supported 00:19:10.931 Directives: Supported 00:19:10.931 NVMe-MI: Not Supported 00:19:10.931 Virtualization Management: Not Supported 00:19:10.931 Doorbell Buffer Config: Supported 00:19:10.931 Get LBA Status Capability: Not Supported 00:19:10.931 Command & Feature Lockdown Capability: Not Supported 00:19:10.931 Abort Command Limit: 4 00:19:10.931 Async Event Request Limit: 4 00:19:10.931 Number of Firmware Slots: N/A 00:19:10.931 Firmware Slot 1 Read-Only: N/A 00:19:10.931 Firmware Activation Without Reset: N/A 00:19:10.932 Multiple Update Detection Support: N/A 00:19:10.932 Firmware Update Granularity: No Information Provided 00:19:10.932 Per-Namespace SMART Log: Yes 00:19:10.932 Asymmetric Namespace Access Log Page: Not Supported 00:19:10.932 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:10.932 Command Effects Log Page: Supported 00:19:10.932 Get Log Page Extended Data: Supported 00:19:10.932 Telemetry Log Pages: Not Supported 00:19:10.932 Persistent Event Log Pages: Not Supported 00:19:10.932 Supported Log Pages Log Page: May Support 00:19:10.932 Commands Supported & Effects Log Page: Not Supported 00:19:10.932 Feature Identifiers & Effects Log Page:May Support 00:19:10.932 NVMe-MI Commands & Effects Log Page: May Support 00:19:10.932 Data Area 4 for Telemetry Log: Not Supported 00:19:10.932 Error Log Page Entries Supported: 1 00:19:10.932 Keep Alive: Not Supported 00:19:10.932 00:19:10.932 NVM Command Set Attributes 00:19:10.932 ========================== 00:19:10.932 Submission Queue Entry Size 00:19:10.932 Max: 64 00:19:10.932 Min: 64 00:19:10.932 Completion Queue Entry Size 00:19:10.932 Max: 16 00:19:10.932 Min: 16 00:19:10.932 Number of Namespaces: 256 00:19:10.932 Compare Command: Supported 00:19:10.932 Write Uncorrectable Command: Not Supported 00:19:10.932 Dataset Management Command: Supported 00:19:10.932 Write Zeroes Command: Supported 00:19:10.932 Set Features Save Field: Supported 00:19:10.932 Reservations: Not Supported 00:19:10.932 Timestamp: Supported 00:19:10.932 Copy: Supported 00:19:10.932 Volatile Write Cache: Present 00:19:10.932 Atomic Write Unit (Normal): 1 00:19:10.932 Atomic Write Unit (PFail): 1 00:19:10.932 Atomic Compare & Write Unit: 1 00:19:10.932 Fused Compare & Write: Not Supported 00:19:10.932 Scatter-Gather List 00:19:10.932 SGL Command Set: Supported 00:19:10.932 SGL Keyed: Not Supported 00:19:10.932 SGL Bit Bucket Descriptor: Not Supported 00:19:10.932 SGL Metadata Pointer: Not Supported 00:19:10.932 Oversized SGL: Not Supported 00:19:10.932 SGL Metadata Address: Not Supported 00:19:10.932 SGL Offset: Not Supported 00:19:10.932 Transport SGL Data Block: Not Supported 00:19:10.932 Replay Protected Memory Block: Not Supported 00:19:10.932 00:19:10.932 Firmware Slot Information 00:19:10.932 ========================= 00:19:10.932 Active slot: 1 00:19:10.932 Slot 1 Firmware Revision: 1.0 00:19:10.932 00:19:10.932 00:19:10.932 Commands Supported and Effects 00:19:10.932 ============================== 00:19:10.932 Admin Commands 00:19:10.932 -------------- 00:19:10.932 Delete I/O Submission Queue (00h): Supported 00:19:10.932 Create I/O Submission Queue (01h): Supported 00:19:10.932 
Get Log Page (02h): Supported 00:19:10.932 Delete I/O Completion Queue (04h): Supported 00:19:10.932 Create I/O Completion Queue (05h): Supported 00:19:10.932 Identify (06h): Supported 00:19:10.932 Abort (08h): Supported 00:19:10.932 Set Features (09h): Supported 00:19:10.932 Get Features (0Ah): Supported 00:19:10.932 Asynchronous Event Request (0Ch): Supported 00:19:10.932 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:10.932 Directive Send (19h): Supported 00:19:10.932 Directive Receive (1Ah): Supported 00:19:10.932 Virtualization Management (1Ch): Supported 00:19:10.932 Doorbell Buffer Config (7Ch): Supported 00:19:10.932 Format NVM (80h): Supported LBA-Change 00:19:10.932 I/O Commands 00:19:10.932 ------------ 00:19:10.932 Flush (00h): Supported LBA-Change 00:19:10.932 Write (01h): Supported LBA-Change 00:19:10.932 Read (02h): Supported 00:19:10.932 Compare (05h): Supported 00:19:10.932 Write Zeroes (08h): Supported LBA-Change 00:19:10.932 Dataset Management (09h): Supported LBA-Change 00:19:10.932 Unknown (0Ch): Supported 00:19:10.932 Unknown (12h): Supported 00:19:10.932 Copy (19h): Supported LBA-Change 00:19:10.932 Unknown (1Dh): Supported LBA-Change 00:19:10.932 00:19:10.932 Error Log 00:19:10.932 ========= 00:19:10.932 00:19:10.932 Arbitration 00:19:10.932 =========== 00:19:10.932 Arbitration Burst: no limit 00:19:10.932 00:19:10.932 Power Management 00:19:10.932 ================ 00:19:10.932 Number of Power States: 1 00:19:10.932 Current Power State: Power State #0 00:19:10.932 Power State #0: 00:19:10.932 Max Power: 25.00 W 00:19:10.932 Non-Operational State: Operational 00:19:10.932 Entry Latency: 16 microseconds 00:19:10.932 Exit Latency: 4 microseconds 00:19:10.932 Relative Read Throughput: 0 00:19:10.932 Relative Read Latency: 0 00:19:10.932 Relative Write Throughput: 0 00:19:10.932 Relative Write Latency: 0 00:19:10.932 Idle Power: Not Reported 00:19:10.932 Active Power: Not Reported 00:19:10.932 Non-Operational Permissive Mode: Not Supported 00:19:10.932 00:19:10.932 Health Information 00:19:10.932 ================== 00:19:10.932 Critical Warnings: 00:19:10.932 Available Spare Space: OK 00:19:10.932 Temperature: OK 00:19:10.932 Device Reliability: OK 00:19:10.932 Read Only: No 00:19:10.932 Volatile Memory Backup: OK 00:19:10.932 Current Temperature: 323 Kelvin (50 Celsius) 00:19:10.932 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:10.932 Available Spare: 0% 00:19:10.932 Available Spare Threshold: 0% 00:19:10.932 Life Percentage Used: 0% 00:19:10.932 Data Units Read: 12318 00:19:10.932 Data Units Written: 12303 00:19:10.932 Host Read Commands: 290859 00:19:10.932 Host Write Commands: 290708 00:19:10.932 Controller Busy Time: 0 minutes 00:19:10.932 Power Cycles: 0 00:19:10.932 Power On Hours: 0 hours 00:19:10.932 Unsafe Shutdowns: 0 00:19:10.932 Unrecoverable Media Errors: 0 00:19:10.932 Lifetime Error Log Entries: 0 00:19:10.932 Warning Temperature Time: 0 minutes 00:19:10.932 Critical Temperature Time: 0 minutes 00:19:10.932 00:19:10.932 Number of Queues 00:19:10.932 ================ 00:19:10.932 Number of I/O Submission Queues: 64 00:19:10.932 Number of I/O Completion Queues: 64 00:19:10.932 00:19:10.932 ZNS Specific Controller Data 00:19:10.932 ============================ 00:19:10.932 Zone Append Size Limit: 0 00:19:10.932 00:19:10.932 00:19:10.932 Active Namespaces 00:19:10.932 ================= 00:19:10.932 Namespace ID:1 00:19:10.932 Error Recovery Timeout: Unlimited 00:19:10.932 Command Set Identifier: NVM (00h) 00:19:10.932 Deallocate: 
Supported 00:19:10.932 Deallocated/Unwritten Error: Supported 00:19:10.932 Deallocated Read Value: All 0x00 00:19:10.932 Deallocate in Write Zeroes: Not Supported 00:19:10.932 Deallocated Guard Field: 0xFFFF 00:19:10.932 Flush: Supported 00:19:10.932 Reservation: Not Supported 00:19:10.932 Namespace Sharing Capabilities: Private 00:19:10.932 Size (in LBAs): 1310720 (5GiB) 00:19:10.932 Capacity (in LBAs): 1310720 (5GiB) 00:19:10.932 Utilization (in LBAs): 1310720 (5GiB) 00:19:10.932 Thin Provisioning: Not Supported 00:19:10.932 Per-NS Atomic Units: No 00:19:10.932 Maximum Single Source Range Length: 128 00:19:10.932 Maximum Copy Length: 128 00:19:10.932 Maximum Source Range Count: 128 00:19:10.932 NGUID/EUI64 Never Reused: No 00:19:10.933 Namespace Write Protected: No 00:19:10.933 Number of LBA Formats: 8 00:19:10.933 Current LBA Format: LBA Format #04 00:19:10.933 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:10.933 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:10.933 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:10.933 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:10.933 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:10.933 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:10.933 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:10.933 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:10.933 00:19:10.933 22:01:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:10.933 22:01:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:11.497 EAL: TSC is not safe to use in SMP mode 00:19:11.497 EAL: TSC is not invariant 00:19:11.497 [2024-05-14 22:01:12.050496] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:11.497 ===================================================== 00:19:11.497 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:11.497 ===================================================== 00:19:11.497 Controller Capabilities/Features 00:19:11.497 ================================ 00:19:11.497 Vendor ID: 1b36 00:19:11.497 Subsystem Vendor ID: 1af4 00:19:11.497 Serial Number: 12340 00:19:11.497 Model Number: QEMU NVMe Ctrl 00:19:11.497 Firmware Version: 8.0.0 00:19:11.497 Recommended Arb Burst: 6 00:19:11.497 IEEE OUI Identifier: 00 54 52 00:19:11.497 Multi-path I/O 00:19:11.497 May have multiple subsystem ports: No 00:19:11.497 May have multiple controllers: No 00:19:11.497 Associated with SR-IOV VF: No 00:19:11.497 Max Data Transfer Size: 524288 00:19:11.497 Max Number of Namespaces: 256 00:19:11.497 Max Number of I/O Queues: 64 00:19:11.497 NVMe Specification Version (VS): 1.4 00:19:11.497 NVMe Specification Version (Identify): 1.4 00:19:11.497 Maximum Queue Entries: 2048 00:19:11.497 Contiguous Queues Required: Yes 00:19:11.497 Arbitration Mechanisms Supported 00:19:11.497 Weighted Round Robin: Not Supported 00:19:11.497 Vendor Specific: Not Supported 00:19:11.497 Reset Timeout: 7500 ms 00:19:11.497 Doorbell Stride: 4 bytes 00:19:11.497 NVM Subsystem Reset: Not Supported 00:19:11.497 Command Sets Supported 00:19:11.497 NVM Command Set: Supported 00:19:11.497 Boot Partition: Not Supported 00:19:11.497 Memory Page Size Minimum: 4096 bytes 00:19:11.497 Memory Page Size Maximum: 65536 bytes 00:19:11.497 Persistent Memory Region: Not Supported 00:19:11.497 Optional Asynchronous Events Supported 00:19:11.497 Namespace Attribute Notices: Supported 00:19:11.497 Firmware 
Activation Notices: Not Supported 00:19:11.497 ANA Change Notices: Not Supported 00:19:11.497 PLE Aggregate Log Change Notices: Not Supported 00:19:11.497 LBA Status Info Alert Notices: Not Supported 00:19:11.497 EGE Aggregate Log Change Notices: Not Supported 00:19:11.497 Normal NVM Subsystem Shutdown event: Not Supported 00:19:11.497 Zone Descriptor Change Notices: Not Supported 00:19:11.497 Discovery Log Change Notices: Not Supported 00:19:11.498 Controller Attributes 00:19:11.498 128-bit Host Identifier: Not Supported 00:19:11.498 Non-Operational Permissive Mode: Not Supported 00:19:11.498 NVM Sets: Not Supported 00:19:11.498 Read Recovery Levels: Not Supported 00:19:11.498 Endurance Groups: Not Supported 00:19:11.498 Predictable Latency Mode: Not Supported 00:19:11.498 Traffic Based Keep ALive: Not Supported 00:19:11.498 Namespace Granularity: Not Supported 00:19:11.498 SQ Associations: Not Supported 00:19:11.498 UUID List: Not Supported 00:19:11.498 Multi-Domain Subsystem: Not Supported 00:19:11.498 Fixed Capacity Management: Not Supported 00:19:11.498 Variable Capacity Management: Not Supported 00:19:11.498 Delete Endurance Group: Not Supported 00:19:11.498 Delete NVM Set: Not Supported 00:19:11.498 Extended LBA Formats Supported: Supported 00:19:11.498 Flexible Data Placement Supported: Not Supported 00:19:11.498 00:19:11.498 Controller Memory Buffer Support 00:19:11.498 ================================ 00:19:11.498 Supported: No 00:19:11.498 00:19:11.498 Persistent Memory Region Support 00:19:11.498 ================================ 00:19:11.498 Supported: No 00:19:11.498 00:19:11.498 Admin Command Set Attributes 00:19:11.498 ============================ 00:19:11.498 Security Send/Receive: Not Supported 00:19:11.498 Format NVM: Supported 00:19:11.498 Firmware Activate/Download: Not Supported 00:19:11.498 Namespace Management: Supported 00:19:11.498 Device Self-Test: Not Supported 00:19:11.498 Directives: Supported 00:19:11.498 NVMe-MI: Not Supported 00:19:11.498 Virtualization Management: Not Supported 00:19:11.498 Doorbell Buffer Config: Supported 00:19:11.498 Get LBA Status Capability: Not Supported 00:19:11.498 Command & Feature Lockdown Capability: Not Supported 00:19:11.498 Abort Command Limit: 4 00:19:11.498 Async Event Request Limit: 4 00:19:11.498 Number of Firmware Slots: N/A 00:19:11.498 Firmware Slot 1 Read-Only: N/A 00:19:11.498 Firmware Activation Without Reset: N/A 00:19:11.498 Multiple Update Detection Support: N/A 00:19:11.498 Firmware Update Granularity: No Information Provided 00:19:11.498 Per-Namespace SMART Log: Yes 00:19:11.498 Asymmetric Namespace Access Log Page: Not Supported 00:19:11.498 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:11.498 Command Effects Log Page: Supported 00:19:11.498 Get Log Page Extended Data: Supported 00:19:11.498 Telemetry Log Pages: Not Supported 00:19:11.498 Persistent Event Log Pages: Not Supported 00:19:11.498 Supported Log Pages Log Page: May Support 00:19:11.498 Commands Supported & Effects Log Page: Not Supported 00:19:11.498 Feature Identifiers & Effects Log Page:May Support 00:19:11.498 NVMe-MI Commands & Effects Log Page: May Support 00:19:11.498 Data Area 4 for Telemetry Log: Not Supported 00:19:11.498 Error Log Page Entries Supported: 1 00:19:11.498 Keep Alive: Not Supported 00:19:11.498 00:19:11.498 NVM Command Set Attributes 00:19:11.498 ========================== 00:19:11.498 Submission Queue Entry Size 00:19:11.498 Max: 64 00:19:11.498 Min: 64 00:19:11.498 Completion Queue Entry Size 00:19:11.498 Max: 16 
00:19:11.498 Min: 16 00:19:11.498 Number of Namespaces: 256 00:19:11.498 Compare Command: Supported 00:19:11.498 Write Uncorrectable Command: Not Supported 00:19:11.498 Dataset Management Command: Supported 00:19:11.498 Write Zeroes Command: Supported 00:19:11.498 Set Features Save Field: Supported 00:19:11.498 Reservations: Not Supported 00:19:11.498 Timestamp: Supported 00:19:11.498 Copy: Supported 00:19:11.498 Volatile Write Cache: Present 00:19:11.498 Atomic Write Unit (Normal): 1 00:19:11.498 Atomic Write Unit (PFail): 1 00:19:11.498 Atomic Compare & Write Unit: 1 00:19:11.498 Fused Compare & Write: Not Supported 00:19:11.498 Scatter-Gather List 00:19:11.498 SGL Command Set: Supported 00:19:11.498 SGL Keyed: Not Supported 00:19:11.498 SGL Bit Bucket Descriptor: Not Supported 00:19:11.498 SGL Metadata Pointer: Not Supported 00:19:11.498 Oversized SGL: Not Supported 00:19:11.498 SGL Metadata Address: Not Supported 00:19:11.498 SGL Offset: Not Supported 00:19:11.498 Transport SGL Data Block: Not Supported 00:19:11.498 Replay Protected Memory Block: Not Supported 00:19:11.498 00:19:11.498 Firmware Slot Information 00:19:11.498 ========================= 00:19:11.498 Active slot: 1 00:19:11.498 Slot 1 Firmware Revision: 1.0 00:19:11.498 00:19:11.498 00:19:11.498 Commands Supported and Effects 00:19:11.498 ============================== 00:19:11.498 Admin Commands 00:19:11.498 -------------- 00:19:11.498 Delete I/O Submission Queue (00h): Supported 00:19:11.498 Create I/O Submission Queue (01h): Supported 00:19:11.498 Get Log Page (02h): Supported 00:19:11.498 Delete I/O Completion Queue (04h): Supported 00:19:11.498 Create I/O Completion Queue (05h): Supported 00:19:11.498 Identify (06h): Supported 00:19:11.498 Abort (08h): Supported 00:19:11.498 Set Features (09h): Supported 00:19:11.498 Get Features (0Ah): Supported 00:19:11.498 Asynchronous Event Request (0Ch): Supported 00:19:11.498 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:11.498 Directive Send (19h): Supported 00:19:11.498 Directive Receive (1Ah): Supported 00:19:11.498 Virtualization Management (1Ch): Supported 00:19:11.498 Doorbell Buffer Config (7Ch): Supported 00:19:11.498 Format NVM (80h): Supported LBA-Change 00:19:11.498 I/O Commands 00:19:11.498 ------------ 00:19:11.498 Flush (00h): Supported LBA-Change 00:19:11.498 Write (01h): Supported LBA-Change 00:19:11.498 Read (02h): Supported 00:19:11.498 Compare (05h): Supported 00:19:11.498 Write Zeroes (08h): Supported LBA-Change 00:19:11.498 Dataset Management (09h): Supported LBA-Change 00:19:11.498 Unknown (0Ch): Supported 00:19:11.498 Unknown (12h): Supported 00:19:11.498 Copy (19h): Supported LBA-Change 00:19:11.498 Unknown (1Dh): Supported LBA-Change 00:19:11.498 00:19:11.498 Error Log 00:19:11.498 ========= 00:19:11.498 00:19:11.498 Arbitration 00:19:11.498 =========== 00:19:11.498 Arbitration Burst: no limit 00:19:11.498 00:19:11.498 Power Management 00:19:11.498 ================ 00:19:11.498 Number of Power States: 1 00:19:11.498 Current Power State: Power State #0 00:19:11.498 Power State #0: 00:19:11.498 Max Power: 25.00 W 00:19:11.498 Non-Operational State: Operational 00:19:11.498 Entry Latency: 16 microseconds 00:19:11.498 Exit Latency: 4 microseconds 00:19:11.498 Relative Read Throughput: 0 00:19:11.498 Relative Read Latency: 0 00:19:11.498 Relative Write Throughput: 0 00:19:11.498 Relative Write Latency: 0 00:19:11.757 Idle Power: Not Reported 00:19:11.757 Active Power: Not Reported 00:19:11.757 Non-Operational Permissive Mode: Not Supported 
00:19:11.757 00:19:11.757 Health Information 00:19:11.757 ================== 00:19:11.757 Critical Warnings: 00:19:11.757 Available Spare Space: OK 00:19:11.757 Temperature: OK 00:19:11.757 Device Reliability: OK 00:19:11.757 Read Only: No 00:19:11.757 Volatile Memory Backup: OK 00:19:11.757 Current Temperature: 323 Kelvin (50 Celsius) 00:19:11.757 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:11.757 Available Spare: 0% 00:19:11.757 Available Spare Threshold: 0% 00:19:11.757 Life Percentage Used: 0% 00:19:11.757 Data Units Read: 12318 00:19:11.757 Data Units Written: 12303 00:19:11.757 Host Read Commands: 290859 00:19:11.757 Host Write Commands: 290708 00:19:11.757 Controller Busy Time: 0 minutes 00:19:11.757 Power Cycles: 0 00:19:11.757 Power On Hours: 0 hours 00:19:11.757 Unsafe Shutdowns: 0 00:19:11.757 Unrecoverable Media Errors: 0 00:19:11.757 Lifetime Error Log Entries: 0 00:19:11.757 Warning Temperature Time: 0 minutes 00:19:11.757 Critical Temperature Time: 0 minutes 00:19:11.757 00:19:11.757 Number of Queues 00:19:11.757 ================ 00:19:11.757 Number of I/O Submission Queues: 64 00:19:11.757 Number of I/O Completion Queues: 64 00:19:11.757 00:19:11.757 ZNS Specific Controller Data 00:19:11.757 ============================ 00:19:11.757 Zone Append Size Limit: 0 00:19:11.757 00:19:11.757 00:19:11.757 Active Namespaces 00:19:11.757 ================= 00:19:11.757 Namespace ID:1 00:19:11.757 Error Recovery Timeout: Unlimited 00:19:11.757 Command Set Identifier: NVM (00h) 00:19:11.757 Deallocate: Supported 00:19:11.757 Deallocated/Unwritten Error: Supported 00:19:11.757 Deallocated Read Value: All 0x00 00:19:11.757 Deallocate in Write Zeroes: Not Supported 00:19:11.757 Deallocated Guard Field: 0xFFFF 00:19:11.757 Flush: Supported 00:19:11.757 Reservation: Not Supported 00:19:11.757 Namespace Sharing Capabilities: Private 00:19:11.757 Size (in LBAs): 1310720 (5GiB) 00:19:11.757 Capacity (in LBAs): 1310720 (5GiB) 00:19:11.757 Utilization (in LBAs): 1310720 (5GiB) 00:19:11.757 Thin Provisioning: Not Supported 00:19:11.757 Per-NS Atomic Units: No 00:19:11.757 Maximum Single Source Range Length: 128 00:19:11.757 Maximum Copy Length: 128 00:19:11.757 Maximum Source Range Count: 128 00:19:11.757 NGUID/EUI64 Never Reused: No 00:19:11.757 Namespace Write Protected: No 00:19:11.757 Number of LBA Formats: 8 00:19:11.757 Current LBA Format: LBA Format #04 00:19:11.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:11.757 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:11.757 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:11.757 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:11.757 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:11.757 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:11.757 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:11.757 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:11.757 00:19:11.757 00:19:11.757 real 0m1.335s 00:19:11.757 user 0m0.052s 00:19:11.757 sys 0m1.299s 00:19:11.757 22:01:12 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:11.757 22:01:12 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:11.757 ************************************ 00:19:11.757 END TEST nvme_identify 00:19:11.757 ************************************ 00:19:11.757 22:01:12 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:11.757 22:01:12 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:11.757 22:01:12 nvme -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:19:11.757 22:01:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.757 ************************************ 00:19:11.757 START TEST nvme_perf 00:19:11.757 ************************************ 00:19:11.757 22:01:12 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:19:11.757 22:01:12 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:12.324 EAL: TSC is not safe to use in SMP mode 00:19:12.324 EAL: TSC is not invariant 00:19:12.324 [2024-05-14 22:01:12.701222] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:13.256 Initializing NVMe Controllers 00:19:13.256 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:13.256 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:13.256 Initialization complete. Launching workers. 00:19:13.256 ======================================================== 00:19:13.256 Latency(us) 00:19:13.256 Device Information : IOPS MiB/s Average min max 00:19:13.256 PCIE (0000:00:10.0) NSID 1 from core 0: 86400.03 1012.50 1481.32 182.35 7567.24 00:19:13.256 ======================================================== 00:19:13.256 Total : 86400.03 1012.50 1481.32 182.35 7567.24 00:19:13.256 00:19:13.256 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:13.256 ================================================================================= 00:19:13.256 1.00000% : 1228.795us 00:19:13.256 10.00000% : 1310.715us 00:19:13.256 25.00000% : 1362.846us 00:19:13.256 50.00000% : 1437.318us 00:19:13.256 75.00000% : 1534.132us 00:19:13.256 90.00000% : 1668.183us 00:19:13.256 95.00000% : 1802.233us 00:19:13.256 98.00000% : 2055.439us 00:19:13.256 99.00000% : 2368.223us 00:19:13.256 99.50000% : 2606.535us 00:19:13.256 99.90000% : 7357.877us 00:19:13.256 99.99000% : 7566.399us 00:19:13.256 99.99900% : 7596.188us 00:19:13.256 99.99990% : 7596.188us 00:19:13.256 99.99999% : 7596.188us 00:19:13.256 00:19:13.256 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:13.256 ============================================================================== 00:19:13.256 Range in us Cumulative IO count 00:19:13.256 181.527 - 182.457: 0.0012% ( 1) 00:19:13.256 202.006 - 202.937: 0.0023% ( 1) 00:19:13.256 202.937 - 203.868: 0.0035% ( 1) 00:19:13.256 203.868 - 204.799: 0.0046% ( 1) 00:19:13.256 204.799 - 205.730: 0.0058% ( 1) 00:19:13.256 207.592 - 208.523: 0.0069% ( 1) 00:19:13.256 209.454 - 210.385: 0.0081% ( 1) 00:19:13.256 381.671 - 383.533: 0.0093% ( 1) 00:19:13.256 383.533 - 385.395: 0.0104% ( 1) 00:19:13.256 385.395 - 387.257: 0.0116% ( 1) 00:19:13.256 387.257 - 389.118: 0.0127% ( 1) 00:19:13.256 389.118 - 390.980: 0.0139% ( 1) 00:19:13.256 390.980 - 392.842: 0.0150% ( 1) 00:19:13.256 430.078 - 431.940: 0.0162% ( 1) 00:19:13.256 431.940 - 433.802: 0.0185% ( 2) 00:19:13.256 433.802 - 435.664: 0.0197% ( 1) 00:19:13.256 435.664 - 437.526: 0.0208% ( 1) 00:19:13.256 437.526 - 439.387: 0.0231% ( 2) 00:19:13.256 439.387 - 441.249: 0.0243% ( 1) 00:19:13.256 441.249 - 443.111: 0.0254% ( 1) 00:19:13.256 443.111 - 444.973: 0.0278% ( 2) 00:19:13.256 444.973 - 446.835: 0.0289% ( 1) 00:19:13.256 446.835 - 448.696: 0.0301% ( 1) 00:19:13.256 448.696 - 450.558: 0.0312% ( 1) 00:19:13.256 450.558 - 452.420: 0.0335% ( 2) 00:19:13.256 452.420 - 454.282: 0.0347% ( 1) 00:19:13.256 454.282 - 456.144: 0.0358% ( 1) 00:19:13.256 456.144 - 458.005: 0.0382% ( 2) 00:19:13.256 459.867 - 461.729: 0.0393% 
( 1) 00:19:13.256 495.242 - 498.965: 0.0405% ( 1) 00:19:13.256 521.307 - 525.031: 0.0439% ( 3) 00:19:13.256 636.739 - 640.463: 0.0451% ( 1) 00:19:13.256 640.463 - 644.187: 0.0474% ( 2) 00:19:13.257 644.187 - 647.910: 0.0486% ( 1) 00:19:13.257 647.910 - 651.634: 0.0509% ( 2) 00:19:13.257 651.634 - 655.357: 0.0532% ( 2) 00:19:13.257 692.594 - 696.317: 0.0544% ( 1) 00:19:13.257 774.513 - 778.237: 0.0590% ( 4) 00:19:13.257 778.237 - 781.961: 0.0624% ( 3) 00:19:13.257 781.961 - 785.684: 0.0659% ( 3) 00:19:13.257 785.684 - 789.408: 0.0694% ( 3) 00:19:13.257 789.408 - 793.131: 0.0740% ( 4) 00:19:13.257 793.131 - 796.855: 0.0775% ( 3) 00:19:13.257 796.855 - 800.579: 0.0821% ( 4) 00:19:13.257 800.579 - 804.302: 0.0856% ( 3) 00:19:13.257 804.302 - 808.026: 0.0890% ( 3) 00:19:13.257 808.026 - 811.750: 0.0937% ( 4) 00:19:13.257 811.750 - 815.473: 0.0960% ( 2) 00:19:13.257 983.036 - 990.483: 0.0971% ( 1) 00:19:13.257 990.483 - 997.931: 0.1018% ( 4) 00:19:13.257 997.931 - 1005.378: 0.1064% ( 4) 00:19:13.257 1005.378 - 1012.825: 0.1075% ( 1) 00:19:13.257 1131.981 - 1139.428: 0.1087% ( 1) 00:19:13.257 1139.428 - 1146.875: 0.1099% ( 1) 00:19:13.257 1146.875 - 1154.323: 0.1133% ( 3) 00:19:13.257 1154.323 - 1161.770: 0.1214% ( 7) 00:19:13.257 1161.770 - 1169.217: 0.1330% ( 10) 00:19:13.257 1169.217 - 1176.664: 0.1515% ( 16) 00:19:13.257 1176.664 - 1184.112: 0.1804% ( 25) 00:19:13.257 1184.112 - 1191.559: 0.2394% ( 51) 00:19:13.257 1191.559 - 1199.006: 0.3238% ( 73) 00:19:13.257 1199.006 - 1206.453: 0.4394% ( 100) 00:19:13.257 1206.453 - 1213.901: 0.6118% ( 149) 00:19:13.257 1213.901 - 1221.348: 0.8280% ( 187) 00:19:13.257 1221.348 - 1228.795: 1.1426% ( 272) 00:19:13.257 1228.795 - 1236.242: 1.5057% ( 314) 00:19:13.257 1236.242 - 1243.690: 1.9555% ( 389) 00:19:13.257 1243.690 - 1251.137: 2.5130% ( 482) 00:19:13.257 1251.137 - 1258.584: 3.1513% ( 552) 00:19:13.257 1258.584 - 1266.031: 3.8845% ( 634) 00:19:13.257 1266.031 - 1273.479: 4.7333% ( 734) 00:19:13.257 1273.479 - 1280.926: 5.7730% ( 899) 00:19:13.257 1280.926 - 1288.373: 6.9017% ( 976) 00:19:13.257 1288.373 - 1295.820: 8.1714% ( 1098) 00:19:13.257 1295.820 - 1303.268: 9.5638% ( 1204) 00:19:13.257 1303.268 - 1310.715: 11.1273% ( 1352) 00:19:13.257 1310.715 - 1318.162: 12.7984% ( 1445) 00:19:13.257 1318.162 - 1325.609: 14.6082% ( 1565) 00:19:13.257 1325.609 - 1333.057: 16.6193% ( 1739) 00:19:13.257 1333.057 - 1340.504: 18.6766% ( 1779) 00:19:13.257 1340.504 - 1347.951: 20.8981% ( 1921) 00:19:13.257 1347.951 - 1355.398: 23.1925% ( 1984) 00:19:13.257 1355.398 - 1362.846: 25.5944% ( 2077) 00:19:13.257 1362.846 - 1370.293: 28.0576% ( 2130) 00:19:13.257 1370.293 - 1377.740: 30.5845% ( 2185) 00:19:13.257 1377.740 - 1385.187: 33.1633% ( 2230) 00:19:13.257 1385.187 - 1392.635: 35.7665% ( 2251) 00:19:13.257 1392.635 - 1400.082: 38.3812% ( 2261) 00:19:13.257 1400.082 - 1407.529: 40.9959% ( 2261) 00:19:13.257 1407.529 - 1414.976: 43.6257% ( 2274) 00:19:13.257 1414.976 - 1422.423: 46.2277% ( 2250) 00:19:13.257 1422.423 - 1429.871: 48.7649% ( 2194) 00:19:13.257 1429.871 - 1437.318: 51.2212% ( 2124) 00:19:13.257 1437.318 - 1444.765: 53.6324% ( 2085) 00:19:13.257 1444.765 - 1452.212: 55.9511% ( 2005) 00:19:13.257 1452.212 - 1459.660: 58.2027% ( 1947) 00:19:13.257 1459.660 - 1467.107: 60.4242% ( 1921) 00:19:13.257 1467.107 - 1474.554: 62.5231% ( 1815) 00:19:13.257 1474.554 - 1482.001: 64.5666% ( 1767) 00:19:13.257 1482.001 - 1489.449: 66.5464% ( 1712) 00:19:13.257 1489.449 - 1496.896: 68.3863% ( 1591) 00:19:13.257 1496.896 - 1504.343: 70.0990% ( 1481) 00:19:13.257 
1504.343 - 1511.790: 71.7481% ( 1426) 00:19:13.257 1511.790 - 1519.238: 73.2676% ( 1314) 00:19:13.257 1519.238 - 1526.685: 74.6912% ( 1231) 00:19:13.257 1526.685 - 1534.132: 76.0281% ( 1156) 00:19:13.257 1534.132 - 1541.579: 77.2898% ( 1091) 00:19:13.257 1541.579 - 1549.027: 78.4659% ( 1017) 00:19:13.257 1549.027 - 1556.474: 79.6165% ( 995) 00:19:13.257 1556.474 - 1563.921: 80.6619% ( 904) 00:19:13.257 1563.921 - 1571.368: 81.6854% ( 885) 00:19:13.257 1571.368 - 1578.816: 82.6464% ( 831) 00:19:13.257 1578.816 - 1586.263: 83.5415% ( 774) 00:19:13.257 1586.263 - 1593.710: 84.4169% ( 757) 00:19:13.257 1593.710 - 1601.157: 85.2195% ( 694) 00:19:13.257 1601.157 - 1608.605: 85.9353% ( 619) 00:19:13.257 1608.605 - 1616.052: 86.6211% ( 593) 00:19:13.257 1616.052 - 1623.499: 87.2560% ( 549) 00:19:13.257 1623.499 - 1630.946: 87.8492% ( 513) 00:19:13.257 1630.946 - 1638.394: 88.3905% ( 468) 00:19:13.257 1638.394 - 1645.841: 88.8889% ( 431) 00:19:13.257 1645.841 - 1653.288: 89.3700% ( 416) 00:19:13.257 1653.288 - 1660.735: 89.8337% ( 401) 00:19:13.257 1660.735 - 1668.183: 90.2963% ( 400) 00:19:13.257 1668.183 - 1675.630: 90.6837% ( 335) 00:19:13.257 1675.630 - 1683.077: 91.0376% ( 306) 00:19:13.257 1683.077 - 1690.524: 91.3787% ( 295) 00:19:13.257 1690.524 - 1697.972: 91.6921% ( 271) 00:19:13.257 1697.972 - 1705.419: 91.9951% ( 262) 00:19:13.257 1705.419 - 1712.866: 92.2923% ( 257) 00:19:13.257 1712.866 - 1720.313: 92.5675% ( 238) 00:19:13.257 1720.313 - 1727.760: 92.8439% ( 239) 00:19:13.257 1727.760 - 1735.208: 93.0949% ( 217) 00:19:13.257 1735.208 - 1742.655: 93.3470% ( 218) 00:19:13.257 1742.655 - 1750.102: 93.5690% ( 192) 00:19:13.257 1750.102 - 1757.549: 93.7934% ( 194) 00:19:13.257 1757.549 - 1764.997: 94.0096% ( 187) 00:19:13.257 1764.997 - 1772.444: 94.2340% ( 194) 00:19:13.257 1772.444 - 1779.891: 94.4468% ( 184) 00:19:13.257 1779.891 - 1787.338: 94.6445% ( 171) 00:19:13.257 1787.338 - 1794.786: 94.8307% ( 161) 00:19:13.257 1794.786 - 1802.233: 95.0238% ( 167) 00:19:13.257 1802.233 - 1809.680: 95.2123% ( 163) 00:19:13.257 1809.680 - 1817.127: 95.3950% ( 158) 00:19:13.257 1817.127 - 1824.575: 95.5650% ( 147) 00:19:13.257 1824.575 - 1832.022: 95.7258% ( 139) 00:19:13.257 1832.022 - 1839.469: 95.8877% ( 140) 00:19:13.257 1839.469 - 1846.916: 96.0450% ( 136) 00:19:13.257 1846.916 - 1854.364: 96.1826% ( 119) 00:19:13.257 1854.364 - 1861.811: 96.3144% ( 114) 00:19:13.257 1861.811 - 1869.258: 96.4393% ( 108) 00:19:13.257 1869.258 - 1876.705: 96.5723% ( 115) 00:19:13.257 1876.705 - 1884.153: 96.6845% ( 97) 00:19:13.257 1884.153 - 1891.600: 96.7862% ( 88) 00:19:13.257 1891.600 - 1899.047: 96.8788% ( 80) 00:19:13.257 1899.047 - 1906.494: 96.9759% ( 84) 00:19:13.257 1906.494 - 1921.389: 97.1598% ( 159) 00:19:13.257 1921.389 - 1936.283: 97.3182% ( 137) 00:19:13.257 1936.283 - 1951.178: 97.4431% ( 108) 00:19:13.257 1951.178 - 1966.072: 97.5692% ( 109) 00:19:13.257 1966.072 - 1980.967: 97.6825% ( 98) 00:19:13.257 1980.967 - 1995.861: 97.7854% ( 89) 00:19:13.257 1995.861 - 2010.756: 97.8560% ( 61) 00:19:13.257 2010.756 - 2025.650: 97.9288% ( 63) 00:19:13.257 2025.650 - 2040.545: 97.9889% ( 52) 00:19:13.257 2040.545 - 2055.439: 98.0502% ( 53) 00:19:13.257 2055.439 - 2070.334: 98.0907% ( 35) 00:19:13.257 2070.334 - 2085.228: 98.1312% ( 35) 00:19:13.257 2085.228 - 2100.123: 98.1693% ( 33) 00:19:13.257 2100.123 - 2115.017: 98.2052% ( 31) 00:19:13.257 2115.017 - 2129.912: 98.2434% ( 33) 00:19:13.257 2129.912 - 2144.806: 98.2769% ( 29) 00:19:13.257 2144.806 - 2159.701: 98.3093% ( 28) 00:19:13.257 2159.701 - 
2174.595: 98.3417% ( 28) 00:19:13.257 2174.595 - 2189.490: 98.3787% ( 32) 00:19:13.257 2189.490 - 2204.384: 98.4342% ( 48) 00:19:13.257 2204.384 - 2219.279: 98.4885% ( 47) 00:19:13.257 2219.279 - 2234.173: 98.5498% ( 53) 00:19:13.257 2234.173 - 2249.068: 98.6169% ( 58) 00:19:13.257 2249.068 - 2263.962: 98.6828% ( 57) 00:19:13.257 2263.962 - 2278.856: 98.7406% ( 50) 00:19:13.257 2278.856 - 2293.751: 98.7985% ( 50) 00:19:13.257 2293.751 - 2308.645: 98.8389% ( 35) 00:19:13.257 2308.645 - 2323.540: 98.8794% ( 35) 00:19:13.257 2323.540 - 2338.434: 98.9245% ( 39) 00:19:13.257 2338.434 - 2353.329: 98.9650% ( 35) 00:19:13.257 2353.329 - 2368.223: 99.0124% ( 41) 00:19:13.257 2368.223 - 2383.118: 99.0563% ( 38) 00:19:13.257 2383.118 - 2398.012: 99.0945% ( 33) 00:19:13.257 2398.012 - 2412.907: 99.1315% ( 32) 00:19:13.257 2412.907 - 2427.801: 99.1650% ( 29) 00:19:13.257 2427.801 - 2442.696: 99.2009% ( 31) 00:19:13.257 2442.696 - 2457.590: 99.2437% ( 37) 00:19:13.257 2457.590 - 2472.485: 99.2795% ( 31) 00:19:13.257 2472.485 - 2487.379: 99.3119% ( 28) 00:19:13.257 2487.379 - 2502.274: 99.3466% ( 30) 00:19:13.257 2502.274 - 2517.168: 99.3813% ( 30) 00:19:13.257 2517.168 - 2532.063: 99.4102% ( 25) 00:19:13.257 2532.063 - 2546.957: 99.4368% ( 23) 00:19:13.257 2546.957 - 2561.852: 99.4576% ( 18) 00:19:13.257 2561.852 - 2576.746: 99.4750% ( 15) 00:19:13.257 2576.746 - 2591.641: 99.4900% ( 13) 00:19:13.257 2591.641 - 2606.535: 99.5062% ( 14) 00:19:13.257 2606.535 - 2621.430: 99.5212% ( 13) 00:19:13.257 2621.430 - 2636.324: 99.5374% ( 14) 00:19:13.257 2636.324 - 2651.219: 99.5536% ( 14) 00:19:13.257 2651.219 - 2666.113: 99.5698% ( 14) 00:19:13.257 2666.113 - 2681.008: 99.5848% ( 13) 00:19:13.257 2681.008 - 2695.902: 99.6010% ( 14) 00:19:13.257 2695.902 - 2710.797: 99.6057% ( 4) 00:19:13.257 2710.797 - 2725.691: 99.6080% ( 2) 00:19:13.257 2725.691 - 2740.586: 99.6103% ( 2) 00:19:13.257 2740.586 - 2755.480: 99.6126% ( 2) 00:19:13.257 2755.480 - 2770.375: 99.6161% ( 3) 00:19:13.257 2770.375 - 2785.269: 99.6184% ( 2) 00:19:13.257 2785.269 - 2800.164: 99.6195% ( 1) 00:19:13.257 2829.953 - 2844.847: 99.6230% ( 3) 00:19:13.257 2844.847 - 2859.741: 99.6253% ( 2) 00:19:13.257 2859.741 - 2874.636: 99.6265% ( 1) 00:19:13.257 2874.636 - 2889.530: 99.6288% ( 2) 00:19:13.257 2889.530 - 2904.425: 99.6323% ( 3) 00:19:13.257 2904.425 - 2919.319: 99.6334% ( 1) 00:19:13.257 2919.319 - 2934.214: 99.6369% ( 3) 00:19:13.257 2934.214 - 2949.108: 99.6403% ( 3) 00:19:13.257 2949.108 - 2964.003: 99.6438% ( 3) 00:19:13.257 2964.003 - 2978.897: 99.6496% ( 5) 00:19:13.257 2978.897 - 2993.792: 99.6612% ( 10) 00:19:13.257 2993.792 - 3008.686: 99.6658% ( 4) 00:19:13.257 3008.686 - 3023.581: 99.6693% ( 3) 00:19:13.257 3023.581 - 3038.475: 99.6716% ( 2) 00:19:13.257 3038.475 - 3053.370: 99.6750% ( 3) 00:19:13.257 3053.370 - 3068.264: 99.6762% ( 1) 00:19:13.257 3068.264 - 3083.159: 99.6843% ( 7) 00:19:13.257 3083.159 - 3098.053: 99.6889% ( 4) 00:19:13.257 3098.053 - 3112.948: 99.6959% ( 6) 00:19:13.257 3112.948 - 3127.842: 99.7028% ( 6) 00:19:13.257 3127.842 - 3142.737: 99.7086% ( 5) 00:19:13.257 3142.737 - 3157.631: 99.7155% ( 6) 00:19:13.257 3157.631 - 3172.526: 99.7190% ( 3) 00:19:13.257 3172.526 - 3187.420: 99.7213% ( 2) 00:19:13.257 3261.893 - 3276.787: 99.7225% ( 1) 00:19:13.257 3276.787 - 3291.682: 99.7282% ( 5) 00:19:13.257 3291.682 - 3306.576: 99.7340% ( 5) 00:19:13.257 3306.576 - 3321.471: 99.7410% ( 6) 00:19:13.257 3321.471 - 3336.365: 99.7479% ( 6) 00:19:13.257 3336.365 - 3351.260: 99.7548% ( 6) 00:19:13.257 3351.260 - 3366.154: 
99.7618% ( 6) 00:19:13.257 3366.154 - 3381.049: 99.7699% ( 7) 00:19:13.257 3381.049 - 3395.943: 99.7768% ( 6) 00:19:13.257 3395.943 - 3410.837: 99.7907% ( 12) 00:19:13.257 3410.837 - 3425.732: 99.8103% ( 17) 00:19:13.257 3425.732 - 3440.626: 99.8208% ( 9) 00:19:13.257 3440.626 - 3455.521: 99.8277% ( 6) 00:19:13.257 3455.521 - 3470.415: 99.8300% ( 2) 00:19:13.257 3470.415 - 3485.310: 99.8369% ( 6) 00:19:13.257 3485.310 - 3500.204: 99.8439% ( 6) 00:19:13.257 3500.204 - 3515.099: 99.8508% ( 6) 00:19:13.257 3515.099 - 3529.993: 99.8520% ( 1) 00:19:13.257 7238.721 - 7268.510: 99.8635% ( 10) 00:19:13.257 7268.510 - 7298.299: 99.8774% ( 12) 00:19:13.257 7298.299 - 7328.088: 99.8913% ( 12) 00:19:13.257 7328.088 - 7357.877: 99.9052% ( 12) 00:19:13.257 7357.877 - 7387.665: 99.9190% ( 12) 00:19:13.257 7387.665 - 7417.454: 99.9329% ( 12) 00:19:13.257 7417.454 - 7447.243: 99.9468% ( 12) 00:19:13.257 7447.243 - 7477.032: 99.9642% ( 15) 00:19:13.257 7477.032 - 7506.821: 99.9769% ( 11) 00:19:13.257 7506.821 - 7536.610: 99.9884% ( 10) 00:19:13.257 7536.610 - 7566.399: 99.9988% ( 9) 00:19:13.257 7566.399 - 7596.188: 100.0000% ( 1) 00:19:13.257 00:19:13.257 22:01:13 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:19:13.823 EAL: TSC is not safe to use in SMP mode 00:19:13.823 EAL: TSC is not invariant 00:19:13.823 [2024-05-14 22:01:14.355051] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:15.202 Initializing NVMe Controllers 00:19:15.202 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:15.202 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:15.202 Initialization complete. Launching workers. 00:19:15.202 ======================================================== 00:19:15.202 Latency(us) 00:19:15.202 Device Information : IOPS MiB/s Average min max 00:19:15.202 PCIE (0000:00:10.0) NSID 1 from core 0: 74029.13 867.53 1729.17 469.79 10114.28 00:19:15.202 ======================================================== 00:19:15.202 Total : 74029.13 867.53 1729.17 469.79 10114.28 00:19:15.202 00:19:15.202 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:15.202 ================================================================================= 00:19:15.202 1.00000% : 1161.770us 00:19:15.202 10.00000% : 1407.529us 00:19:15.202 25.00000% : 1519.238us 00:19:15.202 50.00000% : 1690.524us 00:19:15.202 75.00000% : 1861.811us 00:19:15.202 90.00000% : 2115.017us 00:19:15.202 95.00000% : 2338.434us 00:19:15.202 98.00000% : 2621.430us 00:19:15.202 99.00000% : 2978.897us 00:19:15.202 99.50000% : 3187.420us 00:19:15.202 99.90000% : 3842.778us 00:19:15.202 99.99000% : 7059.987us 00:19:15.202 99.99900% : 10128.251us 00:19:15.202 99.99990% : 10128.251us 00:19:15.202 99.99999% : 10128.251us 00:19:15.202 00:19:15.202 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:15.202 ============================================================================== 00:19:15.202 Range in us Cumulative IO count 00:19:15.202 469.176 - 471.038: 0.0014% ( 1) 00:19:15.202 487.794 - 491.518: 0.0054% ( 3) 00:19:15.202 491.518 - 495.242: 0.0135% ( 6) 00:19:15.202 495.242 - 498.965: 0.0189% ( 4) 00:19:15.202 498.965 - 502.689: 0.0203% ( 1) 00:19:15.202 502.689 - 506.413: 0.0216% ( 1) 00:19:15.202 733.553 - 737.277: 0.0230% ( 1) 00:19:15.202 834.091 - 837.815: 0.0243% ( 1) 00:19:15.202 852.709 - 856.433: 0.0270% ( 2) 00:19:15.202 856.433 - 860.157: 0.0324% ( 4) 
00:19:15.202 860.157 - 863.880: 0.0351% ( 2) 00:19:15.202 863.880 - 867.604: 0.0392% ( 3) 00:19:15.202 867.604 - 871.327: 0.0405% ( 1) 00:19:15.202 871.327 - 875.051: 0.0432% ( 2) 00:19:15.202 875.051 - 878.775: 0.0486% ( 4) 00:19:15.202 878.775 - 882.498: 0.0540% ( 4) 00:19:15.202 882.498 - 886.222: 0.0581% ( 3) 00:19:15.202 886.222 - 889.946: 0.0648% ( 5) 00:19:15.202 889.946 - 893.669: 0.0716% ( 5) 00:19:15.202 893.669 - 897.393: 0.0756% ( 3) 00:19:15.202 897.393 - 901.116: 0.0891% ( 10) 00:19:15.202 901.116 - 904.840: 0.0918% ( 2) 00:19:15.202 904.840 - 908.564: 0.0959% ( 3) 00:19:15.202 908.564 - 912.287: 0.0986% ( 2) 00:19:15.202 912.287 - 916.011: 0.0999% ( 1) 00:19:15.202 916.011 - 919.735: 0.1026% ( 2) 00:19:15.202 919.735 - 923.458: 0.1040% ( 1) 00:19:15.202 923.458 - 927.182: 0.1094% ( 4) 00:19:15.202 927.182 - 930.905: 0.1175% ( 6) 00:19:15.202 930.905 - 934.629: 0.1202% ( 2) 00:19:15.202 934.629 - 938.353: 0.1215% ( 1) 00:19:15.202 968.142 - 975.589: 0.1269% ( 4) 00:19:15.202 975.589 - 983.036: 0.1539% ( 20) 00:19:15.202 983.036 - 990.483: 0.1715% ( 13) 00:19:15.202 990.483 - 997.931: 0.2012% ( 22) 00:19:15.202 997.931 - 1005.378: 0.2215% ( 15) 00:19:15.202 1005.378 - 1012.825: 0.2458% ( 18) 00:19:15.202 1012.825 - 1020.272: 0.2552% ( 7) 00:19:15.202 1020.272 - 1027.720: 0.2795% ( 18) 00:19:15.202 1027.720 - 1035.167: 0.2971% ( 13) 00:19:15.202 1035.167 - 1042.614: 0.3119% ( 11) 00:19:15.202 1042.614 - 1050.061: 0.3403% ( 21) 00:19:15.202 1050.061 - 1057.509: 0.3849% ( 33) 00:19:15.202 1057.509 - 1064.956: 0.4308% ( 34) 00:19:15.202 1064.956 - 1072.403: 0.4713% ( 30) 00:19:15.202 1072.403 - 1079.850: 0.5010% ( 22) 00:19:15.202 1079.850 - 1087.298: 0.5158% ( 11) 00:19:15.202 1087.298 - 1094.745: 0.5388% ( 17) 00:19:15.202 1094.745 - 1102.192: 0.5577% ( 14) 00:19:15.202 1102.192 - 1109.639: 0.5847% ( 20) 00:19:15.202 1109.639 - 1117.087: 0.6104% ( 19) 00:19:15.202 1117.087 - 1124.534: 0.6374% ( 20) 00:19:15.202 1124.534 - 1131.981: 0.6779% ( 30) 00:19:15.202 1131.981 - 1139.428: 0.7292% ( 38) 00:19:15.202 1139.428 - 1146.875: 0.8156% ( 64) 00:19:15.202 1146.875 - 1154.323: 0.9642% ( 110) 00:19:15.202 1154.323 - 1161.770: 1.0951% ( 97) 00:19:15.202 1161.770 - 1169.217: 1.2315% ( 101) 00:19:15.202 1169.217 - 1176.664: 1.3315% ( 74) 00:19:15.202 1176.664 - 1184.112: 1.4395% ( 80) 00:19:15.203 1184.112 - 1191.559: 1.5340% ( 70) 00:19:15.203 1191.559 - 1199.006: 1.6610% ( 94) 00:19:15.203 1199.006 - 1206.453: 1.7730% ( 83) 00:19:15.203 1206.453 - 1213.901: 1.9270% ( 114) 00:19:15.203 1213.901 - 1221.348: 2.0688% ( 105) 00:19:15.203 1221.348 - 1228.795: 2.2700% ( 149) 00:19:15.203 1228.795 - 1236.242: 2.5265% ( 190) 00:19:15.203 1236.242 - 1243.690: 2.6575% ( 97) 00:19:15.203 1243.690 - 1251.137: 2.8412% ( 136) 00:19:15.203 1251.137 - 1258.584: 3.0113% ( 126) 00:19:15.203 1258.584 - 1266.031: 3.2004% ( 140) 00:19:15.203 1266.031 - 1273.479: 3.4016% ( 149) 00:19:15.203 1273.479 - 1280.926: 3.6662% ( 196) 00:19:15.203 1280.926 - 1288.373: 3.9647% ( 221) 00:19:15.203 1288.373 - 1295.820: 4.2320% ( 198) 00:19:15.203 1295.820 - 1303.268: 4.5075% ( 204) 00:19:15.203 1303.268 - 1310.715: 4.8060% ( 221) 00:19:15.203 1310.715 - 1318.162: 5.0693% ( 195) 00:19:15.203 1318.162 - 1325.609: 5.3650% ( 219) 00:19:15.203 1325.609 - 1333.057: 5.7188% ( 262) 00:19:15.203 1333.057 - 1340.504: 6.0902% ( 275) 00:19:15.203 1340.504 - 1347.951: 6.4777% ( 287) 00:19:15.203 1347.951 - 1355.398: 6.8950% ( 309) 00:19:15.203 1355.398 - 1362.846: 7.3082% ( 306) 00:19:15.203 1362.846 - 1370.293: 7.7808% ( 350) 
00:19:15.203 1370.293 - 1377.740: 8.2399% ( 340) 00:19:15.203 1377.740 - 1385.187: 8.6937% ( 336) 00:19:15.203 1385.187 - 1392.635: 9.2784% ( 433) 00:19:15.203 1392.635 - 1400.082: 9.9616% ( 506) 00:19:15.203 1400.082 - 1407.529: 10.7057% ( 551) 00:19:15.203 1407.529 - 1414.976: 11.5389% ( 617) 00:19:15.203 1414.976 - 1422.423: 12.2964% ( 561) 00:19:15.203 1422.423 - 1429.871: 13.1229% ( 612) 00:19:15.203 1429.871 - 1437.318: 13.9425% ( 607) 00:19:15.203 1437.318 - 1444.765: 14.8324% ( 659) 00:19:15.203 1444.765 - 1452.212: 15.7682% ( 693) 00:19:15.203 1452.212 - 1459.660: 16.7040% ( 693) 00:19:15.203 1459.660 - 1467.107: 17.6250% ( 682) 00:19:15.203 1467.107 - 1474.554: 18.6499% ( 759) 00:19:15.203 1474.554 - 1482.001: 19.6721% ( 757) 00:19:15.203 1482.001 - 1489.449: 20.8024% ( 837) 00:19:15.203 1489.449 - 1496.896: 22.0879% ( 952) 00:19:15.203 1496.896 - 1504.343: 23.1871% ( 814) 00:19:15.203 1504.343 - 1511.790: 24.2040% ( 753) 00:19:15.203 1511.790 - 1519.238: 25.2856% ( 801) 00:19:15.203 1519.238 - 1526.685: 26.3659% ( 800) 00:19:15.203 1526.685 - 1534.132: 27.5232% ( 857) 00:19:15.203 1534.132 - 1541.579: 28.7695% ( 923) 00:19:15.203 1541.579 - 1549.027: 29.9565% ( 879) 00:19:15.203 1549.027 - 1556.474: 31.2974% ( 993) 00:19:15.203 1556.474 - 1563.921: 32.4412% ( 847) 00:19:15.203 1563.921 - 1571.368: 33.6430% ( 890) 00:19:15.203 1571.368 - 1578.816: 34.9893% ( 997) 00:19:15.203 1578.816 - 1586.263: 36.3316% ( 994) 00:19:15.203 1586.263 - 1593.710: 37.5753% ( 921) 00:19:15.203 1593.710 - 1601.157: 38.8514% ( 945) 00:19:15.203 1601.157 - 1608.605: 39.9992% ( 850) 00:19:15.203 1608.605 - 1616.052: 41.1484% ( 851) 00:19:15.203 1616.052 - 1623.499: 42.2746% ( 834) 00:19:15.203 1623.499 - 1630.946: 43.3035% ( 762) 00:19:15.203 1630.946 - 1638.394: 44.3919% ( 806) 00:19:15.203 1638.394 - 1645.841: 45.3331% ( 697) 00:19:15.203 1645.841 - 1653.288: 46.2203% ( 657) 00:19:15.203 1653.288 - 1660.735: 47.0670% ( 627) 00:19:15.203 1660.735 - 1668.183: 47.8664% ( 592) 00:19:15.203 1668.183 - 1675.630: 48.7185% ( 631) 00:19:15.203 1675.630 - 1683.077: 49.6165% ( 665) 00:19:15.203 1683.077 - 1690.524: 50.6495% ( 765) 00:19:15.203 1690.524 - 1697.972: 51.6042% ( 707) 00:19:15.203 1697.972 - 1705.419: 52.7493% ( 848) 00:19:15.203 1705.419 - 1712.866: 53.8391% ( 807) 00:19:15.203 1712.866 - 1720.313: 54.9707% ( 838) 00:19:15.203 1720.313 - 1727.760: 56.3292% ( 1006) 00:19:15.203 1727.760 - 1735.208: 57.6782% ( 999) 00:19:15.203 1735.208 - 1742.655: 58.9759% ( 961) 00:19:15.203 1742.655 - 1750.102: 60.1926% ( 901) 00:19:15.203 1750.102 - 1757.549: 61.3458% ( 854) 00:19:15.203 1757.549 - 1764.997: 62.4166% ( 793) 00:19:15.203 1764.997 - 1772.444: 63.5023% ( 804) 00:19:15.203 1772.444 - 1779.891: 64.7217% ( 903) 00:19:15.203 1779.891 - 1787.338: 65.9370% ( 900) 00:19:15.203 1787.338 - 1794.786: 67.2361% ( 962) 00:19:15.203 1794.786 - 1802.233: 68.4109% ( 870) 00:19:15.203 1802.233 - 1809.680: 69.4871% ( 797) 00:19:15.203 1809.680 - 1817.127: 70.5863% ( 814) 00:19:15.203 1817.127 - 1824.575: 71.5775% ( 734) 00:19:15.203 1824.575 - 1832.022: 72.4620% ( 655) 00:19:15.203 1832.022 - 1839.469: 73.2560% ( 588) 00:19:15.203 1839.469 - 1846.916: 73.9690% ( 528) 00:19:15.203 1846.916 - 1854.364: 74.6955% ( 538) 00:19:15.203 1854.364 - 1861.811: 75.4611% ( 567) 00:19:15.203 1861.811 - 1869.258: 76.1255% ( 492) 00:19:15.203 1869.258 - 1876.705: 76.8048% ( 503) 00:19:15.203 1876.705 - 1884.153: 77.4543% ( 481) 00:19:15.203 1884.153 - 1891.600: 78.0417% ( 435) 00:19:15.203 1891.600 - 1899.047: 78.5710% ( 392) 
00:19:15.203 1899.047 - 1906.494: 79.0599% ( 362) 00:19:15.203 1906.494 - 1921.389: 80.0240% ( 714) 00:19:15.203 1921.389 - 1936.283: 80.9126% ( 658) 00:19:15.203 1936.283 - 1951.178: 81.9281% ( 752) 00:19:15.203 1951.178 - 1966.072: 82.7720% ( 625) 00:19:15.203 1966.072 - 1980.967: 83.6430% ( 645) 00:19:15.203 1980.967 - 1995.861: 84.4694% ( 612) 00:19:15.203 1995.861 - 2010.756: 85.3148% ( 626) 00:19:15.203 2010.756 - 2025.650: 86.1655% ( 630) 00:19:15.203 2025.650 - 2040.545: 86.9663% ( 593) 00:19:15.203 2040.545 - 2055.439: 87.7454% ( 577) 00:19:15.203 2055.439 - 2070.334: 88.4989% ( 558) 00:19:15.203 2070.334 - 2085.228: 89.1633% ( 492) 00:19:15.203 2085.228 - 2100.123: 89.7980% ( 470) 00:19:15.203 2100.123 - 2115.017: 90.3219% ( 388) 00:19:15.203 2115.017 - 2129.912: 90.8675% ( 404) 00:19:15.203 2129.912 - 2144.806: 91.4549% ( 435) 00:19:15.203 2144.806 - 2159.701: 91.9275% ( 350) 00:19:15.203 2159.701 - 2174.595: 92.2624% ( 248) 00:19:15.203 2174.595 - 2189.490: 92.6148% ( 261) 00:19:15.203 2189.490 - 2204.384: 92.9227% ( 228) 00:19:15.203 2204.384 - 2219.279: 93.2306% ( 228) 00:19:15.203 2219.279 - 2234.173: 93.5223% ( 216) 00:19:15.203 2234.173 - 2249.068: 93.8153% ( 217) 00:19:15.203 2249.068 - 2263.962: 94.0705% ( 189) 00:19:15.203 2263.962 - 2278.856: 94.3217% ( 186) 00:19:15.203 2278.856 - 2293.751: 94.5391% ( 161) 00:19:15.203 2293.751 - 2308.645: 94.7228% ( 136) 00:19:15.203 2308.645 - 2323.540: 94.9253% ( 150) 00:19:15.203 2323.540 - 2338.434: 95.1319% ( 153) 00:19:15.203 2338.434 - 2353.329: 95.3588% ( 168) 00:19:15.203 2353.329 - 2368.223: 95.5411% ( 135) 00:19:15.203 2368.223 - 2383.118: 95.7369% ( 145) 00:19:15.203 2383.118 - 2398.012: 95.9503% ( 158) 00:19:15.203 2398.012 - 2412.907: 96.2122% ( 194) 00:19:15.203 2412.907 - 2427.801: 96.4337% ( 164) 00:19:15.203 2427.801 - 2442.696: 96.6133% ( 133) 00:19:15.203 2442.696 - 2457.590: 96.7902% ( 131) 00:19:15.203 2457.590 - 2472.485: 96.9158% ( 93) 00:19:15.203 2472.485 - 2487.379: 97.0522% ( 101) 00:19:15.203 2487.379 - 2502.274: 97.1696% ( 87) 00:19:15.203 2502.274 - 2517.168: 97.2723% ( 76) 00:19:15.203 2517.168 - 2532.063: 97.3749% ( 76) 00:19:15.203 2532.063 - 2546.957: 97.4951% ( 89) 00:19:15.203 2546.957 - 2561.852: 97.6693% ( 129) 00:19:15.203 2561.852 - 2576.746: 97.8030% ( 99) 00:19:15.203 2576.746 - 2591.641: 97.8948% ( 68) 00:19:15.203 2591.641 - 2606.535: 97.9488% ( 40) 00:19:15.203 2606.535 - 2621.430: 98.0406% ( 68) 00:19:15.203 2621.430 - 2636.324: 98.0987% ( 43) 00:19:15.203 2636.324 - 2651.219: 98.1405% ( 31) 00:19:15.203 2651.219 - 2666.113: 98.1635% ( 17) 00:19:15.203 2666.113 - 2681.008: 98.1973% ( 25) 00:19:15.203 2681.008 - 2695.902: 98.2472% ( 37) 00:19:15.203 2695.902 - 2710.797: 98.2823% ( 26) 00:19:15.203 2710.797 - 2725.691: 98.3472% ( 48) 00:19:15.203 2725.691 - 2740.586: 98.3998% ( 39) 00:19:15.203 2740.586 - 2755.480: 98.4619% ( 46) 00:19:15.203 2755.480 - 2770.375: 98.4889% ( 20) 00:19:15.203 2770.375 - 2785.269: 98.5024% ( 10) 00:19:15.203 2785.269 - 2800.164: 98.5295% ( 20) 00:19:15.203 2800.164 - 2815.058: 98.5592% ( 22) 00:19:15.203 2815.058 - 2829.953: 98.6105% ( 38) 00:19:15.203 2829.953 - 2844.847: 98.6429% ( 24) 00:19:15.203 2844.847 - 2859.741: 98.6820% ( 29) 00:19:15.203 2859.741 - 2874.636: 98.7239% ( 31) 00:19:15.203 2874.636 - 2889.530: 98.7820% ( 43) 00:19:15.203 2889.530 - 2904.425: 98.8117% ( 22) 00:19:15.203 2904.425 - 2919.319: 98.8495% ( 28) 00:19:15.203 2919.319 - 2934.214: 98.8900% ( 30) 00:19:15.203 2934.214 - 2949.108: 98.9116% ( 16) 00:19:15.203 2949.108 - 2964.003: 
98.9643% ( 39) 00:19:15.203 2964.003 - 2978.897: 99.0318% ( 50) 00:19:15.203 2978.897 - 2993.792: 99.0588% ( 20) 00:19:15.203 2993.792 - 3008.686: 99.1236% ( 48) 00:19:15.501 3008.686 - 3023.581: 99.2033% ( 59) 00:19:15.501 3023.581 - 3038.475: 99.2573% ( 40) 00:19:15.501 3038.475 - 3053.370: 99.2749% ( 13) 00:19:15.501 3053.370 - 3068.264: 99.3005% ( 19) 00:19:15.501 3068.264 - 3083.159: 99.3275% ( 20) 00:19:15.501 3083.159 - 3098.053: 99.3370% ( 7) 00:19:15.501 3098.053 - 3112.948: 99.3464% ( 7) 00:19:15.501 3112.948 - 3127.842: 99.3667% ( 15) 00:19:15.501 3127.842 - 3142.737: 99.4004% ( 25) 00:19:15.501 3142.737 - 3157.631: 99.4382% ( 28) 00:19:15.501 3157.631 - 3172.526: 99.4788% ( 30) 00:19:15.501 3172.526 - 3187.420: 99.5206% ( 31) 00:19:15.501 3187.420 - 3202.315: 99.5274% ( 5) 00:19:15.501 3202.315 - 3217.209: 99.5341% ( 5) 00:19:15.501 3217.209 - 3232.104: 99.5395% ( 4) 00:19:15.501 3232.104 - 3246.998: 99.5436% ( 3) 00:19:15.501 3246.998 - 3261.893: 99.5530% ( 7) 00:19:15.501 3261.893 - 3276.787: 99.5733% ( 15) 00:19:15.501 3276.787 - 3291.682: 99.5841% ( 8) 00:19:15.501 3291.682 - 3306.576: 99.5935% ( 7) 00:19:15.501 3306.576 - 3321.471: 99.6165% ( 17) 00:19:15.501 3321.471 - 3336.365: 99.6327% ( 12) 00:19:15.501 3336.365 - 3351.260: 99.6408% ( 6) 00:19:15.501 3351.260 - 3366.154: 99.6435% ( 2) 00:19:15.501 3366.154 - 3381.049: 99.6449% ( 1) 00:19:15.501 3381.049 - 3395.943: 99.6611% ( 12) 00:19:15.501 3395.943 - 3410.837: 99.6813% ( 15) 00:19:15.501 3410.837 - 3425.732: 99.7002% ( 14) 00:19:15.501 3425.732 - 3440.626: 99.7110% ( 8) 00:19:15.501 3440.626 - 3455.521: 99.7164% ( 4) 00:19:15.501 3455.521 - 3470.415: 99.7178% ( 1) 00:19:15.501 3515.099 - 3529.993: 99.7272% ( 7) 00:19:15.501 3529.993 - 3544.888: 99.7367% ( 7) 00:19:15.501 3544.888 - 3559.782: 99.7448% ( 6) 00:19:15.501 3589.571 - 3604.466: 99.7542% ( 7) 00:19:15.501 3604.466 - 3619.360: 99.7596% ( 4) 00:19:15.501 3634.255 - 3649.149: 99.7637% ( 3) 00:19:15.501 3693.833 - 3708.727: 99.7677% ( 3) 00:19:15.501 3708.727 - 3723.622: 99.7718% ( 3) 00:19:15.501 3723.622 - 3738.516: 99.7785% ( 5) 00:19:15.501 3738.516 - 3753.411: 99.8082% ( 22) 00:19:15.501 3753.411 - 3768.305: 99.8258% ( 13) 00:19:15.501 3768.305 - 3783.200: 99.8474% ( 16) 00:19:15.501 3783.200 - 3798.094: 99.8609% ( 10) 00:19:15.501 3798.094 - 3812.989: 99.8798% ( 14) 00:19:15.501 3812.989 - 3842.778: 99.9082% ( 21) 00:19:15.501 3842.778 - 3872.567: 99.9298% ( 16) 00:19:15.501 3872.567 - 3902.356: 99.9460% ( 12) 00:19:15.501 3902.356 - 3932.145: 99.9595% ( 10) 00:19:15.501 3932.145 - 3961.934: 99.9662% ( 5) 00:19:15.501 3961.934 - 3991.722: 99.9676% ( 1) 00:19:15.501 4051.300 - 4081.089: 99.9716% ( 3) 00:19:15.501 4170.456 - 4200.245: 99.9730% ( 1) 00:19:15.501 4468.346 - 4498.135: 99.9743% ( 1) 00:19:15.501 4706.658 - 4736.447: 99.9757% ( 1) 00:19:15.501 4944.970 - 4974.759: 99.9770% ( 1) 00:19:15.501 5332.226 - 5362.015: 99.9784% ( 1) 00:19:15.501 5391.804 - 5421.593: 99.9811% ( 2) 00:19:15.501 5987.584 - 6017.373: 99.9838% ( 2) 00:19:15.501 6196.107 - 6225.896: 99.9851% ( 1) 00:19:15.501 6553.574 - 6583.363: 99.9865% ( 1) 00:19:15.501 6851.464 - 6881.253: 99.9878% ( 1) 00:19:15.501 7030.198 - 7059.987: 99.9946% ( 5) 00:19:15.501 7804.711 - 7864.289: 99.9959% ( 1) 00:19:15.501 10068.673 - 10128.251: 100.0000% ( 3) 00:19:15.501 00:19:15.501 22:01:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:19:15.501 00:19:15.501 real 0m3.850s 00:19:15.501 user 0m2.619s 00:19:15.501 sys 0m1.232s 00:19:15.501 22:01:15 nvme.nvme_perf -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:19:15.501 ************************************ 00:19:15.501 END TEST nvme_perf 00:19:15.501 22:01:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:19:15.501 ************************************ 00:19:15.501 22:01:16 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:15.501 22:01:16 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:15.501 22:01:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:15.501 22:01:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.501 ************************************ 00:19:15.501 START TEST nvme_hello_world 00:19:15.501 ************************************ 00:19:15.501 22:01:16 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:16.066 EAL: TSC is not safe to use in SMP mode 00:19:16.066 EAL: TSC is not invariant 00:19:16.066 [2024-05-14 22:01:16.614006] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:16.066 Initializing NVMe Controllers 00:19:16.066 Attaching to 0000:00:10.0 00:19:16.066 Attached to 0000:00:10.0 00:19:16.066 Namespace ID: 1 size: 5GB 00:19:16.066 Initialization complete. 00:19:16.066 INFO: using host memory buffer for IO 00:19:16.066 Hello world! 00:19:16.323 00:19:16.323 real 0m0.637s 00:19:16.323 user 0m0.008s 00:19:16.323 sys 0m0.628s 00:19:16.323 22:01:16 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:16.323 ************************************ 00:19:16.323 END TEST nvme_hello_world 00:19:16.323 ************************************ 00:19:16.323 22:01:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:16.323 22:01:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:16.323 22:01:16 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:16.323 22:01:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:16.323 22:01:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.323 ************************************ 00:19:16.323 START TEST nvme_sgl 00:19:16.323 ************************************ 00:19:16.323 22:01:16 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:16.888 EAL: TSC is not safe to use in SMP mode 00:19:16.888 EAL: TSC is not invariant 00:19:16.888 [2024-05-14 22:01:17.329506] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:16.888 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:16.888 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:16.888 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:16.888 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:16.888 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:16.888 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:16.888 NVMe Readv/Writev Request test 00:19:16.888 Attaching to 0000:00:10.0 00:19:16.888 Attached to 0000:00:10.0 00:19:16.888 0000:00:10.0: build_io_request_2 test passed 00:19:16.888 0000:00:10.0: build_io_request_4 test passed 00:19:16.888 0000:00:10.0: build_io_request_5 test passed 00:19:16.888 0000:00:10.0: build_io_request_6 test passed 00:19:16.888 0000:00:10.0: build_io_request_7 test passed 00:19:16.888 
0000:00:10.0: build_io_request_10 test passed 00:19:16.888 Cleaning up... 00:19:16.888 00:19:16.888 real 0m0.671s 00:19:16.888 user 0m0.022s 00:19:16.888 sys 0m0.645s 00:19:16.888 22:01:17 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:16.888 ************************************ 00:19:16.889 END TEST nvme_sgl 00:19:16.889 22:01:17 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:16.889 ************************************ 00:19:16.889 22:01:17 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:16.889 22:01:17 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:16.889 22:01:17 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:16.889 22:01:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.889 ************************************ 00:19:16.889 START TEST nvme_e2edp 00:19:16.889 ************************************ 00:19:16.889 22:01:17 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:17.452 EAL: TSC is not safe to use in SMP mode 00:19:17.452 EAL: TSC is not invariant 00:19:17.452 [2024-05-14 22:01:17.972458] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:17.452 NVMe Write/Read with End-to-End data protection test 00:19:17.452 Attaching to 0000:00:10.0 00:19:17.452 Attached to 0000:00:10.0 00:19:17.452 Cleaning up... 00:19:17.452 00:19:17.452 real 0m0.592s 00:19:17.452 user 0m0.000s 00:19:17.452 sys 0m0.593s 00:19:17.452 22:01:18 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:17.452 ************************************ 00:19:17.452 END TEST nvme_e2edp 00:19:17.452 ************************************ 00:19:17.452 22:01:18 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:17.710 22:01:18 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:17.710 22:01:18 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:17.710 22:01:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:17.710 22:01:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.710 ************************************ 00:19:17.710 START TEST nvme_reserve 00:19:17.710 ************************************ 00:19:17.710 22:01:18 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:18.275 EAL: TSC is not safe to use in SMP mode 00:19:18.275 EAL: TSC is not invariant 00:19:18.275 [2024-05-14 22:01:18.622503] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:18.275 ===================================================== 00:19:18.275 NVMe Controller at PCI bus 0, device 16, function 0 00:19:18.275 ===================================================== 00:19:18.275 Reservations: Not Supported 00:19:18.275 Reservation test passed 00:19:18.275 00:19:18.275 real 0m0.611s 00:19:18.275 user 0m0.012s 00:19:18.275 sys 0m0.595s 00:19:18.275 22:01:18 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:18.275 22:01:18 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:18.275 ************************************ 00:19:18.275 END TEST nvme_reserve 00:19:18.275 ************************************ 00:19:18.275 22:01:18 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection 
/usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:18.275 22:01:18 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:18.275 22:01:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:18.275 22:01:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.275 ************************************ 00:19:18.275 START TEST nvme_err_injection 00:19:18.275 ************************************ 00:19:18.275 22:01:18 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:18.840 EAL: TSC is not safe to use in SMP mode 00:19:18.840 EAL: TSC is not invariant 00:19:18.840 [2024-05-14 22:01:19.276252] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:18.840 NVMe Error Injection test 00:19:18.840 Attaching to 0000:00:10.0 00:19:18.840 Attached to 0000:00:10.0 00:19:18.840 0000:00:10.0: get features failed as expected 00:19:18.840 0000:00:10.0: get features successfully as expected 00:19:18.840 0000:00:10.0: read failed as expected 00:19:18.840 0000:00:10.0: read successfully as expected 00:19:18.840 Cleaning up... 00:19:18.840 00:19:18.840 real 0m0.621s 00:19:18.840 user 0m0.019s 00:19:18.840 sys 0m0.602s 00:19:18.840 22:01:19 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:18.840 22:01:19 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:18.840 ************************************ 00:19:18.840 END TEST nvme_err_injection 00:19:18.840 ************************************ 00:19:18.840 22:01:19 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:18.840 22:01:19 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:19:18.840 22:01:19 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:18.840 22:01:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.840 ************************************ 00:19:18.840 START TEST nvme_overhead 00:19:18.840 ************************************ 00:19:18.840 22:01:19 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:19.411 EAL: TSC is not safe to use in SMP mode 00:19:19.411 EAL: TSC is not invariant 00:19:19.411 [2024-05-14 22:01:19.955482] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:20.785 Initializing NVMe Controllers 00:19:20.785 Attaching to 0000:00:10.0 00:19:20.785 Attached to 0000:00:10.0 00:19:20.785 Initialization complete. Launching workers. 
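The overhead binary above was started with -o 4096 -t 1 -H -i 0, which presumably selects 4096-byte I/O for a one-second run with histograms enabled (-H) on shared-memory id 0; the submit and complete latency tables that follow are that -H output. A rough sketch of repeating just the summary by hand outside the harness is shown below; the binary path and flags are copied from the trace, while the grep filter is purely illustrative and not part of the test scripts.

    rootdir=/usr/home/vagrant/spdk_repo/spdk
    # List the controllers that will be probed (same gen_nvme.sh helper traced by the later tests).
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    # Same invocation as in the log above, keeping only the avg/min/max summary lines.
    "$rootdir/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0 | grep -E '(submit|complete) \(in ns\)'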
00:19:20.785 submit (in ns) avg, min, max = 9358.3, 7098.6, 70979.7 00:19:20.785 complete (in ns) avg, min, max = 7481.6, 5962.7, 50457.1 00:19:20.785 00:19:20.785 Submit histogram 00:19:20.785 ================ 00:19:20.785 Range in us Cumulative Count 00:19:20.785 7.098 - 7.127: 0.0081% ( 1) 00:19:20.785 7.389 - 7.418: 0.0162% ( 1) 00:19:20.785 7.796 - 7.855: 0.0243% ( 1) 00:19:20.785 7.913 - 7.971: 0.0324% ( 1) 00:19:20.785 7.971 - 8.029: 0.2104% ( 22) 00:19:20.785 8.029 - 8.087: 1.4644% ( 155) 00:19:20.785 8.087 - 8.145: 8.3172% ( 847) 00:19:20.785 8.145 - 8.204: 21.5049% ( 1630) 00:19:20.785 8.204 - 8.262: 32.2654% ( 1330) 00:19:20.785 8.262 - 8.320: 39.6440% ( 912) 00:19:20.785 8.320 - 8.378: 44.5227% ( 603) 00:19:20.785 8.378 - 8.436: 47.2896% ( 342) 00:19:20.785 8.436 - 8.495: 48.9644% ( 207) 00:19:20.785 8.495 - 8.553: 50.0485% ( 134) 00:19:20.785 8.553 - 8.611: 50.8414% ( 98) 00:19:20.785 8.611 - 8.669: 51.2217% ( 47) 00:19:20.785 8.669 - 8.727: 51.6748% ( 56) 00:19:20.785 8.727 - 8.785: 52.0227% ( 43) 00:19:20.785 8.785 - 8.844: 52.9450% ( 114) 00:19:20.785 8.844 - 8.902: 55.0566% ( 261) 00:19:20.785 8.902 - 8.960: 60.1133% ( 625) 00:19:20.785 8.960 - 9.018: 66.6909% ( 813) 00:19:20.785 9.018 - 9.076: 72.7346% ( 747) 00:19:20.785 9.076 - 9.135: 76.3997% ( 453) 00:19:20.785 9.135 - 9.193: 78.7702% ( 293) 00:19:20.785 9.193 - 9.251: 80.2994% ( 189) 00:19:20.785 9.251 - 9.309: 81.1893% ( 110) 00:19:20.785 9.309 - 9.367: 81.7152% ( 65) 00:19:20.785 9.367 - 9.425: 82.2735% ( 69) 00:19:20.785 9.425 - 9.484: 82.8560% ( 72) 00:19:20.785 9.484 - 9.542: 83.3414% ( 60) 00:19:20.785 9.542 - 9.600: 83.7945% ( 56) 00:19:20.785 9.600 - 9.658: 84.0777% ( 35) 00:19:20.785 9.658 - 9.716: 84.2152% ( 17) 00:19:20.785 9.716 - 9.775: 84.3204% ( 13) 00:19:20.785 9.775 - 9.833: 84.4498% ( 16) 00:19:20.785 9.833 - 9.891: 84.5550% ( 13) 00:19:20.785 9.891 - 9.949: 84.6440% ( 11) 00:19:20.785 9.949 - 10.007: 84.7006% ( 7) 00:19:20.785 10.007 - 10.065: 84.7654% ( 8) 00:19:20.785 10.065 - 10.124: 84.8058% ( 5) 00:19:20.785 10.124 - 10.182: 84.8382% ( 4) 00:19:20.785 10.182 - 10.240: 84.8786% ( 5) 00:19:20.785 10.240 - 10.298: 84.9110% ( 4) 00:19:20.785 10.298 - 10.356: 84.9191% ( 1) 00:19:20.785 10.356 - 10.415: 84.9353% ( 2) 00:19:20.785 10.415 - 10.473: 85.0000% ( 8) 00:19:20.785 10.473 - 10.531: 85.0566% ( 7) 00:19:20.785 10.531 - 10.589: 85.1214% ( 8) 00:19:20.785 10.589 - 10.647: 85.1861% ( 8) 00:19:20.785 10.647 - 10.705: 85.2508% ( 8) 00:19:20.785 10.705 - 10.764: 85.3074% ( 7) 00:19:20.785 10.764 - 10.822: 85.3479% ( 5) 00:19:20.785 10.822 - 10.880: 85.3964% ( 6) 00:19:20.785 10.880 - 10.938: 85.4531% ( 7) 00:19:20.785 10.938 - 10.996: 85.5340% ( 10) 00:19:20.785 10.996 - 11.055: 85.6149% ( 10) 00:19:20.785 11.055 - 11.113: 85.6715% ( 7) 00:19:20.785 11.113 - 11.171: 85.7686% ( 12) 00:19:20.785 11.171 - 11.229: 85.8900% ( 15) 00:19:20.785 11.229 - 11.287: 85.9628% ( 9) 00:19:20.785 11.287 - 11.345: 86.0518% ( 11) 00:19:20.785 11.345 - 11.404: 86.1570% ( 13) 00:19:20.785 11.404 - 11.462: 86.3107% ( 19) 00:19:20.785 11.462 - 11.520: 86.3835% ( 9) 00:19:20.785 11.520 - 11.578: 86.4806% ( 12) 00:19:20.785 11.578 - 11.636: 86.6505% ( 21) 00:19:20.785 11.636 - 11.694: 86.7961% ( 18) 00:19:20.785 11.694 - 11.753: 86.9175% ( 15) 00:19:20.785 11.753 - 11.811: 87.1117% ( 24) 00:19:20.785 11.811 - 11.869: 87.3625% ( 31) 00:19:20.785 11.869 - 11.927: 87.5081% ( 18) 00:19:20.785 11.927 - 11.985: 87.7346% ( 28) 00:19:20.785 11.985 - 12.044: 87.9693% ( 29) 00:19:20.785 12.044 - 12.102: 88.1796% ( 26) 00:19:20.785 
12.102 - 12.160: 88.4871% ( 38) 00:19:20.785 12.160 - 12.218: 88.7621% ( 34) 00:19:20.785 12.218 - 12.276: 89.0129% ( 31) 00:19:20.785 12.276 - 12.334: 89.2961% ( 35) 00:19:20.785 12.334 - 12.393: 89.4984% ( 25) 00:19:20.785 12.393 - 12.451: 89.6278% ( 16) 00:19:20.785 12.451 - 12.509: 89.8382% ( 26) 00:19:20.785 12.509 - 12.567: 90.0971% ( 32) 00:19:20.785 12.567 - 12.625: 90.3479% ( 31) 00:19:20.785 12.625 - 12.684: 90.6715% ( 40) 00:19:20.785 12.684 - 12.742: 90.9142% ( 30) 00:19:20.785 12.742 - 12.800: 91.2217% ( 38) 00:19:20.785 12.800 - 12.858: 91.4806% ( 32) 00:19:20.785 12.858 - 12.916: 91.7395% ( 32) 00:19:20.785 12.916 - 12.974: 91.9741% ( 29) 00:19:20.785 12.974 - 13.033: 92.2249% ( 31) 00:19:20.785 13.033 - 13.091: 92.5081% ( 35) 00:19:20.785 13.091 - 13.149: 92.7427% ( 29) 00:19:20.785 13.149 - 13.207: 92.9531% ( 26) 00:19:20.785 13.207 - 13.265: 93.1634% ( 26) 00:19:20.785 13.265 - 13.324: 93.4709% ( 38) 00:19:20.785 13.324 - 13.382: 93.6650% ( 24) 00:19:20.785 13.382 - 13.440: 93.8107% ( 18) 00:19:20.785 13.440 - 13.498: 94.0696% ( 32) 00:19:20.785 13.498 - 13.556: 94.3042% ( 29) 00:19:20.785 13.556 - 13.614: 94.5469% ( 30) 00:19:20.785 13.614 - 13.673: 94.7977% ( 31) 00:19:20.785 13.673 - 13.731: 95.0081% ( 26) 00:19:20.785 13.731 - 13.789: 95.2427% ( 29) 00:19:20.785 13.789 - 13.847: 95.4531% ( 26) 00:19:20.785 13.847 - 13.905: 95.6230% ( 21) 00:19:20.785 13.905 - 13.964: 95.7929% ( 21) 00:19:20.785 13.964 - 14.022: 95.9951% ( 25) 00:19:20.785 14.022 - 14.080: 96.1489% ( 19) 00:19:20.785 14.080 - 14.138: 96.3430% ( 24) 00:19:20.785 14.138 - 14.196: 96.4887% ( 18) 00:19:20.785 14.196 - 14.254: 96.6019% ( 14) 00:19:20.785 14.254 - 14.313: 96.6990% ( 12) 00:19:20.785 14.313 - 14.371: 96.8123% ( 14) 00:19:20.785 14.371 - 14.429: 96.9337% ( 15) 00:19:20.785 14.429 - 14.487: 96.9984% ( 8) 00:19:20.785 14.487 - 14.545: 97.1036% ( 13) 00:19:20.785 14.545 - 14.604: 97.1521% ( 6) 00:19:20.785 14.604 - 14.662: 97.2573% ( 13) 00:19:20.785 14.662 - 14.720: 97.3463% ( 11) 00:19:20.786 14.720 - 14.778: 97.3948% ( 6) 00:19:20.786 14.778 - 14.836: 97.4676% ( 9) 00:19:20.786 14.836 - 14.894: 97.5809% ( 14) 00:19:20.786 14.894 - 15.011: 97.7427% ( 20) 00:19:20.786 15.011 - 15.127: 97.9288% ( 23) 00:19:20.786 15.127 - 15.244: 98.0502% ( 15) 00:19:20.786 15.244 - 15.360: 98.1715% ( 15) 00:19:20.786 15.360 - 15.476: 98.2443% ( 9) 00:19:20.786 15.476 - 15.593: 98.3172% ( 9) 00:19:20.786 15.593 - 15.709: 98.3900% ( 9) 00:19:20.786 15.709 - 15.825: 98.4547% ( 8) 00:19:20.786 15.825 - 15.942: 98.4951% ( 5) 00:19:20.786 15.942 - 16.058: 98.5680% ( 9) 00:19:20.786 16.058 - 16.174: 98.6165% ( 6) 00:19:20.786 16.174 - 16.291: 98.6650% ( 6) 00:19:20.786 16.291 - 16.407: 98.6974% ( 4) 00:19:20.786 16.407 - 16.524: 98.7379% ( 5) 00:19:20.786 16.524 - 16.640: 98.7783% ( 5) 00:19:20.786 16.640 - 16.756: 98.8107% ( 4) 00:19:20.786 16.756 - 16.873: 98.8430% ( 4) 00:19:20.786 16.873 - 16.989: 98.8835% ( 5) 00:19:20.786 16.989 - 17.105: 98.8997% ( 2) 00:19:20.786 17.105 - 17.222: 98.9320% ( 4) 00:19:20.786 17.222 - 17.338: 98.9401% ( 1) 00:19:20.786 17.338 - 17.454: 98.9725% ( 4) 00:19:20.786 17.454 - 17.571: 98.9968% ( 3) 00:19:20.786 17.571 - 17.687: 99.0049% ( 1) 00:19:20.786 17.687 - 17.804: 99.0615% ( 7) 00:19:20.786 17.804 - 17.920: 99.0858% ( 3) 00:19:20.786 17.920 - 18.036: 99.1343% ( 6) 00:19:20.786 18.036 - 18.153: 99.1586% ( 3) 00:19:20.786 18.153 - 18.269: 99.1909% ( 4) 00:19:20.786 18.269 - 18.385: 99.2233% ( 4) 00:19:20.786 18.502 - 18.618: 99.2557% ( 4) 00:19:20.786 18.618 - 18.734: 99.2638% ( 1) 
00:19:20.786 18.734 - 18.851: 99.2718% ( 1) 00:19:20.786 18.851 - 18.967: 99.2799% ( 1) 00:19:20.786 18.967 - 19.084: 99.3123% ( 4) 00:19:20.786 19.084 - 19.200: 99.3366% ( 3) 00:19:20.786 19.200 - 19.316: 99.3447% ( 1) 00:19:20.786 19.316 - 19.433: 99.3851% ( 5) 00:19:20.786 19.433 - 19.549: 99.4337% ( 6) 00:19:20.786 19.549 - 19.665: 99.4417% ( 1) 00:19:20.786 19.665 - 19.782: 99.4741% ( 4) 00:19:20.786 19.782 - 19.898: 99.5065% ( 4) 00:19:20.786 19.898 - 20.014: 99.5307% ( 3) 00:19:20.786 20.014 - 20.131: 99.5388% ( 1) 00:19:20.786 20.131 - 20.247: 99.5469% ( 1) 00:19:20.786 20.247 - 20.364: 99.5550% ( 1) 00:19:20.786 20.596 - 20.713: 99.5793% ( 3) 00:19:20.786 20.945 - 21.062: 99.5874% ( 1) 00:19:20.786 21.294 - 21.411: 99.5955% ( 1) 00:19:20.786 21.993 - 22.109: 99.6036% ( 1) 00:19:20.786 22.109 - 22.225: 99.6117% ( 1) 00:19:20.786 22.924 - 23.040: 99.6197% ( 1) 00:19:20.786 23.040 - 23.156: 99.6278% ( 1) 00:19:20.786 23.273 - 23.389: 99.6359% ( 1) 00:19:20.786 23.389 - 23.505: 99.6521% ( 2) 00:19:20.786 23.505 - 23.622: 99.6602% ( 1) 00:19:20.786 23.622 - 23.738: 99.6683% ( 1) 00:19:20.786 23.738 - 23.854: 99.6764% ( 1) 00:19:20.786 24.087 - 24.204: 99.6926% ( 2) 00:19:20.786 24.320 - 24.436: 99.7006% ( 1) 00:19:20.786 24.669 - 24.785: 99.7087% ( 1) 00:19:20.786 25.134 - 25.251: 99.7249% ( 2) 00:19:20.786 25.251 - 25.367: 99.7330% ( 1) 00:19:20.786 25.367 - 25.484: 99.7411% ( 1) 00:19:20.786 25.484 - 25.600: 99.7573% ( 2) 00:19:20.786 25.600 - 25.716: 99.7816% ( 3) 00:19:20.786 25.716 - 25.833: 99.7977% ( 2) 00:19:20.786 25.833 - 25.949: 99.8220% ( 3) 00:19:20.786 26.065 - 26.182: 99.8301% ( 1) 00:19:20.786 26.298 - 26.414: 99.8382% ( 1) 00:19:20.786 26.414 - 26.531: 99.8544% ( 2) 00:19:20.786 26.647 - 26.764: 99.8706% ( 2) 00:19:20.786 26.764 - 26.880: 99.8786% ( 1) 00:19:20.786 26.996 - 27.113: 99.8867% ( 1) 00:19:20.786 27.229 - 27.345: 99.8948% ( 1) 00:19:20.786 28.044 - 28.160: 99.9029% ( 1) 00:19:20.786 28.276 - 28.393: 99.9110% ( 1) 00:19:20.786 28.393 - 28.509: 99.9191% ( 1) 00:19:20.786 30.022 - 30.254: 99.9272% ( 1) 00:19:20.786 30.254 - 30.487: 99.9353% ( 1) 00:19:20.786 32.116 - 32.349: 99.9434% ( 1) 00:19:20.786 33.513 - 33.745: 99.9515% ( 1) 00:19:20.786 38.167 - 38.400: 99.9595% ( 1) 00:19:20.786 39.098 - 39.331: 99.9676% ( 1) 00:19:20.786 42.822 - 43.054: 99.9757% ( 1) 00:19:20.786 60.509 - 60.974: 99.9838% ( 1) 00:19:20.786 63.302 - 63.767: 99.9919% ( 1) 00:19:20.786 70.749 - 71.214: 100.0000% ( 1) 00:19:20.786 00:19:20.786 Complete histogram 00:19:20.786 ================== 00:19:20.786 Range in us Cumulative Count 00:19:20.786 5.935 - 5.964: 0.0081% ( 1) 00:19:20.786 5.964 - 5.993: 0.0324% ( 3) 00:19:20.786 5.993 - 6.022: 0.2346% ( 25) 00:19:20.786 6.022 - 6.051: 0.9061% ( 83) 00:19:20.786 6.051 - 6.080: 2.7508% ( 228) 00:19:20.786 6.080 - 6.109: 5.8576% ( 384) 00:19:20.786 6.109 - 6.138: 9.1909% ( 412) 00:19:20.786 6.138 - 6.167: 12.5566% ( 416) 00:19:20.786 6.167 - 6.196: 15.8414% ( 406) 00:19:20.786 6.196 - 6.225: 18.4709% ( 325) 00:19:20.786 6.225 - 6.255: 20.4288% ( 242) 00:19:20.786 6.255 - 6.284: 22.0874% ( 205) 00:19:20.786 6.284 - 6.313: 23.2605% ( 145) 00:19:20.786 6.313 - 6.342: 24.5307% ( 157) 00:19:20.786 6.342 - 6.371: 25.4369% ( 112) 00:19:20.786 6.371 - 6.400: 25.9709% ( 66) 00:19:20.786 6.400 - 6.429: 26.4725% ( 62) 00:19:20.786 6.429 - 6.458: 26.9903% ( 64) 00:19:20.786 6.458 - 6.487: 27.9531% ( 119) 00:19:20.786 6.487 - 6.516: 30.1942% ( 277) 00:19:20.786 6.516 - 6.545: 33.0178% ( 349) 00:19:20.786 6.545 - 6.575: 38.0663% ( 624) 00:19:20.786 6.575 
- 6.604: 45.1375% ( 874) 00:19:20.786 6.604 - 6.633: 51.9984% ( 848) 00:19:20.786 6.633 - 6.662: 58.0583% ( 749) 00:19:20.786 6.662 - 6.691: 62.9207% ( 601) 00:19:20.786 6.691 - 6.720: 67.0712% ( 513) 00:19:20.786 6.720 - 6.749: 70.3964% ( 411) 00:19:20.786 6.749 - 6.778: 72.6214% ( 275) 00:19:20.786 6.778 - 6.807: 74.2476% ( 201) 00:19:20.786 6.807 - 6.836: 75.5502% ( 161) 00:19:20.786 6.836 - 6.865: 76.5372% ( 122) 00:19:20.786 6.865 - 6.895: 77.3058% ( 95) 00:19:20.786 6.895 - 6.924: 77.9450% ( 79) 00:19:20.786 6.924 - 6.953: 78.3252% ( 47) 00:19:20.786 6.953 - 6.982: 78.6974% ( 46) 00:19:20.786 6.982 - 7.011: 79.0453% ( 43) 00:19:20.786 7.011 - 7.040: 79.3366% ( 36) 00:19:20.786 7.040 - 7.069: 79.6036% ( 33) 00:19:20.786 7.069 - 7.098: 79.7977% ( 24) 00:19:20.786 7.098 - 7.127: 79.9757% ( 22) 00:19:20.786 7.127 - 7.156: 80.0971% ( 15) 00:19:20.786 7.156 - 7.185: 80.2589% ( 20) 00:19:20.786 7.185 - 7.215: 80.3479% ( 11) 00:19:20.786 7.215 - 7.244: 80.4854% ( 17) 00:19:20.786 7.244 - 7.273: 80.6068% ( 15) 00:19:20.786 7.273 - 7.302: 80.6877% ( 10) 00:19:20.786 7.302 - 7.331: 80.7686% ( 10) 00:19:20.786 7.331 - 7.360: 80.8252% ( 7) 00:19:20.786 7.360 - 7.389: 80.8657% ( 5) 00:19:20.786 7.389 - 7.418: 80.9547% ( 11) 00:19:20.786 7.418 - 7.447: 80.9790% ( 3) 00:19:20.786 7.447 - 7.505: 81.1003% ( 15) 00:19:20.786 7.505 - 7.564: 81.2702% ( 21) 00:19:20.786 7.564 - 7.622: 81.4401% ( 21) 00:19:20.786 7.622 - 7.680: 81.5615% ( 15) 00:19:20.786 7.680 - 7.738: 81.6828% ( 15) 00:19:20.786 7.738 - 7.796: 81.8042% ( 15) 00:19:20.786 7.796 - 7.855: 81.9337% ( 16) 00:19:20.786 7.855 - 7.913: 82.1764% ( 30) 00:19:20.786 7.913 - 7.971: 82.3301% ( 19) 00:19:20.786 7.971 - 8.029: 82.4595% ( 16) 00:19:20.786 8.029 - 8.087: 82.6214% ( 20) 00:19:20.786 8.087 - 8.145: 82.7670% ( 18) 00:19:20.786 8.145 - 8.204: 82.9207% ( 19) 00:19:20.786 8.204 - 8.262: 83.0016% ( 10) 00:19:20.786 8.262 - 8.320: 83.0825% ( 10) 00:19:20.786 8.320 - 8.378: 83.1958% ( 14) 00:19:20.786 8.378 - 8.436: 83.2686% ( 9) 00:19:20.786 8.436 - 8.495: 83.3333% ( 8) 00:19:20.786 8.495 - 8.553: 83.4304% ( 12) 00:19:20.786 8.553 - 8.611: 83.5194% ( 11) 00:19:20.786 8.611 - 8.669: 83.6408% ( 15) 00:19:20.786 8.669 - 8.727: 83.8350% ( 24) 00:19:20.786 8.727 - 8.785: 83.9078% ( 9) 00:19:20.786 8.785 - 8.844: 83.9968% ( 11) 00:19:20.786 8.844 - 8.902: 84.0939% ( 12) 00:19:20.786 8.902 - 8.960: 84.1828% ( 11) 00:19:20.786 8.960 - 9.018: 84.2638% ( 10) 00:19:20.786 9.018 - 9.076: 84.3932% ( 16) 00:19:20.786 9.076 - 9.135: 84.4903% ( 12) 00:19:20.786 9.135 - 9.193: 84.6197% ( 16) 00:19:20.786 9.193 - 9.251: 84.7330% ( 14) 00:19:20.786 9.251 - 9.309: 84.8867% ( 19) 00:19:20.786 9.309 - 9.367: 85.0405% ( 19) 00:19:20.786 9.367 - 9.425: 85.1780% ( 17) 00:19:20.786 9.425 - 9.484: 85.3155% ( 17) 00:19:20.786 9.484 - 9.542: 85.5016% ( 23) 00:19:20.786 9.542 - 9.600: 85.6796% ( 22) 00:19:20.786 9.600 - 9.658: 85.8657% ( 23) 00:19:20.786 9.658 - 9.716: 86.1570% ( 36) 00:19:20.786 9.716 - 9.775: 86.4401% ( 35) 00:19:20.786 9.775 - 9.833: 86.6181% ( 22) 00:19:20.786 9.833 - 9.891: 86.8123% ( 24) 00:19:20.786 9.891 - 9.949: 86.9660% ( 19) 00:19:20.786 9.949 - 10.007: 87.0712% ( 13) 00:19:20.786 10.007 - 10.065: 87.2411% ( 21) 00:19:20.787 10.065 - 10.124: 87.4191% ( 22) 00:19:20.787 10.124 - 10.182: 87.5809% ( 20) 00:19:20.787 10.182 - 10.240: 87.7265% ( 18) 00:19:20.787 10.240 - 10.298: 87.8560% ( 16) 00:19:20.787 10.298 - 10.356: 87.9773% ( 15) 00:19:20.787 10.356 - 10.415: 88.1230% ( 18) 00:19:20.787 10.415 - 10.473: 88.3010% ( 22) 00:19:20.787 10.473 - 
10.531: 88.4304% ( 16) 00:19:20.787 10.531 - 10.589: 88.5761% ( 18) 00:19:20.787 10.589 - 10.647: 88.7460% ( 21) 00:19:20.787 10.647 - 10.705: 88.9078% ( 20) 00:19:20.787 10.705 - 10.764: 89.0291% ( 15) 00:19:20.787 10.764 - 10.822: 89.2233% ( 24) 00:19:20.787 10.822 - 10.880: 89.4094% ( 23) 00:19:20.787 10.880 - 10.938: 89.5793% ( 21) 00:19:20.787 10.938 - 10.996: 89.8463% ( 33) 00:19:20.787 10.996 - 11.055: 90.0566% ( 26) 00:19:20.787 11.055 - 11.113: 90.2184% ( 20) 00:19:20.787 11.113 - 11.171: 90.3479% ( 16) 00:19:20.787 11.171 - 11.229: 90.5178% ( 21) 00:19:20.787 11.229 - 11.287: 90.6877% ( 21) 00:19:20.787 11.287 - 11.345: 90.8414% ( 19) 00:19:20.787 11.345 - 11.404: 90.9709% ( 16) 00:19:20.787 11.404 - 11.462: 91.1570% ( 23) 00:19:20.787 11.462 - 11.520: 91.3835% ( 28) 00:19:20.787 11.520 - 11.578: 91.5049% ( 15) 00:19:20.787 11.578 - 11.636: 91.6667% ( 20) 00:19:20.787 11.636 - 11.694: 91.8042% ( 17) 00:19:20.787 11.694 - 11.753: 91.9498% ( 18) 00:19:20.787 11.753 - 11.811: 92.1764% ( 28) 00:19:20.787 11.811 - 11.869: 92.3463% ( 21) 00:19:20.787 11.869 - 11.927: 92.4434% ( 12) 00:19:20.787 11.927 - 11.985: 92.6052% ( 20) 00:19:20.787 11.985 - 12.044: 92.8236% ( 27) 00:19:20.787 12.044 - 12.102: 92.9693% ( 18) 00:19:20.787 12.102 - 12.160: 93.1311% ( 20) 00:19:20.787 12.160 - 12.218: 93.3495% ( 27) 00:19:20.787 12.218 - 12.276: 93.5356% ( 23) 00:19:20.787 12.276 - 12.334: 93.6812% ( 18) 00:19:20.787 12.334 - 12.393: 93.8026% ( 15) 00:19:20.787 12.393 - 12.451: 93.9239% ( 15) 00:19:20.787 12.451 - 12.509: 94.0049% ( 10) 00:19:20.787 12.509 - 12.567: 94.2314% ( 28) 00:19:20.787 12.567 - 12.625: 94.3770% ( 18) 00:19:20.787 12.625 - 12.684: 94.5307% ( 19) 00:19:20.787 12.684 - 12.742: 94.6602% ( 16) 00:19:20.787 12.742 - 12.800: 94.8058% ( 18) 00:19:20.787 12.800 - 12.858: 94.9434% ( 17) 00:19:20.787 12.858 - 12.916: 95.0728% ( 16) 00:19:20.787 12.916 - 12.974: 95.2104% ( 17) 00:19:20.787 12.974 - 13.033: 95.3479% ( 17) 00:19:20.787 13.033 - 13.091: 95.4854% ( 17) 00:19:20.787 13.091 - 13.149: 95.6068% ( 15) 00:19:20.787 13.149 - 13.207: 95.7605% ( 19) 00:19:20.787 13.207 - 13.265: 95.9061% ( 18) 00:19:20.787 13.265 - 13.324: 96.0032% ( 12) 00:19:20.787 13.324 - 13.382: 96.1246% ( 15) 00:19:20.787 13.382 - 13.440: 96.2945% ( 21) 00:19:20.787 13.440 - 13.498: 96.4239% ( 16) 00:19:20.787 13.498 - 13.556: 96.5372% ( 14) 00:19:20.787 13.556 - 13.614: 96.6667% ( 16) 00:19:20.787 13.614 - 13.673: 96.7233% ( 7) 00:19:20.787 13.673 - 13.731: 96.7718% ( 6) 00:19:20.787 13.731 - 13.789: 96.9013% ( 16) 00:19:20.787 13.789 - 13.847: 97.0227% ( 15) 00:19:20.787 13.847 - 13.905: 97.1117% ( 11) 00:19:20.787 13.905 - 13.964: 97.2411% ( 16) 00:19:20.787 13.964 - 14.022: 97.3139% ( 9) 00:19:20.787 14.022 - 14.080: 97.4110% ( 12) 00:19:20.787 14.080 - 14.138: 97.5081% ( 12) 00:19:20.787 14.138 - 14.196: 97.5809% ( 9) 00:19:20.787 14.196 - 14.254: 97.6375% ( 7) 00:19:20.787 14.254 - 14.313: 97.7265% ( 11) 00:19:20.787 14.313 - 14.371: 97.8074% ( 10) 00:19:20.787 14.371 - 14.429: 97.8803% ( 9) 00:19:20.787 14.429 - 14.487: 97.9531% ( 9) 00:19:20.787 14.487 - 14.545: 98.0178% ( 8) 00:19:20.787 14.545 - 14.604: 98.1068% ( 11) 00:19:20.787 14.604 - 14.662: 98.1715% ( 8) 00:19:20.787 14.662 - 14.720: 98.1877% ( 2) 00:19:20.787 14.720 - 14.778: 98.2524% ( 8) 00:19:20.787 14.778 - 14.836: 98.3172% ( 8) 00:19:20.787 14.836 - 14.894: 98.3495% ( 4) 00:19:20.787 14.894 - 15.011: 98.4385% ( 11) 00:19:20.787 15.011 - 15.127: 98.5356% ( 12) 00:19:20.787 15.127 - 15.244: 98.6570% ( 15) 00:19:20.787 15.244 - 15.360: 
98.7217% ( 8) 00:19:20.787 15.360 - 15.476: 98.7621% ( 5) 00:19:20.787 15.476 - 15.593: 98.7864% ( 3) 00:19:20.787 15.593 - 15.709: 98.8269% ( 5) 00:19:20.787 15.709 - 15.825: 98.8835% ( 7) 00:19:20.787 15.825 - 15.942: 98.9320% ( 6) 00:19:20.787 15.942 - 16.058: 98.9887% ( 7) 00:19:20.787 16.058 - 16.174: 99.0129% ( 3) 00:19:20.787 16.174 - 16.291: 99.0615% ( 6) 00:19:20.787 16.291 - 16.407: 99.1262% ( 8) 00:19:20.787 16.407 - 16.524: 99.1424% ( 2) 00:19:20.787 16.524 - 16.640: 99.1667% ( 3) 00:19:20.787 16.640 - 16.756: 99.1990% ( 4) 00:19:20.787 16.756 - 16.873: 99.2314% ( 4) 00:19:20.787 16.873 - 16.989: 99.2718% ( 5) 00:19:20.787 16.989 - 17.105: 99.3123% ( 5) 00:19:20.787 17.105 - 17.222: 99.3528% ( 5) 00:19:20.787 17.222 - 17.338: 99.3932% ( 5) 00:19:20.787 17.454 - 17.571: 99.4094% ( 2) 00:19:20.787 17.571 - 17.687: 99.4175% ( 1) 00:19:20.787 17.687 - 17.804: 99.4337% ( 2) 00:19:20.787 17.804 - 17.920: 99.4417% ( 1) 00:19:20.787 18.269 - 18.385: 99.4660% ( 3) 00:19:20.787 18.385 - 18.502: 99.4822% ( 2) 00:19:20.787 18.502 - 18.618: 99.4984% ( 2) 00:19:20.787 18.618 - 18.734: 99.5146% ( 2) 00:19:20.787 18.734 - 18.851: 99.5307% ( 2) 00:19:20.787 18.967 - 19.084: 99.5469% ( 2) 00:19:20.787 19.084 - 19.200: 99.5550% ( 1) 00:19:20.787 19.200 - 19.316: 99.5712% ( 2) 00:19:20.787 19.316 - 19.433: 99.5874% ( 2) 00:19:20.787 19.665 - 19.782: 99.5955% ( 1) 00:19:20.787 19.782 - 19.898: 99.6036% ( 1) 00:19:20.787 20.014 - 20.131: 99.6117% ( 1) 00:19:20.787 20.247 - 20.364: 99.6197% ( 1) 00:19:20.787 20.364 - 20.480: 99.6278% ( 1) 00:19:20.787 20.829 - 20.945: 99.6359% ( 1) 00:19:20.787 20.945 - 21.062: 99.6440% ( 1) 00:19:20.787 21.062 - 21.178: 99.6521% ( 1) 00:19:20.787 21.178 - 21.294: 99.6602% ( 1) 00:19:20.787 21.294 - 21.411: 99.6683% ( 1) 00:19:20.787 21.411 - 21.527: 99.6926% ( 3) 00:19:20.787 21.644 - 21.760: 99.7006% ( 1) 00:19:20.787 21.760 - 21.876: 99.7087% ( 1) 00:19:20.787 21.876 - 21.993: 99.7168% ( 1) 00:19:20.787 21.993 - 22.109: 99.7573% ( 5) 00:19:20.787 22.109 - 22.225: 99.7654% ( 1) 00:19:20.787 22.342 - 22.458: 99.7735% ( 1) 00:19:20.787 22.458 - 22.574: 99.7816% ( 1) 00:19:20.787 22.574 - 22.691: 99.7896% ( 1) 00:19:20.787 22.691 - 22.807: 99.7977% ( 1) 00:19:20.787 22.924 - 23.040: 99.8058% ( 1) 00:19:20.787 23.156 - 23.273: 99.8220% ( 2) 00:19:20.787 23.971 - 24.087: 99.8301% ( 1) 00:19:20.787 24.436 - 24.553: 99.8382% ( 1) 00:19:20.787 24.553 - 24.669: 99.8544% ( 2) 00:19:20.787 25.367 - 25.484: 99.8625% ( 1) 00:19:20.787 25.716 - 25.833: 99.8786% ( 2) 00:19:20.787 25.833 - 25.949: 99.8867% ( 1) 00:19:20.787 26.298 - 26.414: 99.8948% ( 1) 00:19:20.787 26.531 - 26.647: 99.9029% ( 1) 00:19:20.787 27.811 - 27.927: 99.9110% ( 1) 00:19:20.787 28.044 - 28.160: 99.9191% ( 1) 00:19:20.787 28.858 - 28.974: 99.9272% ( 1) 00:19:20.787 29.324 - 29.440: 99.9353% ( 1) 00:19:20.787 30.254 - 30.487: 99.9434% ( 1) 00:19:20.787 33.280 - 33.513: 99.9515% ( 1) 00:19:20.787 33.978 - 34.211: 99.9595% ( 1) 00:19:20.787 34.676 - 34.909: 99.9676% ( 1) 00:19:20.787 35.840 - 36.073: 99.9757% ( 1) 00:19:20.787 37.469 - 37.702: 99.9838% ( 1) 00:19:20.787 37.702 - 37.934: 99.9919% ( 1) 00:19:20.787 50.269 - 50.502: 100.0000% ( 1) 00:19:20.787 00:19:20.787 00:19:20.787 real 0m1.627s 00:19:20.787 user 0m1.007s 00:19:20.787 sys 0m0.619s 00:19:20.787 22:01:21 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:20.787 22:01:21 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 ************************************ 00:19:20.787 END TEST nvme_overhead 
00:19:20.787 ************************************ 00:19:20.787 22:01:21 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:20.787 22:01:21 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:19:20.787 22:01:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:20.787 22:01:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 ************************************ 00:19:20.787 START TEST nvme_arbitration 00:19:20.787 ************************************ 00:19:20.787 22:01:21 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:21.045 EAL: TSC is not safe to use in SMP mode 00:19:21.045 EAL: TSC is not invariant 00:19:21.045 [2024-05-14 22:01:21.585242] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:25.226 Initializing NVMe Controllers 00:19:25.226 Attaching to 0000:00:10.0 00:19:25.226 Attached to 0000:00:10.0 00:19:25.226 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:25.226 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:19:25.226 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:19:25.226 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:19:25.226 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:25.226 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:25.226 Initialization complete. Launching workers. 00:19:25.226 Starting thread on core 1 with urgent priority queue 00:19:25.226 Starting thread on core 2 with urgent priority queue 00:19:25.226 Starting thread on core 3 with urgent priority queue 00:19:25.226 Starting thread on core 0 with urgent priority queue 00:19:25.226 QEMU NVMe Ctrl (12340 ) core 0: 5632.00 IO/s 17.76 secs/100000 ios 00:19:25.226 QEMU NVMe Ctrl (12340 ) core 1: 5743.00 IO/s 17.41 secs/100000 ios 00:19:25.226 QEMU NVMe Ctrl (12340 ) core 2: 5788.00 IO/s 17.28 secs/100000 ios 00:19:25.226 QEMU NVMe Ctrl (12340 ) core 3: 5638.67 IO/s 17.73 secs/100000 ios 00:19:25.226 ======================================================== 00:19:25.226 00:19:25.226 00:19:25.226 real 0m4.246s 00:19:25.226 user 0m12.686s 00:19:25.226 sys 0m0.593s 00:19:25.226 22:01:25 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:25.226 22:01:25 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:25.226 ************************************ 00:19:25.226 END TEST nvme_arbitration 00:19:25.226 ************************************ 00:19:25.226 22:01:25 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:25.226 22:01:25 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:25.226 22:01:25 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:25.226 22:01:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.226 ************************************ 00:19:25.226 START TEST nvme_single_aen 00:19:25.226 ************************************ 00:19:25.226 22:01:25 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:25.485 EAL: TSC is not safe to use in SMP mode 00:19:25.485 EAL: TSC is not invariant 00:19:25.485 [2024-05-14 22:01:25.867543] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux 
does not support this operation 00:19:25.485 Asynchronous Event Request test 00:19:25.485 Attaching to 0000:00:10.0 00:19:25.485 Attached to 0000:00:10.0 00:19:25.485 Reset controller to setup AER completions for this process 00:19:25.485 Registering asynchronous event callbacks... 00:19:25.485 Getting orig temperature thresholds of all controllers 00:19:25.485 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:25.485 Setting all controllers temperature threshold low to trigger AER 00:19:25.485 Waiting for all controllers temperature threshold to be set lower 00:19:25.485 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:25.485 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:25.485 Waiting for all controllers to trigger AER and reset threshold 00:19:25.485 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:25.485 Cleaning up... 00:19:25.485 00:19:25.485 real 0m0.578s 00:19:25.485 user 0m0.017s 00:19:25.485 sys 0m0.559s 00:19:25.485 22:01:25 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:25.485 22:01:25 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:25.485 ************************************ 00:19:25.485 END TEST nvme_single_aen 00:19:25.485 ************************************ 00:19:25.485 22:01:25 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:25.485 22:01:25 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:25.485 22:01:25 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:25.485 22:01:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.485 ************************************ 00:19:25.485 START TEST nvme_doorbell_aers 00:19:25.485 ************************************ 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:25.485 22:01:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:26.052 EAL: TSC is not safe to use in SMP mode 00:19:26.052 EAL: TSC is not invariant 00:19:26.052 [2024-05-14 22:01:26.540355] pci_event.c: 
228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:26.052 Executing: test_write_invalid_db 00:19:26.052 Waiting for AER completion... 00:19:26.052 Asynchronous Event received. 00:19:26.052 Error Information Log Page received. 00:19:26.052 Success: test_write_invalid_db 00:19:26.052 00:19:26.052 Executing: test_invalid_db_write_overflow_sq 00:19:26.052 Waiting for AER completion... 00:19:26.052 Asynchronous Event received. 00:19:26.052 Error Information Log Page received. 00:19:26.052 Success: test_invalid_db_write_overflow_sq 00:19:26.052 00:19:26.052 Executing: test_invalid_db_write_overflow_cq 00:19:26.052 Waiting for AER completion... 00:19:26.052 Asynchronous Event received. 00:19:26.052 Error Information Log Page received. 00:19:26.052 Success: test_invalid_db_write_overflow_cq 00:19:26.052 00:19:26.052 00:19:26.052 real 0m0.640s 00:19:26.052 user 0m0.036s 00:19:26.052 sys 0m0.619s 00:19:26.052 22:01:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:26.052 ************************************ 00:19:26.052 END TEST nvme_doorbell_aers 00:19:26.052 ************************************ 00:19:26.052 22:01:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:19:26.052 22:01:26 nvme -- nvme/nvme.sh@97 -- # uname 00:19:26.052 22:01:26 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:19:26.052 22:01:26 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:26.052 22:01:26 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:26.052 22:01:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:26.052 22:01:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:26.052 ************************************ 00:19:26.052 START TEST bdev_nvme_reset_stuck_adm_cmd 00:19:26.052 ************************************ 00:19:26.052 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:26.334 * Looking for test storage... 
00:19:26.334 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66918 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66918 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 66918 ']' 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:19:26.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
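At this point the test has launched spdk_tgt (build/bin/spdk_tgt -m 0xF, pid 66918) and is blocked in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that kind of wait loop follows, assuming it simply polls the RPC server with scripts/rpc.py; the real helper in autotest_common.sh may do more (retry limits, error reporting) than shown here.

    # Sketch only: poll until the SPDK target answers RPC or give up.
    # rpc.py and /var/tmp/spdk.sock appear in the trace; the loop itself is an assumption.
    rootdir=/usr/home/vagrant/spdk_repo/spdk
    wait_for_rpc() {
        local pid=$1 retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # give up if the target process died
            if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
                return 0                              # RPC server is listening
            fi
            sleep 0.5
        done
        return 1
    }
    # Example: wait_for_rpc 66918 || echo 'spdk_tgt did not come up in time'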
00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:26.334 22:01:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:26.334 [2024-05-14 22:01:26.878478] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:19:26.334 [2024-05-14 22:01:26.878660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:26.954 EAL: TSC is not safe to use in SMP mode 00:19:26.954 EAL: TSC is not invariant 00:19:26.954 [2024-05-14 22:01:27.450181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.954 [2024-05-14 22:01:27.534497] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:26.954 [2024-05-14 22:01:27.534562] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:26.954 [2024-05-14 22:01:27.534571] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:19:26.954 [2024-05-14 22:01:27.534579] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:19:26.954 [2024-05-14 22:01:27.538883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.954 [2024-05-14 22:01:27.538657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.954 [2024-05-14 22:01:27.538770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.954 [2024-05-14 22:01:27.538878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:27.533 [2024-05-14 22:01:27.920636] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:27.533 nvme0n1 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:27.533 true 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.533 22:01:27 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715724087 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66930 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:19:27.533 22:01:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:30.062 [2024-05-14 22:01:30.065828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:30.062 [2024-05-14 22:01:30.066107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.062 [2024-05-14 22:01:30.066137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:30.062 [2024-05-14 22:01:30.066148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.062 [2024-05-14 22:01:30.067126] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
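With the stuck admin command completed by the controller reset, the test now checks that the returned completion carries the injected status. The commands traced just below pull the base64-encoded 16-byte completion out of /tmp/err_inj_XXXXX.txt with jq and hand it to base64_decode_bits, which must yield sct=0x0 and sc=0x1 to match the values injected earlier. A standalone sketch of the byte-dump step is given here, using only commands that appear in that trace; the bit slicing into sct and sc is left to the real helper.

    # Dump the 16 completion bytes returned by bdev_nvme_send_cmd, one hex byte per line.
    # The jq / base64 / hexdump pipeline is copied from the base64_decode_bits trace below;
    # cpl_b64 is only an illustrative variable name.
    cpl_b64=$(jq -r .cpl /tmp/err_inj_XXXXX.txt)      # e.g. AAAAAAAAAAAAAAAAAAACAA==
    base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "0x%02x\n"'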
00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.062 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66930 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66930 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66930 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.062 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.t1uy4p 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.nGcBl7 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66918 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 66918 ']' 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 66918 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps -c -o command 66918 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # tail -1 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:19:30.063 killing process with pid 66918 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66918' 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 66918 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 66918 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:19:30.063 00:19:30.063 real 0m3.811s 00:19:30.063 user 0m12.078s 00:19:30.063 sys 0m0.894s 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:30.063 ************************************ 00:19:30.063 END TEST bdev_nvme_reset_stuck_adm_cmd 00:19:30.063 ************************************ 00:19:30.063 22:01:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 22:01:30 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:19:30.063 22:01:30 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:19:30.063 22:01:30 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:30.063 22:01:30 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:30.063 22:01:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 ************************************ 00:19:30.063 START TEST nvme_fio 00:19:30.063 ************************************ 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:19:30.063 22:01:30 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:30.063 22:01:30 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:19:30.063 22:01:30 
nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:19:30.063 22:01:30 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:19:30.063 22:01:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:19:30.063 22:01:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:19:30.063 22:01:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:19:30.063 22:01:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:30.063 22:01:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:19:30.629 EAL: TSC is not safe to use in SMP mode 00:19:30.629 EAL: TSC is not invariant 00:19:30.629 [2024-05-14 22:01:31.101238] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:30.629 22:01:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:19:30.629 22:01:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:31.195 EAL: TSC is not safe to use in SMP mode 00:19:31.195 EAL: TSC is not invariant 00:19:31.195 [2024-05-14 22:01:31.715262] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:31.195 22:01:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:19:31.195 22:01:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 
00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:31.195 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:31.196 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:31.196 22:01:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:31.453 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:31.453 fio-3.35 00:19:31.453 Starting 1 thread 00:19:32.019 EAL: TSC is not safe to use in SMP mode 00:19:32.019 EAL: TSC is not invariant 00:19:32.019 [2024-05-14 22:01:32.430575] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:34.545 00:19:34.545 test: (groupid=0, jobs=1): err= 0: pid=102840: Tue May 14 22:01:34 2024 00:19:34.545 read: IOPS=44.2k, BW=172MiB/s (181MB/s)(345MiB/2001msec) 00:19:34.545 slat (nsec): min=451, max=28158, avg=593.94, stdev=360.80 00:19:34.545 clat (usec): min=292, max=3812, avg=1448.82, stdev=289.27 00:19:34.545 lat (usec): min=292, max=3822, avg=1449.41, stdev=289.29 00:19:34.545 clat percentiles (usec): 00:19:34.545 | 1.00th=[ 660], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1237], 00:19:34.545 | 30.00th=[ 1319], 40.00th=[ 1369], 50.00th=[ 1418], 60.00th=[ 1483], 00:19:34.545 | 70.00th=[ 1532], 80.00th=[ 1614], 90.00th=[ 1762], 95.00th=[ 1975], 00:19:34.545 | 99.00th=[ 2376], 99.50th=[ 2540], 99.90th=[ 3195], 99.95th=[ 3490], 00:19:34.545 | 99.99th=[ 3621] 00:19:34.545 bw ( KiB/s): min=171549, max=185833, per=100.00%, avg=180275.67, stdev=7651.25, samples=3 00:19:34.545 iops : min=42887, max=46458, avg=45068.67, stdev=1912.81, samples=3 00:19:34.545 write: IOPS=44.0k, BW=172MiB/s (180MB/s)(344MiB/2001msec); 0 zone resets 00:19:34.545 slat (nsec): min=478, max=29402, avg=844.00, stdev=846.75 00:19:34.545 clat (usec): min=288, max=3744, avg=1449.34, stdev=290.31 00:19:34.545 lat (usec): min=292, max=3748, avg=1450.18, stdev=290.33 00:19:34.545 clat percentiles (usec): 00:19:34.545 | 1.00th=[ 668], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1237], 00:19:34.545 | 30.00th=[ 1319], 40.00th=[ 1369], 50.00th=[ 1418], 60.00th=[ 1483], 00:19:34.545 | 70.00th=[ 1532], 80.00th=[ 1631], 90.00th=[ 1762], 95.00th=[ 1975], 00:19:34.545 | 99.00th=[ 2376], 99.50th=[ 2573], 99.90th=[ 3294], 99.95th=[ 3458], 00:19:34.545 | 99.99th=[ 3589] 00:19:34.545 bw ( KiB/s): min=171130, max=185317, per=100.00%, avg=179364.67, stdev=7363.73, samples=3 00:19:34.545 iops : min=42782, max=46329, avg=44840.67, stdev=1840.99, samples=3 00:19:34.545 lat (usec) : 500=0.53%, 750=0.73%, 1000=1.00% 00:19:34.545 lat (msec) : 2=93.24%, 4=4.50% 00:19:34.545 cpu : usr=99.95%, sys=0.00%, ctx=26, majf=0, minf=2 00:19:34.545 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:34.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.545 issued rwts: total=88361,88119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.545 00:19:34.545 Run status group 0 (all jobs): 00:19:34.545 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=345MiB (362MB), run=2001-2001msec 00:19:34.545 WRITE: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=344MiB (361MB), run=2001-2001msec 00:19:35.479 22:01:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:19:35.479 22:01:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:19:35.479 00:19:35.479 real 0m5.414s 00:19:35.479 user 0m2.507s 00:19:35.479 sys 0m2.840s 00:19:35.479 ************************************ 00:19:35.479 END TEST nvme_fio 00:19:35.479 ************************************ 00:19:35.479 22:01:35 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:35.479 22:01:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:19:35.479 00:19:35.479 real 0m26.447s 00:19:35.479 user 0m31.417s 00:19:35.479 sys 0m13.080s 00:19:35.479 22:01:35 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:35.479 ************************************ 00:19:35.479 END TEST nvme 00:19:35.479 ************************************ 00:19:35.479 22:01:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:35.479 22:01:35 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:19:35.479 22:01:35 -- spdk/autotest.sh@217 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:35.479 22:01:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:35.479 22:01:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:35.479 22:01:35 -- common/autotest_common.sh@10 -- # set +x 00:19:35.479 ************************************ 00:19:35.479 START TEST nvme_scc 00:19:35.479 ************************************ 00:19:35.479 22:01:35 nvme_scc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:35.738 * Looking for test storage... 
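
The nvme_fio run above drives fio through SPDK's external ioengine: the harness probes the fio plugin with ldd for an ASan runtime, prepends any hit to LD_PRELOAD together with the plugin itself, and points fio at the controller through a 'trtype=PCIe traddr=...' filename. A minimal sketch of that invocation, assuming the paths seen in this run and condensing the sanitizer probe to a single awk call (not the harness's exact helper):

    # Run fio through the SPDK NVMe external ioengine (illustrative paths).
    plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # If the plugin links against an ASan runtime, that library must be preloaded first.
    asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt.asan/ {print $3; exit}')

    # fio resolves the target through the plugin; the filename encodes the PCIe address.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio "$job" '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
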
00:19:35.738 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:35.738 22:01:36 nvme_scc -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.738 22:01:36 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.738 22:01:36 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.738 22:01:36 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.738 22:01:36 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:35.738 22:01:36 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:35.738 22:01:36 nvme_scc -- paths/export.sh@4 -- # export PATH 00:19:35.738 22:01:36 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:19:35.738 22:01:36 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:19:35.738 22:01:36 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.738 22:01:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:19:35.738 22:01:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:19:35.738 22:01:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:19:35.738 00:19:35.738 real 0m0.190s 00:19:35.738 user 0m0.143s 00:19:35.738 sys 0m0.127s 00:19:35.738 22:01:36 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:35.738 22:01:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:19:35.738 ************************************ 00:19:35.738 END TEST nvme_scc 00:19:35.738 ************************************ 00:19:35.738 22:01:36 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:19:35.738 22:01:36 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:19:35.738 22:01:36 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:19:35.738 22:01:36 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:19:35.738 22:01:36 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 
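
Both nvme_fio above and the nvme_rpc test that starts next locate their controllers the same way: scripts/gen_nvme.sh emits an SPDK JSON config for the local NVMe devices, and jq pulls out each PCIe address (traddr). A small sketch of that enumeration, using the rootdir from this run:

    # Enumerate NVMe PCIe addresses (BDFs) from the generated SPDK config.
    rootdir=/usr/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    # Fail early if the VM exposes no NVMe devices, otherwise print them.
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # e.g. 0000:00:10.0 in this run
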
00:19:35.738 22:01:36 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:35.738 22:01:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:35.738 22:01:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:35.738 22:01:36 -- common/autotest_common.sh@10 -- # set +x 00:19:35.738 ************************************ 00:19:35.738 START TEST nvme_rpc 00:19:35.738 ************************************ 00:19:35.738 22:01:36 nvme_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:35.996 * Looking for test storage... 00:19:35.996 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:35.996 22:01:36 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.996 22:01:36 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:19:35.996 22:01:36 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:19:35.996 22:01:36 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67172 00:19:35.996 22:01:36 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:35.996 22:01:36 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:19:35.996 22:01:36 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67172 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 67172 ']' 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:35.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:35.996 22:01:36 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:35.996 [2024-05-14 22:01:36.440112] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:19:35.996 [2024-05-14 22:01:36.440337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:36.560 EAL: TSC is not safe to use in SMP mode 00:19:36.560 EAL: TSC is not invariant 00:19:36.560 [2024-05-14 22:01:36.961378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:36.560 [2024-05-14 22:01:37.052525] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:36.560 [2024-05-14 22:01:37.052591] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:36.560 [2024-05-14 22:01:37.055461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.560 [2024-05-14 22:01:37.055451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.125 22:01:37 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:37.125 22:01:37 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:19:37.125 22:01:37 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:19:37.383 [2024-05-14 22:01:37.750549] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:37.383 Nvme0n1 00:19:37.383 22:01:37 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:19:37.383 22:01:37 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:19:37.642 request: 00:19:37.642 { 00:19:37.642 "filename": "non_existing_file", 00:19:37.642 "bdev_name": "Nvme0n1", 00:19:37.642 "method": "bdev_nvme_apply_firmware", 00:19:37.642 "req_id": 1 00:19:37.642 } 00:19:37.642 Got JSON-RPC error response 00:19:37.642 response: 00:19:37.642 { 00:19:37.642 "code": -32603, 00:19:37.642 "message": "open file failed." 
00:19:37.642 } 00:19:37.642 22:01:38 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:19:37.642 22:01:38 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:19:37.642 22:01:38 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:37.945 22:01:38 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:37.945 22:01:38 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67172 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 67172 ']' 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 67172 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 67172 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@954 -- # tail -1 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:19:37.945 killing process with pid 67172 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67172' 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@965 -- # kill 67172 00:19:37.945 22:01:38 nvme_rpc -- common/autotest_common.sh@970 -- # wait 67172 00:19:38.213 00:19:38.213 real 0m2.436s 00:19:38.213 user 0m4.527s 00:19:38.213 sys 0m0.828s 00:19:38.213 22:01:38 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:38.213 22:01:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.213 ************************************ 00:19:38.213 END TEST nvme_rpc 00:19:38.213 ************************************ 00:19:38.213 22:01:38 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:38.213 22:01:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:38.213 22:01:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:38.213 22:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:38.213 ************************************ 00:19:38.213 START TEST nvme_rpc_timeouts 00:19:38.213 ************************************ 00:19:38.213 22:01:38 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:38.471 * Looking for test storage... 
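
Each RPC test ends by tearing down spdk_tgt with killprocess. On FreeBSD the helper cannot read /proc, so it resolves the command name with ps -c before killing and waiting on the pid. A condensed sketch of that pattern; the sudo branch is an assumption about intent, not copied from the helper:

    # Condensed killprocess pattern (FreeBSD branch), as traced above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # bail out if the process is already gone
        # FreeBSD has no /proc/<pid>/comm; ask ps for the bare command name instead.
        local name
        name=$(ps -c -o command "$pid" | tail -1)
        echo "killing process with pid $pid"
        if [[ $name == sudo ]]; then
            sudo kill "$pid"                       # assumption: escalate if the target runs under sudo
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true
    }
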
00:19:38.471 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:38.471 22:01:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.471 22:01:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67209 00:19:38.471 22:01:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67209 00:19:38.471 22:01:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67237 00:19:38.471 22:01:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:38.471 22:01:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:19:38.471 22:01:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67237 00:19:38.471 22:01:38 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 67237 ']' 00:19:38.471 22:01:38 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.471 22:01:38 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:38.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.471 22:01:38 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.471 22:01:38 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:38.471 22:01:38 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:38.471 [2024-05-14 22:01:38.871949] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:19:38.471 [2024-05-14 22:01:38.872198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:39.038 EAL: TSC is not safe to use in SMP mode 00:19:39.038 EAL: TSC is not invariant 00:19:39.038 [2024-05-14 22:01:39.425786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:39.038 [2024-05-14 22:01:39.548850] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:39.038 [2024-05-14 22:01:39.548974] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:19:39.038 [2024-05-14 22:01:39.552842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.038 [2024-05-14 22:01:39.552823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.603 22:01:39 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:39.603 Checking default timeout settings: 00:19:39.603 22:01:39 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:19:39.603 22:01:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:19:39.603 22:01:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:39.861 Making settings changes with rpc: 00:19:39.861 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:19:39.861 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:19:40.118 Check default vs. modified settings: 00:19:40.118 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:19:40.118 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67209 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67209 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:19:40.376 Setting action_on_timeout is changed as expected. 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67209 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67209 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:19:40.376 Setting timeout_us is changed as expected. 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67209 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67209 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:19:40.376 Setting timeout_admin_us is changed as expected. 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
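
The three checks above share one pattern: save_config is dumped to a temp file before and after bdev_nvme_set_options, each setting's value is extracted with grep/awk and stripped of punctuation with sed, and the modified value must differ from the default. A condensed sketch of that flow, reusing the file names and RPC arguments from this run:

    rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    defaults=/tmp/settings_default_67209
    modified=/tmp/settings_modified_67209

    "$rpc_py" save_config > "$defaults"
    "$rpc_py" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 \
        --action-on-timeout=abort
    "$rpc_py" save_config > "$modified"

    # Pull one setting's value out of the saved JSON and strip everything
    # that is not alphanumeric, the same way the test does with sed.
    get_setting() {
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(get_setting "$setting" "$defaults")
        after=$(get_setting "$setting" "$modified")
        [[ $before == "$after" ]] && { echo "Setting $setting was not changed"; exit 1; }
        echo "Setting $setting is changed as expected."
    done
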
00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67209 /tmp/settings_modified_67209 00:19:40.376 22:01:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67237 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 67237 ']' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 67237 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # tail -1 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps -c -o command 67237 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:19:40.376 killing process with pid 67237 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67237' 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 67237 00:19:40.376 22:01:40 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 67237 00:19:40.633 RPC TIMEOUT SETTING TEST PASSED. 00:19:40.633 22:01:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:19:40.633 00:19:40.633 real 0m2.524s 00:19:40.633 user 0m4.664s 00:19:40.633 sys 0m0.899s 00:19:40.633 22:01:41 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:40.633 22:01:41 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:40.633 ************************************ 00:19:40.633 END TEST nvme_rpc_timeouts 00:19:40.633 ************************************ 00:19:40.891 22:01:41 -- spdk/autotest.sh@239 -- # uname -s 00:19:40.891 22:01:41 -- spdk/autotest.sh@239 -- # '[' FreeBSD = Linux ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]] 00:19:40.891 22:01:41 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:40.891 22:01:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.891 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.891 22:01:41 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:19:40.891 22:01:41 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:19:40.891 22:01:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 
00:19:40.891 22:01:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:19:40.891 22:01:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:19:40.891 22:01:41 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:19:40.891 22:01:41 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:19:40.891 22:01:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:40.891 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.891 22:01:41 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:19:40.891 22:01:41 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:19:40.891 22:01:41 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:19:40.891 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:41.457 setup.sh cleanup function not yet supported on FreeBSD 00:19:41.457 22:01:41 -- common/autotest_common.sh@1447 -- # return 0 00:19:41.457 22:01:41 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:19:41.457 22:01:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.457 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:41.457 22:01:41 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:19:41.457 22:01:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.457 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:41.457 22:01:41 -- spdk/autotest.sh@383 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:41.457 22:01:41 -- spdk/autotest.sh@385 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:41.457 22:01:41 -- spdk/autotest.sh@387 -- # hash lcov 00:19:41.457 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 387: hash: lcov: not found 00:19:41.457 22:01:42 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.457 22:01:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:41.457 22:01:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.457 22:01:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.457 22:01:42 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:41.457 22:01:42 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:41.457 22:01:42 -- paths/export.sh@4 -- $ export PATH 00:19:41.457 22:01:42 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:41.457 22:01:42 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:19:41.715 22:01:42 -- common/autobuild_common.sh@437 -- $ date +%s 00:19:41.715 22:01:42 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715724102.XXXXXX 00:19:41.715 22:01:42 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715724102.XXXXXX.AuRFe1i9 00:19:41.715 22:01:42 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:19:41.715 22:01:42 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:19:41.715 22:01:42 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:19:41.715 22:01:42 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp' 00:19:41.715 22:01:42 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:41.715 22:01:42 -- common/autobuild_common.sh@453 -- $ get_config_params 00:19:41.715 22:01:42 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:19:41.715 22:01:42 -- common/autotest_common.sh@10 -- $ set +x 00:19:41.715 22:01:42 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:19:41.715 22:01:42 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:19:41.715 22:01:42 -- pm/common@17 -- $ local monitor 00:19:41.715 22:01:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:41.716 22:01:42 -- pm/common@25 -- $ sleep 1 00:19:41.716 22:01:42 -- pm/common@21 -- $ date +%s 00:19:41.716 22:01:42 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715724102 00:19:41.716 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715724102_collect-vmstat.pm.log 00:19:42.652 22:01:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:19:42.652 22:01:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:42.652 22:01:43 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:19:42.652 22:01:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:42.652 22:01:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:19:42.652 22:01:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:19:42.652 22:01:43 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:42.652 22:01:43 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:19:42.652 22:01:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:19:42.652 22:01:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:19:42.652 22:01:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:42.652 22:01:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:42.652 22:01:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:42.652 22:01:43 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:42.652 22:01:43 -- pm/common@44 -- $ pid=67458 00:19:42.652 22:01:43 -- pm/common@50 -- $ kill -TERM 67458 00:19:42.652 + [[ -n 1293 ]] 00:19:42.652 + sudo kill 1293 00:19:42.919 [Pipeline] } 00:19:42.938 [Pipeline] // timeout 00:19:42.956 [Pipeline] } 00:19:42.998 [Pipeline] // stage 00:19:43.001 [Pipeline] } 00:19:43.010 [Pipeline] // catchError 00:19:43.016 [Pipeline] stage 00:19:43.017 [Pipeline] { (Stop VM) 00:19:43.025 [Pipeline] sh 00:19:43.297 + vagrant halt 00:19:47.477 ==> default: Halting domain... 00:20:09.403 [Pipeline] sh 00:20:09.733 + vagrant destroy -f 00:20:13.010 ==> default: Removing domain... 
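
The autopackage teardown above stops the vmstat collector through a pid file: the monitor records its pid under the power output directory and stop_monitor_resources sends SIGTERM to whatever pid it finds there. A minimal sketch of that shutdown; the glob over collect-*.pid and the rm are assumptions for illustration, while the real helper walks its MONITOR_RESOURCES list:

    # Stop resource monitors by reading their pid files and sending SIGTERM.
    power_dir=/usr/home/vagrant/spdk_repo/spdk/../output/power

    for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue      # this monitor was never started
        pid=$(<"$pidfile")
        echo "stopping monitor with pid $pid"
        kill -TERM "$pid" 2>/dev/null || true
        rm -f "$pidfile"                   # assumption: tidy up the pid file afterwards
    done
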
00:20:13.320 [Pipeline] sh 00:20:13.599 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output 00:20:13.608 [Pipeline] } 00:20:13.626 [Pipeline] // stage 00:20:13.631 [Pipeline] } 00:20:13.647 [Pipeline] // dir 00:20:13.653 [Pipeline] } 00:20:13.671 [Pipeline] // wrap 00:20:13.676 [Pipeline] } 00:20:13.721 [Pipeline] // catchError 00:20:13.726 [Pipeline] stage 00:20:13.727 [Pipeline] { (Epilogue) 00:20:13.736 [Pipeline] sh 00:20:14.009 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:14.020 [Pipeline] catchError 00:20:14.022 [Pipeline] { 00:20:14.036 [Pipeline] sh 00:20:14.313 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:14.313 Artifacts sizes are good 00:20:14.321 [Pipeline] } 00:20:14.336 [Pipeline] // catchError 00:20:14.346 [Pipeline] archiveArtifacts 00:20:14.352 Archiving artifacts 00:20:14.395 [Pipeline] cleanWs 00:20:14.406 [WS-CLEANUP] Deleting project workspace... 00:20:14.406 [WS-CLEANUP] Deferred wipeout is used... 00:20:14.412 [WS-CLEANUP] done 00:20:14.413 [Pipeline] } 00:20:14.429 [Pipeline] // stage 00:20:14.434 [Pipeline] } 00:20:14.450 [Pipeline] // node 00:20:14.455 [Pipeline] End of Pipeline 00:20:14.485 Finished: SUCCESS